A Robotic Wheelchair Based on the Integration of Human and Environmental Observations: Look Where You're Going


By Yoshinori Kuno, Nobutaka Shimada, and Yoshiaki Shirai

With the increase in the number of senior citizens, there is a growing demand for human-friendly wheelchairs as mobility aids. Recently, robotic/intelligent wheelchairs have been proposed to meet this need [1]-[5]. The most important evaluation factors for such wheelchairs are safety and ease of operation. Providing autonomy is a way to improve both factors. The avoidance of obstacles using infrared, ultrasonic, vision, and other sensors has been investigated. The ultimate goal is a wheelchair that will take users automatically to a destination that they specify, and such autonomous capability has also been studied. However, in addition to going to certain designated places, we often want to move about freely. In this case, a good human interface becomes the key factor. Instead of a joystick, as on conventional power wheelchairs, voice can be used to issue commands [3], [5]. In the Wheelesley robotic wheelchair system [4], the user can control the system by selecting a command icon on the CRT screen by gaze; eye movements are measured through five electrodes placed on the user's head.

Although autonomous capabilities are important, the techniques they require are largely common with those developed for autonomous mobile robots. Thus, we concentrate on the human-interface issue in our research and implement conventional autonomous capabilities, with the necessary modifications, to realize an actual working system.

We need environmental information to realize autonomous capabilities. Static information can be given by maps; vision, ultrasonic, and other sensors are used to obtain current information. For the human interface, on the other hand, we need information about the user. Although this information is obtained through joystick operation in conventional power wheelchairs, we would like to use much simpler actions to realize a user-friendly interface. This requires the wheelchair to observe the user's actions to understand his/her intentions. However, simple actions tend to be noisy; even if users do not intend to issue commands, they may make movements that look like command actions to the wheelchair. To solve this problem, we propose to integrate the observation of the user with the environmental observations that are usually used to realize autonomy.

In our proposed robotic wheelchair, we choose face direction to convey the user's intentions to the system. When users want to move in a certain direction, it is natural for them to look in that direction. Using face direction to control motion has another merit: when the user wants to turn to the left or right, he/she needs to look in that direction by turning the head intentionally. As the turn nears completion, the user turns his/her head back to the frontal position to adjust the wheelchair's direction; this behavior is so natural that it happens almost unconsciously.

If the user uses a steering lever or wheel to control the vehicle's motion, he/she needs to move it consciously all the time. The interface by face direction can therefore be friendlier to the user than such conventional methods. As mentioned before, however, the problem is that such natural motions are noisy: users may move their heads even when they do not intend to make turns. At first, we tried to solve the problem simply by ignoring quick movements, since we can expect turning behaviors to be intentional and, thus, slow and steady [6]. However, this simple method is not enough. For example, although it is rather natural for humans to look at obstacles when they come close, the previous system would turn toward the obstacles.

This article presents our new, modified robotic wheelchair. The key point of the research is the integration of the face-direction interface with autonomous capabilities; in other words, the system combines the information obtained by observing the user with that obtained by observing the environment. This integration means more than having these two kinds of functions separately: the system uses the sensor information obtained for autonomous navigation to solve the above-mentioned problem with control by face direction, and, if it can understand the user's intentions from observing the face, it chooses an appropriate autonomous navigation function to reduce the user's burden of operation. In addition, we introduce another function, not considered in conventional systems, that is realized by observing the user when he/she is off the wheelchair: the wheelchair can find the user by face recognition and then move according to the user's commands, indicated by hand gestures. This function is useful because people who need a wheelchair may find it difficult to walk to it. This article describes our design concept for the robotic wheelchair and shows our experimental system with operational experiments.

System Design

Figure 1 shows an overview of our robotic wheelchair system [Figure 1: System overview]. As a computing resource, the system has a PC (AMD Athlon, 400 MHz) with a real-time image-processing board, consisting of 256 processors, developed by NEC Corporation [7]. The system has 16 ultrasonic sensors to observe the environment and two video cameras: one camera is set up to observe the environment, while the other looks at the user's face. The system controls its motion based on the sensor data. Figure 2 illustrates how the sensor data is used in the wheelchair [Figure 2: System configuration. The environment-observing camera feeds target tracking; the user-observing camera feeds face-direction computation and face/gesture recognition to estimate the user's intention; the ultrasonic sensors observe the environment. Together these drive the behaviors: follow walls, go straight, turn, and avoid obstacles]. The ultrasonic sensors detect obstacles around the wheelchair. If any obstacles are detected, the wheelchair is controlled to avoid them. To ensure safety, this obstacle-avoidance behavior overrides all other behaviors except those performed manually with the joystick. The system computes the face direction from images of the user-observing camera.
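The behavior priorities just described can be summarized in code. The following is a minimal arbitration sketch, assuming hypothetical command types and behavior names; the article does not give the actual control implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MotionCommand:
    linear: float   # forward speed (m/s)
    angular: float  # turn rate (rad/s)

def arbitrate(joystick: Optional[MotionCommand],
              avoid_obstacle: Optional[MotionCommand],
              face_turn: Optional[MotionCommand],
              go_straight: Optional[MotionCommand]) -> MotionCommand:
    """Return the highest-priority command that is currently active."""
    # Priority: manual joystick > obstacle avoidance > face-direction turn
    # > autonomous straight travel (template tracking / vanishing point / walls).
    for command in (joystick, avoid_obstacle, face_turn, go_straight):
        if command is not None:
            return command
    return MotionCommand(0.0, 0.0)  # no behavior active: stop
```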

Users can convey their intentions to the system by turning their faces. The problem, however, is that users move their heads for various reasons other than controlling the wheelchair's direction. The system needs to discern wheelchair-control behaviors from others; this is the intention-understanding problem in the current system. Our basic assumption is that users move their heads slowly and steadily once they are told that the wheelchair moves in the direction their face points. Thus, the system ignores quick head movements and responds only to slow, steady ones.

There are cases, however, where users look in a certain direction steadily without any intention of controlling the wheelchair. For example, when users notice a poster on a corridor wall, they may look at it by turning their faces in its direction while moving straight. In general, if there are objects close to the user, he/she may tend to look at them for safety and other reasons. In these cases, users usually do not want to turn in those directions, although we cannot exclude the possibility that they really do want to turn toward those objects. To adapt to these situations, the system changes its sensitivity to face turning depending on the environmental data obtained from the ultrasonic sensors. If the ultrasonic sensors detect objects close to the wheelchair in certain directions, the system reduces its sensitivity to face turns in those directions. Thus, the wheelchair will not turn in those directions unless the user looks there steadily.

The environment-observing camera is used for moving straight autonomously. With conventional power wheelchairs, users need to hold the joystick to control the motion all the time, even when they just want to go straight. In addition to the obstacle-avoidance behavior, the current system has an autonomous behavior to reduce the user's burden in such cases. This behavior is initiated by the face-direction observation: if the face looks straight ahead for a while, the system considers that the user wants to go straight and starts the behavior. If it can find a feature that can be tracked around the image center, it controls the wheelchair motion to keep the feature at the image center. When moving in a corridor, it extracts the side edges of the corridor and controls the wheelchair motion to keep the intersection of the edges (the vanishing point) at the same image position. The ultrasonic sensors are also used to help the wheelchair move straight: we have implemented the wall-following behavior developed for behavior-based robots [8], so the wheelchair can follow a corridor by keeping its distance from the wall using the ultrasonic sensor data.

In addition, the environment-observing camera is used to watch the user when he/she is off the wheelchair. People who need a wheelchair may find it difficult to walk to their wheelchair. Thus, our wheelchair has the capability to find the user by face recognition and then recognize his/her hand gestures; the user can make it come or go by hand gestures.

Motion Control by Face Direction

Face Direction Computation

Our previous system used a face-direction computation method based on tracking face features [6]. However, the tracking process sometimes loses the features, so we have devised a new, simple, robust method, as follows. Figure 3 shows each step of the process [Figure 3: Face-direction computation, panels (a)-(d)]. Figure 3(a) is an original image. First, we extract the bright regions in the image [Figure 3(b)] and choose the largest region as the face region [Figure 3(c)]. Then, we extract the dark regions inside the face region, which contain face features such as the eyes, eyebrows, and mouth [Figure 3(d)]. We compare the centroid of the face region with the centroid of all the face features combined. In Figure 3(c) and (d), the vertical lines passing through the centroid of the face region are drawn as thick red lines, and, in Figure 3(d), the vertical line passing through the centroid of the face features is drawn as a thin green line. If the latter lies to the right/left of the former (to the left/right in the image), the face can be considered to be turning right/left. The output of the process is the horizontal distance between the two centroids, which roughly indicates the face direction. We use vertical movements of the face features as the start-stop switch: if the user nods, the wheelchair starts moving; if the user nods again, it stops.
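The method amounts to comparing two centroids on thresholded images. Below is a minimal sketch of that computation using OpenCV; the threshold values and the sign convention are assumptions, as the article does not give the actual parameters.

```python
import cv2
import numpy as np

def face_direction(gray: np.ndarray) -> float:
    """Return a signed value roughly proportional to the horizontal face
    direction, following the centroid-comparison steps of Figure 3."""
    # Step 1: extract bright regions and take the largest as the face region.
    _, bright = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bright)
    if n < 2:
        return 0.0  # no face region found
    face = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # label 0 = background
    face_cx = centroids[face][0]

    # Step 2: extract dark regions (eyes, eyebrows, mouth) inside the face
    # region, approximated here by the face region's bounding box.
    x, y = stats[face, cv2.CC_STAT_LEFT], stats[face, cv2.CC_STAT_TOP]
    w, h = stats[face, cv2.CC_STAT_WIDTH], stats[face, cv2.CC_STAT_HEIGHT]
    ys, xs = np.nonzero(gray[y:y + h, x:x + w] < 64)
    if xs.size == 0:
        return 0.0  # no face features found
    feature_cx = x + xs.mean()

    # Step 3: the horizontal distance between the two centroids roughly
    # indicates the face direction (sign convention is an assumption).
    return float(face_cx - feature_cx)
```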
Preliminary Experiments

The system can compute face direction at 30 frames per second. We apply a filter to the direction data to separate wheelchair-control behaviors from others. The filter is a simple smoothing filter that averages the values over a certain number of frames. If this number is large, the system is not affected by quick, unintentional head movements; however, the user may feel uneasy about the system's slow response. We performed actual running experiments, varying the number of frames n used in the filter, to obtain basic data.

First, we examined the degree of uneasiness about the slow response. We used six subjects, male students at Osaka University who were not regular wheelchair users. We told them that the wheelchair would move in their face direction, and they drove the wheelchair without any problem; this confirms that face direction can be used for wheelchair operation. They were asked to give a subjective evaluation score for each smoothing-filter condition, from the viewpoint of the system's response to their head movements: 1 for not good, 2 for moderate, and 3 for good. Table 1 shows the result [Table 1: Experimental evaluation of response time by six subjects (A-F) across values of n]. When n is small, the wheelchair responds sensitively to any head movement, so the scores are somewhat low. When n is large, the wheelchair does not respond quickly even when the user turns his/her head intentionally, and the scores in these cases are considerably lower. The highest score was obtained when n was five.

Second, we examined whether the system would be affected by quick, unintentional head movements. We considered three levels of movement: quick movements with a duration of less than 0.5 s (Level 1), moderate-speed movements with a duration of 0.5-1 s (Level 2), and slow movements with a duration of 1-1.5 s (Level 3). At Level 3, users turn their heads and can read characters in the scene around them. We asked a subject to move his head five times at each level while the wheelchair was moving straight, and we then examined whether or not the wheelchair motion was affected. Table 2 shows the result [Table 2: Experimental evaluation for unintentional head movements, marking for each n and level whether the wheelchair motion was affected]. It indicates that n should be 15 or greater if we want the system to be unaffected by movements at Levels 1 and 2.

Use of Environmental Information

The experimental results for the two evaluation factors show a tradeoff. We chose a constant n = 10 in our previous wheelchair, satisfying both to some degree [6]. To improve both factors, we need a method of recognizing human intentions more reliably. We considered using the environmental information, rather than obtaining more information from observing the user, based on the assumption that our behaviors may be constrained by environmental conditions. We have modified the wheelchair so that it changes the value of n depending on the environmental information obtained by the ultrasonic sensors.

If there are objects close to the user, he/she may tend to look at them for safety and other reasons; however, the user usually does not want to turn in those directions. When users move in a narrow space, such as a corridor, they usually want to move along it, and it could be dangerous if the wheelchair turned unexpectedly owing to a failure of intention recognition. Thus, it is better to use a large n in such cases. On the other hand, users may prefer a quick response for moving freely in an open space with no obstacles around them; even if intention-recognition errors occur, they are safe in such cases, so it is appropriate to use a small n.

Based on the above considerations and the results of the preliminary experiments, we choose n using the relation shown in Figure 4 [Figure 4: Number of frames n versus distance (m) to objects measured by the ultrasonic sensors]. The value of n is determined for each turn direction using the sensor measurements in that direction; in the current implementation, we consider only the forward-right and forward-left directions. In addition, we use n = 8 whenever the face turns back from either right or left to the center.
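A sketch of this adaptive smoothing follows. The exact curve of Figure 4 is not reproduced here: the piecewise-linear ramp and its break points are assumptions, with only the general shape (a large n near obstacles, a small n in open space) and the fixed n = 8 for center-oriented movements taken from the text.

```python
from collections import deque

def choose_n(distance_m: float) -> int:
    """Map the obstacle distance in a turn direction to a smoothing-window
    length n. The end values (15 and 5) and break points are assumptions
    motivated by the preliminary experiments; Figure 4 gives the actual
    relation."""
    near, far = 0.5, 2.5  # assumed break-point distances (m)
    if distance_m <= near:
        return 15          # suppress unintentional turns near obstacles
    if distance_m >= far:
        return 5           # respond quickly in open space
    t = (distance_m - near) / (far - near)
    return round(15 + t * (5 - 15))

N_CENTER = 8  # fixed window when the face turns back toward the front

class DirectionFilter:
    """Moving average of the face-direction signal over a variable window."""
    def __init__(self, max_n: int = 15):
        self.samples = deque(maxlen=max_n)

    def update(self, value: float, n: int) -> float:
        self.samples.append(value)
        recent = list(self.samples)[-n:]
        return sum(recent) / len(recent)
```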

The choice of n = 8 for center-oriented movements is based on comments made by the subjects after the preliminary experiments, summarized as follows. When they turned the wheelchair to the left or right, they did not mind the slow response; however, when the turn was nearly completed and they turned their faces back to the frontal position, they felt uneasy if the response was slow. In the former case, the head-turning behaviors were intentional, so the subjects did not mind the slow response: their movements themselves were slow and steady. In the latter case, however, their behaviors were almost unconscious and quick, so the slow response caused uneasiness. This consideration led us to use a small n for center-oriented face movements.

Going Straight Using Vision

With conventional power wheelchairs, users need to hold the joystick to control the motion all the time, even when they just want to go straight. The system has an autonomous behavior to reduce the user's burden in such cases. This behavior is initiated by the face-direction observation: if the face looks straight ahead for a while, the system considers that the user wants to go straight and starts the behavior.

The system has two vision processes for this behavior. The first is simple template matching based on the sum of absolute differences (SAD). When the system knows that the user would like to go straight, it examines the center region of the image. If the intensity variance in this region is large, the system selects the region as a template. Then, in successive images, it calculates the best-matching position for the template and controls the wheelchair to keep the matching position at the image center. The template is updated as the wheelchair moves. If the method fails and the wheelchair moves in a wrong direction, the user will turn the face in the intended direction; this means the system can detect its own failure from the face-direction computation result. If this happens, it waits until the user is again looking forward steadily and then resumes the process with a new template.

The second process is based on a vanishing point and is used in places such as corridors, where a distinctive target to track may not be available. The system extracts both side edges of the corridor and calculates their intersection, controlling the wheelchair motion to keep the intersection (the vanishing point) at the same position in the images. Figure 5(a) shows an example of the tracking method, where the tracking region is indicated by a white square; Figure 5(b) shows an example of the vanishing-point method, where the extracted corridor edges are drawn in white [Figure 5: Vision processes for straight motion].
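A minimal sketch of the SAD template search follows, assuming grayscale images and a hypothetical search radius around the previous match; the article does not specify window sizes or the variance threshold for accepting a template.

```python
import numpy as np

def sad_match(frame: np.ndarray, template: np.ndarray,
              prev_xy: tuple, search: int = 16) -> tuple:
    """Locate `template` in `frame` by minimizing the sum of absolute
    differences (SAD) within a small window around the previous match."""
    th, tw = template.shape
    px, py = prev_xy
    t32 = template.astype(np.int32)
    best_sad, best_xy = None, prev_xy
    for y in range(max(0, py - search), min(frame.shape[0] - th, py + search) + 1):
        for x in range(max(0, px - search), min(frame.shape[1] - tw, px + search) + 1):
            sad = np.abs(frame[y:y + th, x:x + tw].astype(np.int32) - t32).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_xy = sad, (x, y)
    # Steering keeps this position at the image center; the template is
    # re-selected from the center region whenever its variance is large enough.
    return best_xy
```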

Total System Experiments

Function Check

First, we performed experiments to examine whether the proposed functions worked properly and whether the wheelchair could navigate effectively in a narrow, crowded space. The subjects were not wheelchair users. Figure 6 shows a map of the experimental environment, where the rectangles and ellipses represent desks and chairs, respectively, and the curve shows an example path of the wheelchair from the start past points A and B [Figure 6: Experimental environment map]. Figure 7 shows the distance from the wheelchair to the obstacles on its right side along the path from A to B in Figure 6, together with the number of frames n used in smoothing the face-direction data [Figure 7: Distance (m) to obstacles and the number of frames n along the path (m)]. Since several objects were located around A, the system set n large. When the wheelchair came close to B, the forward-right side became open, so the value of n became small. The upper part of Figure 8 shows the face direction during the travel, and the lower part shows the actual wheelchair motion [Figure 8: Face direction (L/R) and wheelchair motion along the path (m)]. Although the user moved his head three times to look around the scene before reaching B, the wheelchair moved straight; it then turned right when the user moved his head to the right to show his intention of turning. Figure 9 shows the wheelchair in the experimental run [Figure 9: Experimental run, panels (a)-(f)]. Although we have not evaluated the system quantitatively, the six users who tried the previous version of the wheelchair gave favorable comments on this new version.

Experiments by Wheelchair Users

Next, we took the wheelchair to a hospital that provided rehabilitation programs for people who had difficulty moving around. Five patients, three female and two male, joined the experiments. All were wheelchair users: three mainly used manual wheelchairs, and the other two daily used power wheelchairs, which happened to be of the same type we adopted in our system. Before riding, they were told, "If you nod, the wheelchair will start moving, and if you nod again, it will stop. It will move in the direction where you look." With this instruction alone, all of them were able to move the wheelchair without any problem.

After confirming that they could control the wheelchair, we performed experiments to check the effectiveness of integrating the user and environmental observations. We put boxes on the path, as shown in Figure 10 [Figure 10: Experimental scene at a hospital], and asked the subjects to avoid them. Even though they tended to look at the boxes, the wheelchair successfully avoided the obstacles. Then, we switched off the integration function and asked them again to avoid the boxes. Although they were still able to avoid the obstacles, the turning radii around the obstacles were larger than before. This can be explained as follows: without the integration, when the subjects looked at the obstacles, the wheelchair started turning toward them, so, to avoid collision, the subjects steered the wheelchair farther away.

One subject had a problem with her right elbow and had difficulty steering her own power wheelchair. She told us that she preferred our robotic wheelchair; our wheelchair proved to be useful for people like her. The others gave high evaluations because the wheelchair can move safely with simple operations. However, since the current wheelchair is heavy and cannot turn as quickly as the base machine, they were not sure whether the proposed interface was definitely better than a joystick. This aspect is the focus of our future work: to make the wheelchair move as quickly as the base machine by solving implementation issues and to perform usability tests comparing our wheelchair with ordinary power wheelchairs.

Remote Control Using Face and Gesture

In this section, we describe a new, optional function of our wheelchair. Since this is an ongoing project, we briefly describe what we have done so far; details can be found in [9].

On various occasions, people using wheelchairs have to get off them. They need to move their wheelchairs where they will not bother other people, and, when they leave, they want to make their wheelchairs come to them. It would be convenient if they could do these operations by hand gestures, since hand gestures can be used in noisy conditions. However, computer recognition of hand gestures is difficult in complex scenes [10]: in typical cases, many people are moving, with their hands moving as well, so it is difficult to distinguish a "come here" or any other command gesture by the user from other movements in the scene.

We propose to solve this problem by combining face recognition and gesture recognition. Our system first extracts face regions and detects the user's face. It then tracks the user's face and hands and recognizes hand gestures. Since the user's face has been located, a simple method can recognize hand gestures without being distracted by other movements in the scene.

The system extracts skin-color regions by color segmentation and extracts moving regions by subtraction between consecutive frames. Regions extracted by both processes are considered face candidates. The system zooms in on each face candidate one by one, checking whether it is really a face by examining the existence of face features. The selected face-region data is then fed into the face-recognition process. We use the face-recognition method proposed by Moghaddam and Pentland [11]: images of the user from various angles are compressed into an eigenspace in advance; an observed image is projected onto the eigenspace, and the distance-from-feature-space (DFFS) and the distance-in-feature-space (DIFS) are computed. Using both measures, the system identifies whether the current face is the user's. Figure 11 shows an example of face recognition [Figure 11: Face recognition to detect the user (Person A)].
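Both distances can be computed from a precomputed principal-component basis. The sketch below is in the spirit of Moghaddam and Pentland's method rather than their full probabilistic estimator; the decision rule in the closing comment is an assumption.

```python
import numpy as np

def dffs_difs(image: np.ndarray, mean: np.ndarray,
              basis: np.ndarray, eigvals: np.ndarray) -> tuple:
    """Project a flattened face image onto an eigenspace and return
    (DFFS, DIFS). `basis` holds the leading eigenvectors as columns,
    `eigvals` the corresponding eigenvalues."""
    d = image.astype(np.float64).ravel() - mean
    coeffs = basis.T @ d             # coordinates within the eigenspace
    residual = d - basis @ coeffs    # component outside the subspace
    dffs = float(np.linalg.norm(residual))                  # distance from feature space
    difs = float(np.sqrt(((coeffs ** 2) / eigvals).sum()))  # Mahalanobis distance in-space
    return dffs, difs

# A candidate is accepted as the user when both distances fall below
# thresholds tuned on the user's training images (an assumed decision rule).
```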
After the user's face is detected, it is tracked using the simple SAD method. Moving skin-color regions around and under the face are considered the hands, and they are also tracked; Figure 12 shows a tracking result [Figure 12: Tracking the face and the hands]. The positions of the hands relative to the face are computed, and the spotting-recognition method [12], based on continuous dynamic programming, carries out segmentation and recognition simultaneously using the position data.

Gesture recognition in complex environments cannot be perfect, so the system improves its capability through interaction with the user. If the matching score for a particular registered gesture exceeds a predetermined threshold, the wheelchair moves according to the command indicated by that gesture. Otherwise, the gesture with the highest matching score, even though it is below the threshold, is chosen, and the wheelchair moves a little according to this gesture command. If the user continues the same gesture after seeing this small motion, the system considers the recognition result correct and carries out the order; the gesture pattern is also registered so that the system can learn it as a new variation of the gesture command. If the user changes his/her gesture, the system begins to recognize the new gesture, iterating the above process.
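The interaction reduces to a small decision rule per recognition cycle. The following control-flow sketch assumes hypothetical gesture names, scores, and threshold; the tentative "move a little" motion itself is left outside the sketch.

```python
from typing import Optional, Tuple

def gesture_step(scores: dict, threshold: float,
                 tentative: Optional[str]) -> Tuple[Optional[str], Optional[str]]:
    """One step of the interactive recognition loop. `scores` maps registered
    gesture names to current matching scores.

    Returns (command_to_execute, new_tentative_gesture). A None command
    means: move a little and wait for the user's confirmation."""
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return best, None          # confident match: carry out the command
    if tentative == best:
        # The user kept making the same gesture after the small trial motion,
        # so the result is confirmed; the pattern can now be registered as a
        # new variation of this gesture command.
        return best, None
    return None, best              # tentative: move a little, then re-check
```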

Experiments in our laboratory environments, where several people were walking around, have confirmed that we can move the wheelchair with the same hand gestures we use between humans.

Conclusion

We have proposed a robotic wheelchair that observes both the user and the environment. It can understand the user's intentions from his/her behaviors and the environmental information. It also observes the user when he/she is off the wheelchair, recognizing the user's commands indicated by hand gestures. Experimental results show our approach to be promising. Although the current system uses face direction, for people who find it difficult to move their faces it can be modified to use movements of the mouth, eyes, or any other body parts that they can move. Since such movements are generally noisy, the integration of observations of the user and the environment will be effective in determining the user's real intentions and will be a useful technique for better human interfaces.

Acknowledgments

The authors would like to thank Tadashi Takase and Satoshi Kobatake at Kyowakai Hospital for their cooperation in the experiments at the hospital. They would also like to thank Yoshihisa Adachi, Satoru Nakanishi, Teruhisa Murashima, Yoshifumi Murakami, Takeshi Yoshida, and Toshimitsu Fueda, who participated in the wheelchair project and developed part of the hardware and software. This work was supported in part by the Ministry of Education, Culture, Sports, Science and Technology under the Grant-in-Aid for Scientific Research (KAKENHI).

References

[1] D.P. Miller and M.G. Slack, "Design and testing of a low-cost robotic wheelchair prototype," Autonomous Robots, vol. 2.
[2] T. Gomi and A. Griffith, "Developing intelligent wheelchairs for the handicapped," Lecture Notes in AI: Assistive Technology and Artificial Intelligence, vol. 1458.
[3] R.C. Simpson and S.P. Levine, "Adaptive shared control of a smart wheelchair operated by voice control," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1997, vol. 2.
[4] H.A. Yanco and J. Gips, "Preliminary investigation of a semi-autonomous robotic wheelchair directed through electrodes," in Proc. Rehabilitation Engineering Society of North America 1997 Annual Conf., 1997.
[5] N.I. Katevas, N.M. Sgouros, S.G. Tzafestas, G. Papakonstantinou, P. Beattie, J.M. Bishop, P. Tsanakas, and D. Koutsouris, "The autonomous mobile robot SENARIO: A sensor-aided intelligent navigation system for powered wheelchairs," IEEE Robot. Automat. Mag., vol. 4.
[6] Y. Adachi, Y. Kuno, N. Shimada, and Y. Shirai, "Intelligent wheelchair using visual information on human faces," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1998, vol. 1.
[7] S. Okazaki, Y. Fujita, and N. Yamashita, "A compact real-time vision system using integrated memory array processor architecture," IEEE Trans. Circuits Syst. Video Technol., vol. 5.
[8] I. Kweon, Y. Kuno, M. Watanabe, and K. Onoguchi, "Behavior-based mobile robot using active sensor fusion," in Proc. IEEE Int. Conf. Robotics and Automation, 1992.
[9] Y. Kuno, T. Murashima, N. Shimada, and Y. Shirai, "Understanding and learning of gestures through human-machine interaction," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, vol. 3.
[10] V.I. Pavlovic, R. Sharma, and T.S. Huang, "Visual interpretation of hand gestures for human-computer interaction: A review," IEEE Trans. Pattern Anal. Machine Intell., vol. 19.
[11] B. Moghaddam and A. Pentland, "Probabilistic visual learning for object representation," IEEE Trans. Pattern Anal. Machine Intell., vol. 19.
[12] T. Nishimura, T. Mukai, and R. Oka, "Spotting recognition of gestures performed by people from a single time-varying image," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 1997, vol. 2.

Yoshinori Kuno received the B.S., M.S., and Ph.D. degrees in electrical and electronics engineering from the University of Tokyo in 1977, 1979, and 1982, respectively. He then joined Toshiba Corporation and was later a visiting scientist at Carnegie Mellon University. In 1993, he joined Osaka University as an associate professor in the Department of Computer-Controlled Mechanical Systems. Since 2000, he has been a professor in the Department of Information and Computer Sciences at Saitama University. His research interests include computer vision, intelligent robots, and human-computer interaction.

Nobutaka Shimada received the B.Eng., M.Eng., and Ph.D. (Eng.) degrees in computer-controlled mechanical engineering in 1992, 1994, and 1997, respectively, all from Osaka University, Osaka, Japan. In 1997, he joined the Department of Computer-Controlled Mechanical Systems, Osaka University, Suita, Japan, where he is currently a research associate. His research interests include computer vision, vision-based human interfaces, and robotics.

Yoshiaki Shirai received the B.E. degree from Nagoya University in 1964 and the M.E. and Ph.D. degrees from the University of Tokyo in 1966 and 1969, respectively. He then joined the Electrotechnical Laboratory and was later a visiting researcher at the MIT AI Lab. Since 1988, he has been a professor in the Department of Computer-Controlled Mechanical Systems, Graduate School of Engineering, Osaka University. His research areas are computer vision, robotics, and artificial intelligence.

Address for correspondence: Yoshinori Kuno, Department of Information and Computer Sciences, Saitama University, 255 Shimo-okubo, Saitama, Saitama, Japan. E-mail: kuno@cv.ics.saitama-u.ac.jp, ykuno@ieee.org.


Multi-Robot Cooperative System For Object Detection Multi-Robot Cooperative System For Object Detection Duaa Abdel-Fattah Mehiar AL-Khawarizmi international collage Duaa.mehiar@kawarizmi.com Abstract- The present study proposes a multi-agent system based

More information

Visual Perception Based Behaviors for a Small Autonomous Mobile Robot

Visual Perception Based Behaviors for a Small Autonomous Mobile Robot Visual Perception Based Behaviors for a Small Autonomous Mobile Robot Scott Jantz and Keith L Doty Machine Intelligence Laboratory Mekatronix, Inc. Department of Electrical and Computer Engineering Gainesville,

More information

Image processing of the weld pool and tracking of the welding line in pulsed MAG welding *

Image processing of the weld pool and tracking of the welding line in pulsed MAG welding * [ 溶接学会論文集第 33 巻第 2 号 p. 156s-16s (215)] Image processing of the weld pool and tracking of the welding line in pulsed MAG welding * by Satoshi Yamane**, Katsuhito Shirota***, Sota Tsukano*** and Da Lu Wang***

More information

must be obtained from the IEEE..

must be obtained from the IEEE.. Title Toward Vision-Based Intelligent Nav Prototype Author(s) 三浦, 純 ; Itoh, Motokuni; 白井, 良明 Citation IEEE Transactions on Intelligent Tr P.136-P.146 Issue 2002-06 Date Text Version publisher URL http://hdl.handle.net/11094/3363

More information

Keywords: Multi-robot adversarial environments, real-time autonomous robots

Keywords: Multi-robot adversarial environments, real-time autonomous robots ROBOT SOCCER: A MULTI-ROBOT CHALLENGE EXTENDED ABSTRACT Manuela M. Veloso School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213, USA veloso@cs.cmu.edu Abstract Robot soccer opened

More information

Estimation of Folding Operations Using Silhouette Model

Estimation of Folding Operations Using Silhouette Model Estimation of Folding Operations Using Silhouette Model Yasuhiro Kinoshita Toyohide Watanabe Abstract In order to recognize the state of origami, there are only techniques which use special devices or

More information

CYCLIC GENETIC ALGORITHMS FOR EVOLVING MULTI-LOOP CONTROL PROGRAMS

CYCLIC GENETIC ALGORITHMS FOR EVOLVING MULTI-LOOP CONTROL PROGRAMS CYCLIC GENETIC ALGORITHMS FOR EVOLVING MULTI-LOOP CONTROL PROGRAMS GARY B. PARKER, CONNECTICUT COLLEGE, USA, parker@conncoll.edu IVO I. PARASHKEVOV, CONNECTICUT COLLEGE, USA, iipar@conncoll.edu H. JOSEPH

More information

Evaluation of Connected Vehicle Technology for Concept Proposal Using V2X Testbed

Evaluation of Connected Vehicle Technology for Concept Proposal Using V2X Testbed AUTOMOTIVE Evaluation of Connected Vehicle Technology for Concept Proposal Using V2X Testbed Yoshiaki HAYASHI*, Izumi MEMEZAWA, Takuji KANTOU, Shingo OHASHI, and Koichi TAKAYAMA ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

More information

International Journal of Informative & Futuristic Research ISSN (Online):

International Journal of Informative & Futuristic Research ISSN (Online): Reviewed Paper Volume 2 Issue 4 December 2014 International Journal of Informative & Futuristic Research ISSN (Online): 2347-1697 A Survey On Simultaneous Localization And Mapping Paper ID IJIFR/ V2/ E4/

More information

Public Robotic Experiments to Be Held at Haneda Airport Again This Year

Public Robotic Experiments to Be Held at Haneda Airport Again This Year December 12, 2017 Japan Airport Terminal Co., Ltd. Haneda Robotics Lab Public Robotic Experiments to Be Held at Haneda Airport Again This Year Haneda Robotics Lab Selects Seven Participants for 2nd Round

More information

Robot Navigation System with RFID and Ultrasonic Sensors A.Seshanka Venkatesh 1, K.Vamsi Krishna 2, N.K.R.Swamy 3, P.Simhachalam 4

Robot Navigation System with RFID and Ultrasonic Sensors A.Seshanka Venkatesh 1, K.Vamsi Krishna 2, N.K.R.Swamy 3, P.Simhachalam 4 Robot Navigation System with RFID and Ultrasonic Sensors A.Seshanka Venkatesh 1, K.Vamsi Krishna 2, N.K.R.Swamy 3, P.Simhachalam 4 B.Tech., Student, Dept. Of EEE, Pragati Engineering College,Surampalem,

More information

Invited Speaker Biographies

Invited Speaker Biographies Preface As Artificial Intelligence (AI) research becomes more intertwined with other research domains, the evaluation of systems designed for humanmachine interaction becomes more critical. The design

More information

Development of Gaze Detection Technology toward Driver's State Estimation

Development of Gaze Detection Technology toward Driver's State Estimation Development of Gaze Detection Technology toward Driver's State Estimation Naoyuki OKADA Akira SUGIE Itsuki HAMAUE Minoru FUJIOKA Susumu YAMAMOTO Abstract In recent years, the development of advanced safety

More information

Keywords Multi-Agent, Distributed, Cooperation, Fuzzy, Multi-Robot, Communication Protocol. Fig. 1. Architecture of the Robots.

Keywords Multi-Agent, Distributed, Cooperation, Fuzzy, Multi-Robot, Communication Protocol. Fig. 1. Architecture of the Robots. 1 José Manuel Molina, Vicente Matellán, Lorenzo Sommaruga Laboratorio de Agentes Inteligentes (LAI) Departamento de Informática Avd. Butarque 15, Leganés-Madrid, SPAIN Phone: +34 1 624 94 31 Fax +34 1

More information

REBO: A LIFE-LIKE UNIVERSAL REMOTE CONTROL

REBO: A LIFE-LIKE UNIVERSAL REMOTE CONTROL World Automation Congress 2010 TSI Press. REBO: A LIFE-LIKE UNIVERSAL REMOTE CONTROL SEIJI YAMADA *1 AND KAZUKI KOBAYASHI *2 *1 National Institute of Informatics / The Graduate University for Advanced

More information

The Control of Avatar Motion Using Hand Gesture

The Control of Avatar Motion Using Hand Gesture The Control of Avatar Motion Using Hand Gesture ChanSu Lee, SangWon Ghyme, ChanJong Park Human Computing Dept. VR Team Electronics and Telecommunications Research Institute 305-350, 161 Kajang-dong, Yusong-gu,

More information

Evaluation of a Tricycle-style Teleoperational Interface for Children: a Comparative Experiment with a Video Game Controller

Evaluation of a Tricycle-style Teleoperational Interface for Children: a Comparative Experiment with a Video Game Controller 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication. September 9-13, 2012. Paris, France. Evaluation of a Tricycle-style Teleoperational Interface for Children:

More information

Development of a Personal Service Robot with User-Friendly Interfaces

Development of a Personal Service Robot with User-Friendly Interfaces Development of a Personal Service Robot with User-Friendly Interfaces Jun Miura, oshiaki Shirai, Nobutaka Shimada, asushi Makihara, Masao Takizawa, and oshio ano Dept. of omputer-ontrolled Mechanical Systems,

More information

Graphing Techniques. Figure 1. c 2011 Advanced Instructional Systems, Inc. and the University of North Carolina 1

Graphing Techniques. Figure 1. c 2011 Advanced Instructional Systems, Inc. and the University of North Carolina 1 Graphing Techniques The construction of graphs is a very important technique in experimental physics. Graphs provide a compact and efficient way of displaying the functional relationship between two experimental

More information

Driver status monitoring based on Neuromorphic visual processing

Driver status monitoring based on Neuromorphic visual processing Driver status monitoring based on Neuromorphic visual processing Dongwook Kim, Karam Hwang, Seungyoung Ahn, and Ilsong Han Cho Chun Shik Graduated School for Green Transportation Korea Advanced Institute

More information

Vision System for a Robot Guide System

Vision System for a Robot Guide System Vision System for a Robot Guide System Yu Wua Wong 1, Liqiong Tang 2, Donald Bailey 1 1 Institute of Information Sciences and Technology, 2 Institute of Technology and Engineering Massey University, Palmerston

More information

YUMI IWASHITA

YUMI IWASHITA YUMI IWASHITA yumi@ieee.org http://robotics.ait.kyushu-u.ac.jp/~yumi/index-e.html RESEARCH INTERESTS Computer vision for robotics applications, such as motion capture system using multiple cameras and

More information

FU-Fighters. The Soccer Robots of Freie Universität Berlin. Why RoboCup? What is RoboCup?

FU-Fighters. The Soccer Robots of Freie Universität Berlin. Why RoboCup? What is RoboCup? The Soccer Robots of Freie Universität Berlin We have been building autonomous mobile robots since 1998. Our team, composed of students and researchers from the Mathematics and Computer Science Department,

More information

H2020 RIA COMANOID H2020-RIA

H2020 RIA COMANOID H2020-RIA Ref. Ares(2016)2533586-01/06/2016 H2020 RIA COMANOID H2020-RIA-645097 Deliverable D4.1: Demonstrator specification report M6 D4.1 H2020-RIA-645097 COMANOID M6 Project acronym: Project full title: COMANOID

More information