Socially Acceptable Robot Navigation in the Presence of Humans


Phelipe A. A. Vasconcelos, Henrique N. S. Pereira, Douglas G. Macharet, Erickson R. Nascimento
Computer Vision and Robotics Laboratory, Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, MG, Brazil
E-mails: {henriquenicolas, doug,

Abstract — Considering the widespread use of mobile robots in different parts of society, it is important to provide them with the capability to behave in a socially acceptable manner. Therefore, the study of Human-Robot Interaction (HRI) has recently become a research topic of great importance. In this work we propose a methodology to dynamically adapt a robot's behavior during navigation, considering possible encounters with humans in the environment. The method is divided into two basic steps. The first is based upon Computer Vision techniques and performs the recognition and analysis of the scene, considering characteristics such as the presence of humans, their quantity, and their distance to the robot. Using the information from this stage, the methodology then decides whether the navigation should undergo some modification. Among the possible adaptations are changes to the current trajectory and reductions in speed. Different trials in a real-world scenario were executed, providing a thorough evaluation and validation of the methodology.

I. INTRODUCTION

Robotics has gained notoriety in our society mainly due to the wide use of manipulators. From light industry, such as the manufacturing of electronic devices, to automotive and aerospace production in heavy industry, robots have replaced humans in most tasks on industrial assembly lines. Despite all the productivity benefits of using robots, they are still distant from daily human activities.
In the last few years, however, robots have been gaining increasing attention and becoming closer to our daily life, especially due to recent advances in Mobile Robotics, the field responsible for studying robots capable of moving around the environment in which they are inserted. It is noteworthy that the use of mobile robots in different segments of our society will be common in the near future. This change from controlled environments, such as robot manipulators used in factories, to environments with virtually no restrictions, where people are constantly present (e.g., homes, public places, hospitals, among others), will require robots to behave in a socially acceptable manner. Therefore, the study of Human-Robot Interaction (HRI) has recently become a research topic of great importance. This area studies the behavior of humans interacting with robots, allowing the development of techniques that make the use of these robots as transparent as possible. In this work, the human-robot interaction happens when the positions of people change the way the route is planned, so it can change dynamically; the robot-human interaction happens when people are detected by the camera, implying a different behavior of the robot. We present an approach which provides a robot with appropriate behavior when performing autonomous navigation in an environment with the presence of people. The method is divided into two basic steps. The first is based upon Computer Vision techniques and performs the recognition and analysis of the scene, considering characteristics such as the presence of humans, their quantity, and their distance to the robot. Using the information from this stage, the methodology decides whether the navigation should undergo some modification. Among the possible adaptations are changes to the current trajectory and reductions in speed. The remainder of the paper is organized as follows.
The next section discusses related works regarding different aspects of human-robot interaction. Section III presents our methodology, describing the mechanisms used for identifying the scene and adapting the robot's navigation. Experiments using a real robot are presented in Section IV. Finally, Section V discusses the results and indicates future directions of investigation.

II. RELATED WORK

The interaction between humans and robots can be divided into two basic situations, namely (i) the case where the robot must perform a task whilst reducing its social impact, and (ii) the case where the task involves interacting directly with a person. Either way, there are implicit nonverbal social rules that need to be respected, especially those related to the personal space of individuals, called proxemics [1]. Generally, motion planning methods for autonomous navigation treat all obstacles in the environment the same way, including people. However, this approach may not be the best solution, since it is important to consider people as special entities, for example taking into account a person's level of comfort with respect to the path of the robot. Works like [2], [3], [4] have shown that the same proxemic zones that exist in human-human interaction can also be applied to human-robot interaction scenarios. Thus, an increasing number of works have incorporated this notion of a personal space model into the path planning step in order to create acceptable behavior for robots during their navigation.

A path that explicitly takes into account the human presence in the environment must address situations such as not passing between two people talking, or avoiding leaving the field of view of the people, with the possibility of scaring them unnecessarily. Many works can be found in the literature with different approaches to this problem [5], [6], [7], [8]. A fundamental problem in socially acceptable motion techniques is the actual detection of persons in the environment. Tasks such as pedestrian detection [9], [10], people tracking [11], and human action and activity recognition [12], [13] need to be handled in order to infer the state of an environment where there are people walking and robots navigating and interacting with each other. Similarly to [10], [14], [15], the methodology presented in this work is based upon the popular Histogram of Oriented Gradients (HOG) [16] descriptor to extract human features, which are used to detect the presence of humans in the scene. After the scene has been analyzed, the extracted information, such as the number of persons and their distance, is used to dynamically adapt the robot's behavior during its navigation.

III. METHODOLOGY

The methodology is divided into two steps: (i) human detection, and (ii) robot navigation. In the first step, based upon the data acquired by an RGB-D sensor, we analyze the environment searching for humans. If any person is detected, information such as quantity and distance to the robot is calculated. Next, we use this information in a function which adapts the robot's behavior. The flowchart in Figure 1 shows all the steps of the algorithm.

Figure 1: Diagram of all steps of the proposed methodology.

A. Human Detection

In order to detect humans in the environment we use the data acquired by an RGB-D sensor. This sensor returns both texture (RGB) and depth (D) information regarding the scene being observed.
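Humans are detected in this texture data from histograms of gradient orientations computed over small image cells. As a minimal, self-contained illustration of that building block (a sketch only, not the authors' actual implementation), the histogram for a single cell of a grayscale patch can be computed as:

```python
import math

def cell_histogram(cell, nbins=9):
    """Histogram of gradient orientations for one cell of a grayscale
    patch: unsigned orientations in [0, 180) degrees, with each pixel
    casting a vote weighted by its gradient magnitude."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * nbins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned angle
            hist[int(ang / (180.0 / nbins)) % nbins] += mag
    return hist

# A vertical intensity edge yields only horizontal gradients,
# so every vote lands in the 0-degree bin:
cell = [[0, 0, 10, 10]] * 4
hist = cell_histogram(cell)
```

The full descriptor then normalizes such histograms over blocks of adjacent cells, as described next.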
The detection is performed by extracting shape features using the Histogram of Oriented Gradients (HOG) [16] technique. The HOG descriptor is a method based on counting the occurrence of gradient orientations in portions of an image. We have opted for this descriptor due to its advantages over other descriptors in the literature, such as invariance to geometric and photometric transformations. The basic steps of the HOG descriptor are shown in Figure 2. Initially, it divides the image into small cells and computes a histogram of gradient directions for the pixels in each cell. Each cell is discretized into angular bins according to the gradient orientation, and each pixel's weighted gradient contributes to its corresponding angular bin. Groups of adjacent cells are considered as spatial regions (blocks); the group of cells within a block is the basis for grouping and normalizing the histograms. A normalized group of histograms represents the block histogram, and the set of these block histograms represents the descriptor.

Figure 2: Overview of the HOG descriptor extraction.

In order to avoid abrupt changes in the robot's velocity when the number of detected persons suddenly changes, a filter was implemented to keep the number of people stable, using an ordered queue whose median excludes the false negatives that may happen. This filter also prevents sharp velocity variations caused by inconsistent detection over time, since the data is acquired while the robot/camera is moving. Moreover, the data acquisition rates of the human detection step and of the low-level robot controller may differ, so we needed a higher image-receiving rate than the controller (velocity) refresh rate. This rate used by the classifier, in addition to improving the reliability of the detection, also needs to be high enough to prevent data loss between frames.
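The detection-smoothing filter described above can be sketched as a fixed-length queue whose median is taken as the current person count; the window length below is an assumed parameter, not a value from the paper:

```python
from collections import deque
from statistics import median

class PersonCountFilter:
    """Smooths the per-frame person count by taking the median of the
    last `window` detections, suppressing isolated false negatives
    (frames where a person is briefly missed)."""

    def __init__(self, window=5):  # window size is an assumed parameter
        self.history = deque(maxlen=window)

    def update(self, raw_count):
        self.history.append(raw_count)
        # Median over the recent history: an isolated 0 among 1s is ignored.
        return int(median(self.history))

# A one-frame dropout does not change the filtered count:
f = PersonCountFilter(window=5)
counts = [f.update(c) for c in [1, 1, 1, 0, 1, 1]]
```

The same structure also explains why the reported count can lag briefly after a person leaves the frame: stale values remain in the queue until the window rolls over.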
B. Height Information

After the initial detection step, we calculate an enclosing box considering the position of each detected object. Next, we determine the position (height) of these objects in order to increase the robustness of the technique. The height information about each object is important because false positives may occur, i.e., the robot may classify regular objects as people, but these objects may be at implausible heights, far above or too close to the ground. The height H of an object in the captured image is given by

H = (R · D) / F,   (1)

where H is the desired (expected) height of the object, R is the height of the enclosing rectangle drawn upon detection (in pixels), D is the depth distance between the camera sensor and the object, and F is the focal length of the sensor. The enclosing rectangle skirts the person with a wide margin, which may compromise the result of the calculation. To get around this problem, the number of vertical pixels corresponding to the actual person's image inside the rectangle was measured, and a value of approximately 80% was obtained. This parameter was used as a fixed constant to improve the precision of the height information. The experiments were executed considering an MS Kinect, with full specifications available in [17].
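A minimal sketch of this height check, using the pinhole relation H = R·D/F; the box-margin correction `person_ratio` and the plausibility thresholds below are illustrative assumptions, not the paper's exact values:

```python
def estimate_height(box_height_px, depth_m, focal_length_px, person_ratio=0.8):
    """Estimate a detection's real-world height (meters) from Equation (1):
    H = R * D / F, where R is the bounding-box height in pixels, D the depth
    to the object in meters, and F the focal length in pixels.
    `person_ratio` discounts the margin of the enclosing box (assumed value)."""
    return (box_height_px * person_ratio) * depth_m / focal_length_px

def plausible_person_height(h_m, lo=1.0, hi=2.2):
    """Reject detections whose estimated height is implausible for a person
    (thresholds are illustrative assumptions)."""
    return lo <= h_m <= hi

# e.g. a 400 px tall detection at 2.5 m with F around 525 px
# (a commonly quoted value for the Kinect color camera):
h = estimate_height(400, 2.5, 525)  # about 1.52 m, a plausible person
```

Detections failing the plausibility test, such as a "person" found floating far above the floor, would be discarded as false positives.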

C. Distance Information

In addition to ensuring safe navigation, the robot needs to know how far an element (person) is from it, modifying the navigation plan in a way that prevents an imminent collision by tracing another route on the local map. Traditional navigation algorithms already do that, commonly by using a laser sensor and its readings to change the path; but in the presence of people, instead of just regular obstacles, the robot must behave differently, since people move and change their position dynamically. The distance information for every detected person was obtained considering the previously defined enclosing box, as shown in Figure 3(a). The distance is then calculated as the average depth over the pixels of the cropped image, as shown in Figure 3(b).

Figure 3: An example of the data acquired by the sensor. (a) RGB image with the detected people inside the enclosing box. (b) The detected persons in the cropped depth image.

Since the drawn rectangle does not border the person's edge exactly, there will always be an error in the distance calculation owing to the other surfaces (behind or in front of the person) captured inside the rectangle. To reduce this error, after analyzing the position of people within the rectangle, another region was defined to capture only the person's surface. In most samples, the detected shape of the person in the depth image follows a pattern positioned at the center of the image, and the person/box size ratio does not vary enough to prejudice the estimated position of this new depth region, which was drawn at the center of each detection. A rectangle was chosen as the new region to delimit the interesting part of the body because of its geometry, since the region from the chest to the stomach is what appears most in the depth image.
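The central-rectangle depth averaging described above might be sketched as follows; the inner-rectangle fraction is an assumed parameter, and invalid zero-depth readings are skipped:

```python
def person_distance(depth_patch, frac=0.5):
    """Average depth over a central sub-rectangle of a detection's depth
    patch, discarding the border that mostly contains background surfaces.
    `frac` is the side fraction of the inner rectangle (assumed value);
    zero-valued pixels (no depth reading) are ignored."""
    rows, cols = len(depth_patch), len(depth_patch[0])
    r0, r1 = int(rows * (1 - frac) / 2), int(rows * (1 + frac) / 2)
    c0, c1 = int(cols * (1 - frac) / 2), int(cols * (1 + frac) / 2)
    vals = [depth_patch[r][c]
            for r in range(r0, r1) for c in range(c0, c1)
            if depth_patch[r][c] > 0]
    return sum(vals) / len(vals) if vals else float('inf')

# Background at 5.0 m around a person at 2.0 m: only the
# central 2x2 region contributes to the estimate.
patch = [[5.0] * 4 for _ in range(4)]
patch[1][1] = patch[1][2] = patch[2][1] = patch[2][2] = 2.0
d = person_distance(patch)
```

Averaging only the inner region keeps the background behind the person's shoulders and head from inflating the estimated distance.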
A rectangle fitted that region better, achieving a reduction from % to % error in the depth distance calculation after empirical evaluation.

D. Robot Behavior

The robot follows a previously defined path on a two-dimensional map. If any person or obstacle which was not known in the initial map appears before it, it will try to go around without touching the obstacle and then continue following the path. The robot always tries to avoid collision; however, if one happens, the robot will stop moving (detection made by the bumper). For a socially acceptable navigation, there must be a difference in behavior between when the robot detects regular obstacles (objects) and when it detects humans in the environment. In this work, we consider that a basic variable for making people feel comfortable around a robot is its navigation velocity. Therefore, the velocity must be modified (adapted) if a person is detected. However, if the distance between the camera and the person is such that the robot does not recognize them as a pedestrian due to sensor limitations (too close or too far), they will continue to be treated as a regular obstacle, and the robot will deviate with a different velocity than if that object had been recognized as a person. The detection range was evaluated through an analysis of the minimum distance that keeps a person in the frame, determining a value of 1 meter between the camera and the individual. In the case of an obstacle in the path, or a person who for some reason was not properly detected, the robot will attempt to keep navigating at maximum speed, representing a difference in behavior compared to the case in which people are detected. If the previously defined path is entirely blocked by something or someone, the robot turns around and tries another way, changing the local map to another route to the goal point. To avoid collisions, we defined a different value for the radius of the robot.
The value is slightly higher than the real radius, with the objective of giving the robot the notion that it is larger. Using this method, the robot starts deviating from an obstacle earlier and makes a greater curvature, which prevents it from hitting an object when it is very fast and near.

E. Robot Velocity

The robot's velocity is adapted according to the dynamics of the people in the scene, depending on the maximum and minimum distances of the individuals present in the detected image and the total number of people. However, the robot's velocity must not rely only upon the number of people in the scene as a condition to reduce or increase its velocity. In a place with many individuals, it is important to have the maximum and minimum distance to the entire group. This information can be combined with the number of detected persons to guarantee, in addition to safe navigation, an optimized travel time. Considering these combined data, the robot will navigate faster as the minimum distance to people increases, and slow down as this distance decreases. At the same time, when the total number of individuals rises, the velocity may be reduced; if few or no people are detected, the velocity can be increased. Therefore, we propose a function which is used to modify the robot's velocity when necessary. The function considers the aforementioned characteristics: the number of detected people in the scene and the distance of the closest person, referred to here as the minimum distance. The first term of the function is defined by:

f(x) = x.x,   (2)

where x is an integer giving the number of people, which varies from 0 to the maximum number of people that the classifier can identify. The second term of the function is based upon a variable y, which refers to the distance of the person closest to the robot:

g(y) = 4 y.y.   (3)

The variable y is a rational number, which varies from 0 to the maximum distance that the RGB-D sensor can acquire, in meters. Finally, both terms are combined in a single function which determines the new maximum allowed velocity of the robot given the scene being observed:

f(x, y) = x.x (4 y.y) / .5.   (4)

The term regarding the number of persons is inversely proportional to the velocity, whilst the minimum distance term is directly proportional. The value .5 in the denominator is used as a normalization factor, assuming that the ratio between the number of persons and the minimum distance is satisfied. Figure 4 illustrates the values produced by the function according to the number of persons and the minimum distance. The red area in the left corner of the map shows that, with few detections, the speed increases quickly as the minimum distance rises. Observing the gradients vertically from left to right, it is possible to see a decrease in velocity as the number of detections rises, and a nearly linear behavior when the minimum distance and the number of detections rise proportionally. The negative values shown in the map represent very close detections, or multiple detections at a medium distance from the robot; these values are normalized to zero, which makes the robot stop until the minimum distance to the people rises or the number of detections decreases.

Figure 4: Illustration of the function behavior (velocity as a function of the number of persons and the minimum distance).

Algorithm 1 describes the operation of the entire methodology.
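As a concrete sketch, the following reproduces the qualitative shape of Equations (2)-(4): the limit grows with the minimum distance, shrinks with the number of people, is clamped to zero when negative, and is capped by the robot's own maximum. All numeric coefficients here are illustrative assumptions standing in for the paper's exact constants:

```python
def velocity_limit(n_people, d_min, v_robot_max=0.5):
    """Map scene information to a maximum allowed velocity.
    The coefficients and normalization factor are illustrative
    assumptions, not the constants of Equations (2)-(4)."""
    people_term = 0.3 * n_people        # grows with crowd size (illustrative)
    dist_term = 0.4 * d_min             # grows with clearance (illustrative)
    v = (dist_term - people_term) / 2.5 # normalization factor (assumed)
    if v < 0:                           # too close or too crowded: stop
        return 0.0
    return min(v, v_robot_max)          # never exceed the robot's own maximum

open_v = velocity_limit(0, 5.0)   # empty scene: capped at the robot maximum
crowd_v = velocity_limit(3, 1.0)  # close group: negative value clamped, stop
```

The clamp-at-zero branch corresponds to the negative region of Figure 4, and the final cap corresponds to keeping the commanded velocity within the platform's physical limit.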
Algorithm 1 SocialNavigation(I, D)
1: SetStartAndGoalPosition(p_s, p_g);
2: while ||p_s − p_g|| > ε do
3:   I_c ← HOG(I);
4:   (n, d_min) ← CalcPeopleDist(I_c, D);
5:   V ← VelocityFunction(n, d_min);
6:   if V < 0 then
7:     V_max ← 0;
8:   else
9:     V_max ← min(V_max^R, V);
10:  end if
11:  Navigate(V_max);
12: end while

Initially, the start (p_s) and goal (p_g) positions of the navigation are set to initiate the procedure. Each position p_i consists of a point p_i = ⟨x_i, y_i⟩ in the SE(2) domain. These values are passed to whichever technique is used to execute the autonomous drive. Next, while the goal position has not been reached (considering an error limit ε), the RGB-D sensor provides the texture data (I) to the HOG, which transforms it into a cropped image vector (I_c) containing the data of each individual. Then, the CalcPeopleDist function calculates the number of persons and the distance of each one, determining the minimum distance (d_min) based on the depth data (D). Moreover, the information contained in I_c about the detections obtained at the same time is sent to a buffer where the median is calculated, representing the number of persons (n). Finally, the previously calculated information is passed to the VelocityFunction (according to Equation 4), which returns a velocity (V) that the robot may perform during navigation. If the value is less than zero, the final value is replaced by zero (the robot must stop). Otherwise, an assessment is carried out to cap the velocity at the maximum velocity of the robot being used (V_max^R). Finally, the selected maximum allowed velocity (V_max) is sent to the algorithm responsible for continuing to navigate the robot.

IV. EXPERIMENTS

In this section we present the experiments performed using the proposed method in a real-world scenario. The navigation was based upon known techniques available in the ROS framework [18].
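The outer loop of Algorithm 1 used in these experiments can be sketched as follows; `sense`, `velocity_function`, `navigate` and `dist_to_goal` are stand-in callbacks for the real perception (HOG plus depth) and ROS navigation interfaces:

```python
def social_navigation(sense, velocity_function, navigate, dist_to_goal,
                      v_robot_max=0.5, eps=0.1):
    """Sketch of Algorithm 1's control loop; all callbacks are assumed
    stand-ins, not the actual perception or navigation stack.
    `sense` returns (n, d_min): person count and minimum distance."""
    while dist_to_goal() > eps:                        # lines 2, 12
        n, d_min = sense()                             # lines 3-4
        v = velocity_function(n, d_min)                # line 5
        v_max = 0.0 if v < 0 else min(v, v_robot_max)  # lines 6-10
        navigate(v_max)                                # line 11

# A toy run: the goal is reached after two commands, each capped at 0.5 m/s.
commands, remaining = [], [0.3, 0.2, 0.05]
social_navigation(sense=lambda: (0, 5.0),
                  velocity_function=lambda n, d: 0.4 * d - 0.3 * n,
                  navigate=lambda v: (commands.append(v), remaining.pop(0)),
                  dist_to_goal=lambda: remaining[0])
```

Separating sensing, the velocity function, and actuation behind callbacks mirrors how the method plugs into an existing navigation stack without replacing its local planner.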
The robot navigation is performed by considering a previously defined two-dimensional map of the environment, and the localization uses the AMCL algorithm. The robot localization accuracy is directly related to the quality of the odometry and gyro calibration. For this purpose, the sensors were calibrated according to the guide available at [18]. Figure 5 shows the robotic platform used for the experimental evaluation. In the first experiment, the robot executed an autonomous navigation through a predefined path without the presence of any person. This is the baseline for the next experiment, allowing us to observe the basic behavior of the robot. During the initial experiment it was possible to confirm that the

velocity remained constant at the maximum allowed (0.5 m/s) throughout the entire route, since no obstacle or person was detected in its way.

Figure 5: Robotic platform.

The second experiment consisted of evaluating the behavior of the robot in the presence of humans. The robot encounters two different groups during its navigation: the first group is composed of a single person, and the second is formed by two persons. Figure 6 depicts a map of the environment where it is possible to see the start and goal positions, represented by the green and red squares, respectively. It is also possible to observe an illustration of the maximum velocity allowed throughout the navigation.

Figure 6: Experimental environment with the start (green square) and goal (red square) positions depicted. The green region represents the area where the maximum velocity is allowed, yellow a medium allowed velocity, and red a low velocity.

Figure 7: Number of persons detected and velocity over time.

Figure 8 shows the number of persons detected over time, as well as the minimum distance at each instant.

Figure 8: Number of persons detected and minimum distance over time.

One can clearly see some peaks on the minimum distance and velocity axes in both graphs when the detection occurs for one and two persons. The reason is that the filter (buffer) that keeps the number of detected people stable was not used for the depth information, which explains these instantaneous variations in the robot's velocity. This choice was made because of unpredictable events that may happen along the robot's route, such as a person suddenly appearing in front of it. It is also possible to note that at some moments the red line representing the number of people continues even when there is no detection at that time, due to the values still present in the filter.
The results show that the approach works as expected. Upon the detection of a person, the closer the robot gets to this person, the more the velocity decreases. However, when the robot reaches a certain limit, the image of the person is no longer recognized, and the person is then treated like any other obstacle. Figure 7 shows the number of persons detected over time, as well as the variation of the velocity at each instant. The figures show two distinct moments of people detection. In the first, one person is detected for about 5 seconds. The speed varies only when the function receives updated minimum distance and number-of-people parameters; with our approach for the number-of-people filter, this value remains constant for a period of time longer than the minimum distance information. The robot starts by detecting only one person at a large distance, a fact that does not change the speed and is displayed in the first green area of Figure 7. Starting at six seconds, the minimum distance changes, resulting in a modification of the velocity value; both keep decreasing while the robot approaches the individual, which occurs in

the first yellow area. At the time of seconds, the robot deviates from the person and resumes maximum speed, which represents the second green area. The speed varies in direct proportion to the minimum distance, as can be seen by comparing the speed function in Figure 7 and the minimum distance function in Figure 8, which exhibit similar behavior. The second detection displays data when two people are detected, represented by the red area in Figure 6. The two persons are near each other, but one is slightly farther from the robot. It is possible to observe the speed falling at the moment one of the individuals is detected, at the instant of approximately seconds. Immediately afterwards, the second individual is detected and the speed declines sharply, not only because the number of people changed, but also because the minimum distance was altered: the newly detected person was closer to the robot. The minimum distance keeps varying and decreasing because the robot is initiating the movement to avoid the obstacles while approaching them at the same time. At 4 seconds, the robot deviates from one of the two individuals and detects the more distant person. This phase is characterized by the second yellow area. The filter maintains the information of two people due to the high detection rate of both individuals. The speed decreases until the moment the robot passes through the two obstacles. From this moment on, there are no persons in front of the robot and it navigates at maximum speed, a situation represented by the last green area. In the figure it can be seen that the information of one detected person remains for a period of time, a fact explained by the presence of the filter, showing that momentarily only the last individual was being detected while the robot was still passing the obstacles.
V. CONCLUSIONS AND FUTURE WORK

In this work we have presented a methodology to dynamically adapt a robot's behavior during its navigation. We proposed a function which limits the maximum allowed velocity of the robot considering social constraints, such as the number of persons in the environment and their distance to the robot. The methodology was evaluated in a real-world scenario, and the experiments showed the effectiveness and flexibility of our approach. The analysis considered different numbers of persons at different distances, making it possible to clearly observe the adaptation of the behavior throughout the navigation. However, as expected, it was also possible to observe that different lighting conditions have a great impact on the human detection step, resulting in false positive cases. Future directions include extending the proposed methodology to consider other constraints, such as the position of the humans in the environment, as well as different types of behavior when humans are classified as a group (as opposed to single individuals). We also intend to improve and aggregate the social constraints in the path planning phase.

ACKNOWLEDGMENTS

This work was developed with the support of the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) and Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG). The authors also thank Elerson R. S. Santos for the valuable help regarding the use of ROS.

REFERENCES

[1] E. T. Hall, The Hidden Dimension: Man's Use of Space in Public and Private. The Bodley Head Ltd, 1966.
[2] J. Mumm and B. Mutlu, "Human-robot proxemics: physical and psychological distancing in human-robot interaction," in Proceedings of the 6th International Conference on Human-Robot Interaction (HRI). New York, NY, USA: ACM, 2011.
[3] M. Walters, M. Oskoei, D. Syrdal, and K.
Dautenhahn, "A long-term Human-Robot Proxemic study," in IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Aug. 2011.
[4] R. Mead and M. J. Matarić, "A probabilistic framework for autonomous proxemic control in situated and mobile human-robot interaction," in Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI). New York, NY, USA: ACM, 2012.
[5] E. Sisbot, L. Marin-Urias, R. Alami, and T. Simeon, "A Human Aware Mobile Robot Motion Planner," IEEE Transactions on Robotics, vol. 23, no. 5, Oct. 2007.
[6] M. Svenstrup, S. Tranberg, H. Andersen, and T. Bak, "Pose estimation and adaptive robot behaviour for human-robot interaction," in IEEE International Conference on Robotics and Automation (ICRA), May 2009.
[7] J. Kessler, C. Schroeter, and H.-M. Gross, "Approaching a person in a socially acceptable manner using a fast marching planner," in Proceedings of the 4th International Conference on Intelligent Robotics and Applications (ICIRA), Part II. Berlin, Heidelberg: Springer-Verlag, 2011.
[8] T. Kruse, P. Basili, S. Glasauer, and A. Kirsch, "Legible robot navigation in the proximity of moving humans," in Proceedings of the IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), May 2012.
[9] P. Dollár, C. Wojek, B. Schiele, and P. Perona, "Pedestrian detection: An evaluation of the state of the art," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 34, 2012.
[10] A. Mahmoud, A. El-Barkouky, J. Graham, and A. Farag, "Pedestrian detection using mixed partial derivative based histogram of oriented gradients," in IEEE International Conference on Image Processing (ICIP), Oct. 2014.
[11] M. Andriluka, S. Roth, and B. Schiele, "People-tracking-by-detection and people-detection-by-tracking," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2008.
[12] N. Hu, G. Englebienne, Z. Lou, and B.
Kröse, "Learning latent structure for activity recognition," in IEEE International Conference on Robotics and Automation (ICRA), May 2014.
[13] Y. Song, L. Morency, and R. Davis, "Multi-view latent variable discriminative models for action recognition," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2012.
[14] E. Corvee and F. Bremond, "Body parts detection for people tracking using trees of histogram of oriented gradient descriptors," in 7th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Aug. 2010.
[15] L. Spinello and K. Arras, "People detection in RGB-D data," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sept. 2011.
[16] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, June 2005.
[17] Kinect calibration, calibration/technical, accessed:
[18] ROS Wiki, calibration/tutorials, accessed:









Multi-Modal Robot Skins: Proximity Servoing and its Applications Multi-Modal Robot Skins: Proximity Servoing and its Applications Workshop See and Touch: 1st Workshop on multimodal sensor-based robot control for HRI and soft manipulation at IROS 2015 Stefan Escaida

More information

A Reconfigurable Guidance System

A Reconfigurable Guidance System Lecture tes for the Class: Unmanned Aircraft Design, Modeling and Control A Reconfigurable Guidance System Application to Unmanned Aerial Vehicles (UAVs) y b right aileron: a2 right elevator: e 2 rudder:

More information

High Speed vslam Using System-on-Chip Based Vision. Jörgen Lidholm Mälardalen University Västerås, Sweden

High Speed vslam Using System-on-Chip Based Vision. Jörgen Lidholm Mälardalen University Västerås, Sweden High Speed vslam Using System-on-Chip Based Vision Jörgen Lidholm Mälardalen University Västerås, Sweden jorgen.lidholm@mdh.se February 28, 2007 1 The ChipVision Project Within the ChipVision project we

More information

DiVA Digitala Vetenskapliga Arkivet

DiVA Digitala Vetenskapliga Arkivet DiVA Digitala Vetenskapliga Arkivet http://umu.diva-portal.org This is a paper presented at First International Conference on Robotics and associated Hightechnologies and Equipment for agriculture, RHEA-2012,

More information