Knowledge-Sharing Techniques for Egocentric Navigation

Turker Keskinpala, D. Mitchell Wilkes, Kazuhiko Kawamura
Center for Intelligent Systems, Vanderbilt University, Nashville, TN
turker.keskinpala@vanderbilt.edu, {wilkes, kawamura}@vanderbilt.edu

A. Bugra Koku
Mechanical Engineering Dept., Middle East Technical University, Ankara, Turkey
bugra@bugra.net

Abstract

Teams of humans and robots working together can provide effective solutions to problems. In such applications, effective human-robot teaming relies on being able to communicate information about the current perception or understanding of the environment. In this paper, human-robot teaming on navigational tasks is discussed. The role of the human user will be to specify the goal point(s) for the robot, and also to interact with the robot in the event of perceptual errors. A novel navigation method, Egocentric Navigation (ENav), has been developed based on egocentric representations. In this paper, two knowledge-sharing methods are described which exploit the characteristics of ENav. In the first method, robots share Landmark EgoSpheres; in the second method, the robot shares its Sensory EgoSphere with the human user for visual perception correction.

Keywords: Human-robot teaming, Egocentric Navigation, Sensory EgoSphere, Landmark EgoSphere.

1 Introduction

Recently, the concept of humans and robots working together in cooperative teams has been receiving greater attention. In such an application, communicating information about the current perception or understanding of the environment is very important for effective human-robot teaming. Since navigation is fundamental to mobile robotics, in this paper we consider human-robot teaming on navigational tasks. The role of the human user will be to specify the goal point, or points, for the robot, and also to interact with the robot in the event of perceptual errors.

At the heart of this problem are the representations of the state of the world that are used by the human user and the robots in the team. Representation plays a very important role in perceptual knowledge sharing in a robot-robot and a robot-human team. At the Intelligent Robotics Laboratory at Vanderbilt University, we have been using egocentric representations for the perceptual knowledge of the robot. One egocentric representation is called the Sensory EgoSphere (SES). The second representation is called the Landmark EgoSphere (LES). The LES contains information about the angular distribution of the landmarks that the robot expects to see at a goal position, and is similar to the SES in structure [5]. For navigational tasks we often use a simplified EgoSphere structure, in which the original 3-D shape of the EgoSphere is projected down to a 2-D structure.

In this work, the SES is used to represent the robot's perception of its environment at its current state, and the LES is used to represent the robot's expected perception of its environment at a target position that the robot is trying to reach. The SES is a result of the robot's perception. The LES may be obtained in several ways. For example, it may be the result of the robot's perception on a previous navigational mission, the result of another robot's perception at the target point, or it may be derived from a rough map of the area. In this paper, in association with our Egocentric Navigation algorithm (ENav), we describe two methods that demonstrate the communication of perceptual knowledge, represented by the SES and the LES, among humans and robots.
The first method will be a knowledge-sharing method for a team of two heterogeneous robots, with emphasis on sharing LES information between the robots. The second method will be a visual perception correction method in which the robot shares its understanding of the environment, via its current SES, with the human so that the human may correct possible misperceptions by the robot. Our approach takes inspiration from the qualitative navigation methods of Levitt and Lawton [7], Dai and Lawton [3], and Sutherland and Thompson [11]. Additional inspiration comes from the work of Pinette [8] and Cartwright and Collett [2].

This paper is organized as follows: In the next section, the importance of the choice of representation will be discussed, along with some reasons why egocentric representations were chosen over Cartesian representations. In the third section, the egosphere concept will be described. The fourth section will briefly introduce the Egocentric Navigation algorithm. The knowledge-sharing methods in robot-robot and human-robot teams will be described in the fifth section. Finally, future work and conclusions will follow.

2 Representation

As humans, when we try to remember a place, we typically remember the geometric relationships between the objects in the environment more naturally than the metric relationships between them. Recent studies suggest that humans prefer angular information over distance information in the context of learning places and performing self-localization [4]. Such angular information is explicitly shown in egocentric representations. Research by Scholl [10] revealed that the cognitive spatial representations employed by human beings produce environment-centered representations of the spatial relations between objects. The environment-centered representations are derived from the sequence of local egocentric views that the human being experiences while moving through the environment. Cartesian representations require that the more natural egocentric information be transformed into Cartesian coordinates, thus making the representation less natural.

When knowledge sharing in robot-robot and human-robot teams is considered, having a natural representation that can be understood both by the robots and by the humans facilitates effective communication. Sharing a robot's raw sensory data in a heterogeneous team would not make sense to the other robots. Similarly, the human would not understand the raw sensory data of the robot, nor would the robot understand the sensory data sent to the human brain. For this reason, it is important to use a representation that makes sense to any member of the team. We chose egocentric representations over Cartesian representations, since a robot can easily form an egocentric representation of its surroundings using its sensors, and humans can naturally create and understand egocentric representations. In addition, egocentric representations are directly exploited in the Egocentric Navigation algorithm (ENav).

3 Egospheres

There are two egocentric representations that are used in this work. One egocentric representation is called the Sensory EgoSphere (SES). Albus proposed the egosphere in 1991. He envisioned it as a dense map of the visual world, a virtual spherical shell surrounding the robot onto which a snapshot of the world was projected [1]. Our definition and use of the egosphere differs from that of Albus. We call it the Sensory EgoSphere and define it as a database: a spherical data structure, centered on the coordinate frame of the robot, that is spatially indexed by azimuth and elevation [9]. Its implicit topological structure is that of a geodesic dome, each node of which is a pointer to a distinct data structure. The SES is a sparse map of the world that contains pointers to descriptors of objects or events that have been detected recently by the robot.

As the robot operates within its environment, events, both external and internal, stimulate the robot's sensors. Upon receiving a stimulus, the associated sensory processing module writes its output data (including the time of detection) to the SES at the node that is closest to the direction from which the stimulus arrived. Since the robot's sensory processing modules are independent and concurrent, multiple sensors stimulated by the same event will register the event to the SES at about the same time.
If the event is directional, the different modules will write their data at the same location on the SES. Hence, sensory data of different modalities coming from similar directions at similar times will register close to each other on the SES.

Given that the sensors on a robot are discrete with regard to angular positioning, there is nothing to be gained by defining the SES to be a continuous structure. Moreover, the computational complexity of using the SES increases with its size, which is, in turn, dependent on its density (the number of points on its surface). We use a (virtual) geodesic dome structure for the SES since it provides a uniform tessellation of vertices such that each vertex is equidistant (along geodesics) to six neighbors. The tessellation frequency is determined by the highest angular resolution of the SONAR array.

The SES is a multiply-linked list of pointers to data structures. There is one pointer for each vertex on the dome. Each pointer record has seven links: one to each of its six nearest neighbors and one to a tagged-format data structure. The latter comprises a terminated list of alphanumeric tags, each followed by a time stamp and another pointer. A tag indicates that a specific type of sensory data is stored at the vertex. The corresponding time stamp indicates when the data was stored. The pointer associated with the tag points to the location of a data object that contains the sensory data and any function specifications (such as links to other agents) associated with it. The type and number of tags on any vertex of the dome is completely variable.

Often in practice, the SES is not a complete geodesic dome; instead, it is restricted to only those vertices that fall within the directional sensory field of the robot. Imagery or image features can be stored at the vertex closest to the direction of the object identified in the image, as shown in Figure 1. SONAR and LIDAR work only in the equatorial plane of our robot, and so their data is restricted to the vertices near the dome's equator [9]. For navigational tasks we have used a simplified egosphere structure, in which the original 3-D shape of the egosphere is projected down to a simpler 2-D structure, i.e., a circle. In this paper, the SES is used to represent the robot's perception of its environment at its current state.
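To make the tagged-vertex structure concrete, the following Python sketch shows one possible implementation of the 2-D (ring-projected) SES used for navigation. It is illustrative only: the names (`SensoryEgoSphere`, `post`, `nearest_vertex`) are hypothetical, and the full geodesic tessellation is replaced by a simple equatorial ring of vertices.

```python
import time

class SESVertex:
    """One vertex of the 2-D (ring-projected) Sensory EgoSphere.

    Each vertex keeps links to its neighbors and a tagged data store
    mapping tag -> (time stamp, data object), mirroring the
    tagged-format records described above."""
    def __init__(self, azimuth_deg):
        self.azimuth = azimuth_deg   # direction this vertex faces
        self.neighbors = []          # links to adjacent vertices
        self.tags = {}               # e.g. "vision_landmark" -> (t, data)

class SensoryEgoSphere:
    """Sparse, spatially indexed short-term memory around the robot."""
    def __init__(self, n_vertices=24):
        step = 360.0 / n_vertices
        self.vertices = [SESVertex(i * step) for i in range(n_vertices)]
        for i, v in enumerate(self.vertices):       # ring adjacency
            v.neighbors = [self.vertices[i - 1],
                           self.vertices[(i + 1) % n_vertices]]

    def nearest_vertex(self, azimuth_deg):
        """Vertex whose direction is closest to a stimulus direction."""
        def diff(a, b):
            return abs((a - b + 180.0) % 360.0 - 180.0)
        return min(self.vertices,
                   key=lambda v: diff(v.azimuth, azimuth_deg))

    def post(self, azimuth_deg, tag, data):
        """A sensory processing module writes its output, with the time
        of detection, at the vertex nearest the stimulus direction."""
        self.nearest_vertex(azimuth_deg).tags[tag] = (time.time(), data)

# Independent modules stimulated by the same directional event register
# close together on the SES:
ses = SensoryEgoSphere()
ses.post(42.0, "vision_landmark", "green_pink")
ses.post(40.0, "sonar_return", 1.7)
```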

Figure 1. Geodesic dome around the robot and mapping of objects to the egosphere [5]

The second representation that is used in this work is called the Landmark EgoSphere (LES). The LES is a robocentric representation of environmental features expected at a target position [5]. In other words, the LES contains information about the angular distribution of the landmarks that the robot expects to see at a goal position, and is similar to the SES in structure [5]. The LES can be extracted from a map of the area, or it can be derived from some other description of the region. Furthermore, a previously acquired SES stored in the memory of any robot in a team of robots can be used as an LES.

In order for the robot to form egocentric representations, we needed 360-degree vision from the robot. For this purpose, we designed a vision system consisting of seven cameras mounted in a ring formation and an 8-by-1 video multiplexer. The multiplexer was used to switch between the seven cameras, and we processed the images from each camera one by one to form the egocentric representation of the robot's environment. Figure 2 shows the vision system mounted on the Pioneer 2 AT robot used in our experiments.

Figure 2. Vision system on the Pioneer 2 AT robot

4 Egocentric Navigation

Egocentric navigation (ENav) is a basic navigational behavior designed to operate based only on egocentric representations, namely, the SES and the LES [6]. The ENav algorithm moves the robot to a target location based on the current perception of the robot without explicitly requiring any range information. In the absence of range information, the robot uses the SES to represent its current perception by using the angular separation between perceived objects. Figure 3 shows sample egocentric representations.

Figure 3. Sample egocentric representations. The current representation in sensory data is labeled as an SES; the target representation is labeled as an LES

Navigation depends heavily on perception, since the goal is to move the robot to a location where the perception of the robot closely matches a target representation defined by the LES [6]. The navigation algorithm takes an SES and an LES as input and compares the SES created by the perception process with the LES. This comparison results in an error term that represents the difference between the current SES and the target LES. If the error is above a threshold, a heading is computed using the SES and the LES. As the robot iteratively moves along this heading, it is taken toward the target point, at which the error drops below the threshold [6].

Error and heading computations are based on pair-wise analysis of landmarks. First, landmarks not common to both the SES and the LES are removed. Next, from the common landmarks in both representations, landmark pairs are formed, and these pairs are compared between the LES and the SES. In this comparison, the smaller-magnitude angle between two landmarks is used to define the separation between those landmarks. For each pair, a unit vector along the bisector of the two landmark directions is created, depending on this comparison; the unit vector can point either toward or away from the landmarks. Finally, the heading vector for the situation defined by the SES and LES pair is created by adding all the resulting unit vectors.
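The compare-and-move scheme just described can be summarized in a short, self-contained sketch. Everything here is hypothetical scaffolding: the landmark positions and error threshold are assumed values, and the heading is chosen by probing eight directions, a crude stand-in for the pair-wise rule given below, so the loop can stall in a local minimum.

```python
import math

# Toy world used only to synthesize bearings; positions are assumptions.
LANDMARKS = {"green": (0.0, 5.0), "pink": (5.0, 0.0),
             "blue": (-4.0, -3.0), "yellow": (6.0, 6.0)}

def bearings(pos):
    """Egocentric view: bearing (rad) to each visible landmark."""
    return {n: math.atan2(y - pos[1], x - pos[0])
            for n, (x, y) in LANDMARKS.items()}

def separation(a, b):
    """Smaller-magnitude angle between two directions."""
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

def ses_les_error(ses, les):
    """Error term: mismatch between current and target pair-wise
    angular separations, over landmarks common to both."""
    names = sorted(set(ses) & set(les))
    return sum(abs(separation(ses[a], ses[b]) -
                   separation(les[a], les[b]))
               for i, a in enumerate(names) for b in names[i + 1:])

# Iterative scheme: while the error is above threshold, pick a heading
# and step along it; the iteration guard bounds the loop.
les = bearings((4.0, 4.0))                  # LES: expected view at target
pos, step, threshold = (-5.0, -5.0), 0.25, 0.05
for _ in range(400):
    if ses_les_error(bearings(pos), les) < threshold:
        break                               # perception matches the LES
    k = min(range(8), key=lambda k: ses_les_error(bearings(
        (pos[0] + step * math.cos(k * math.pi / 4),
         pos[1] + step * math.sin(k * math.pi / 4))), les))
    pos = (pos[0] + step * math.cos(k * math.pi / 4),
           pos[1] + step * math.sin(k * math.pi / 4))
print("stopped near", (round(pos[0], 2), round(pos[1], 2)))
```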
The ENav algorithm can also be described by vector algebra. First, a unit vector is created pointing to every landmark on the SES and the LES. The landmarks on the SES are represented with unit vectors $u^c_i$ (the superscript $c$ indicates the current perception), and the landmarks on the LES are represented with unit vectors $u^t_i$ (the superscript $t$ indicates the target location), as shown in Figure 4. Variables $i$ and $j$ index over individual landmarks in distinct pairs formed from the SES and the LES.

The pair-wise vector analysis is carried out as follows [5]:

$d^c_{ij} = u^c_i \cdot u^c_j, \quad d^t_{ij} = u^t_i \cdot u^t_j$

$C_{ij} = u^c_i \times u^c_j, \quad T_{ij} = u^t_i \times u^t_j, \quad \text{where } i \neq j$

$A_{ij} = \mathrm{sgn}(d^c_{ij} - d^t_{ij})$   (1)

$B_{ij} = [\mathrm{sgn}(C_{ij} \cdot T_{ij}) + 1] / 2$   (2)

$u_{ij} = (1 + B_{ij}(A_{ij} - 1)) \, (u^c_i + u^c_j) / \|u^c_i + u^c_j\|$   (3)

$h = \sum_{i,j} u_{ij}$   (4)

Figure 4. Computation of heading based on the SES and the LES [5]
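A direct transcription of Eqs. (1)-(4) for the 2-D projected egosphere might look as follows. This is a sketch under stated assumptions: landmarks arrive as name-to-azimuth dictionaries, the 3-D cross products collapse to scalar z-components in the plane, and the function name `enav_heading` is hypothetical.

```python
import math

def unit(azimuth):
    """2-D unit vector toward a landmark at the given azimuth (rad)."""
    return (math.cos(azimuth), math.sin(azimuth))

def enav_heading(ses, les):
    """Heading h per Eqs. (1)-(4) on the 2-D (ring) projection.

    ses, les: dicts mapping landmark name -> azimuth in radians.
    In 2-D the cross product reduces to its scalar z-component, so the
    C_ij . T_ij test of Eq. (2) becomes a product of two scalars."""
    names = sorted(set(ses) & set(les))   # keep only common landmarks
    hx = hy = 0.0
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            uca, ucb = unit(ses[a]), unit(ses[b])   # u^c_i, u^c_j
            uta, utb = unit(les[a]), unit(les[b])   # u^t_i, u^t_j
            dc = uca[0] * ucb[0] + uca[1] * ucb[1]  # d^c_ij
            dt = uta[0] * utb[0] + uta[1] * utb[1]  # d^t_ij
            c = uca[0] * ucb[1] - uca[1] * ucb[0]   # C_ij (scalar)
            t = uta[0] * utb[1] - uta[1] * utb[0]   # T_ij (scalar)
            A = math.copysign(1.0, dc - dt)              # Eq. (1)
            B = (math.copysign(1.0, c * t) + 1.0) / 2.0  # Eq. (2)
            bx, by = uca[0] + ucb[0], uca[1] + ucb[1]    # pair bisector
            norm = math.hypot(bx, by) or 1.0
            w = 1.0 + B * (A - 1.0)                      # Eq. (3) weight
            hx += w * bx / norm                          # Eq. (4)
            hy += w * by / norm
    return hx, hy

print(enav_heading({"green": 0.2, "pink": 1.9, "blue": -2.3},
                   {"green": 0.9, "pink": 2.4, "blue": -1.7}))
```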
There is a relationship between ENav, egospheres, and memory models. The navigation system is composed of local and global paradigms. Local navigation uses only the information within the immediate sensing region of the robot, and is reactive. Global navigation, on the other hand, is deliberative and uses information beyond the robot's sensory horizon. Dividing the system this way implicitly organizes the robot's memory into long-term and short-term memory structures. The SES provides short-term memory (STM) for reactive navigation, while long-term memory (LTM) contains global layout information and supports global navigation. In ENav, landmarks assumed to be around the robot are represented on the LES, which is an LTM structure. Within the ENav scheme, there is also a task-specific memory module, which is task-dependent and holds descriptors of via regions that point to transition points for navigation [5].

5 Knowledge Sharing

In this work, knowledge-sharing techniques for ENav are studied. Knowledge sharing is important when robot-robot and human-robot teaming is considered, since some tasks require cooperation between the members of the team. We address both cases, where perceptual knowledge is shared in robot-robot and in human-robot teams.

5.1 Knowledge Sharing in a Robot-Robot Team

Knowledge sharing in mobile robots is an important aspect of mobile robot research. In a multi-robot team, some tasks require cooperation between robots, which may take the form of knowledge sharing. The content of the knowledge shared depends on the task that the robots must perform. However, the nature of the knowledge that is shared is also important. In a heterogeneous robot team, sharing one robot's raw sensory data would not make sense to the other robots. For this reason, representation plays an important role in knowledge sharing in a robot team. The shared knowledge should be in a form that can be understood by all the robots in the team; thus it should be at a higher level of abstraction than raw sensory data. The LES is a very suitable representation to share among robots, because it is a natural representation of the objects in the robot's environment. As previously mentioned, there may be various sources for the LES: it can be extracted from an a priori map, it can be provided to the robot by a human user, or it can be provided to the robot by another robot in the team.

In this work, effort was made to have two heterogeneous robots share LES information, enabling one robot to navigate by ENav. ENav requires that two representations be present in order for the robot to navigate. In this case, the navigating robot creates its own SES using its perception capabilities and receives a target LES from the other robot in the team, which has knowledge of the target location. For example, the other robot may have been at the target location previously and can use the SES it formed at that location as a target LES for the navigating robot. In general, sharing this knowledge in a robot team is analogous to knowledge sharing among humans.

Consider the case where Person A needs to meet with Person B at a location that Person A has never visited before. In this case, Person A may call Person B from his cell phone and ask for directions. Knowing the starting region where Person A is located, Person B can give qualitative descriptions to direct Person A to his region. Using these descriptions, Person A can find Person B by using his own senses along the way, comparing what he sees with what Person B described, and deciding which way to go. In this analogy, the interaction between Persons A and B is knowledge sharing. In the robotics sense, Robot A can find Robot B if Robot B is in a region to which Robot A does not know how to navigate. Knowing which region Robot A is in, Robot B may provide qualitative descriptions that will help Robot A navigate to its region. These qualitative descriptions are LESs that can be generated from Robot B's long-term memory. Robot A can then use ENav to reach Robot B. Robot A reaches Robot B by using its own sensors to create its own perceptual knowledge, and uses the LES knowledge transferred from Robot B to decide on its heading.

It must be noted that Robot B does not need to know the precise position of Robot A, since no heading calculation is made that explicitly depends on the relative positions of the robots. The ENav algorithm generates the heading directions for Robot A.

This knowledge-sharing method can be further analyzed using the three memory structures used in ENav. Short-term memory holds robocentric topological regions (the SES), long-term memory holds global layout information, and task-specific memory holds robocentric topological regions, in terms of LES representations, that indicate transition points, i.e., a sequence of waypoints, for navigation. In this sense, what is really shared between the robots, for one robot to navigate by using the knowledge of the other, is task-specific memory.

Figure 5 illustrates how this method works (a sketch of the exchange is given below). First, it has to be kept in mind that a robot must have a goal state and the current state of its environment to use ENav; that is, the robot must have an LES and an SES to start navigating with ENav. In the case illustrated in Figure 5, R_A (actually a Pioneer 2 AT robot called Skeeter) does not know how to go to the region where R_B (an RWI ATRV-Jr. robot called Scooter) is. R_B has knowledge of the environment and can generate a sequence of waypoints, described by LESs, to help R_A navigate using ENav. To start ENav, Skeeter is able to use its own vision sensors to create an SES. However, since it does not know the environment, it cannot create a target LES, so it asks Scooter for an LES. Scooter sends a waypoint LES that describes the location L1. Upon receiving this LES, Skeeter internalizes it for use in ENav and starts navigating. Skeeter computes its own heading and decides its direction by itself. After Skeeter decides that it has arrived at the LES location, it asks for the next LES. At this point, Scooter sends the second waypoint LES, which describes location L2. Once again, Skeeter navigates, creating a chain of SESs and comparing them to the LES for L2. When Skeeter decides that it has arrived at the second LES, it asks for another LES. This continues until Scooter sends the target LES to Skeeter and notifies Skeeter that this is the final target. Skeeter can ask for as many LESs as are necessary to reach the target.

For the LES sharing method to work, the robots need not see each other, and they need not have the same absolute heading, because no heading computation is done relative to the robot positions. In addition, the robots can be positioned in the environment without any constraints, except that they must be able to sense at least one landmark.

Figure 5. Illustration of the LES sharing method
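The request-and-confirm exchange between Skeeter and Scooter can be sketched as a simple protocol in which Scooter serves waypoint LESs out of its task-specific memory. The names below (`WaypointProvider`, `next_les`) are hypothetical, the LESs are stand-in dictionaries, and `enav` is a stub for the navigation loop of Section 4; the robots' actual communication layer is not described at this level in the paper.

```python
class WaypointProvider:
    """Scooter's role: serves waypoint LESs from task-specific memory."""
    def __init__(self, waypoint_lesses):
        self._lesses = list(waypoint_lesses)  # LESs for L1, L2, ..., target
        self._i = 0

    def next_les(self):
        """Return (les, is_final_target), then advance to the next one."""
        les = self._lesses[self._i]
        is_final = self._i == len(self._lesses) - 1
        if not is_final:
            self._i += 1
        return les, is_final

def enav(les):
    """Stub for the ENav loop of Section 4: perceive an SES, compare it
    to this LES, and move until the error drops below threshold."""
    print("navigating until the current SES matches", les)

def navigate_with_shared_lesses(provider):
    """Skeeter's role: internalize each received LES, run ENav on it,
    and ask for another on arrival, until told this was the target."""
    while True:
        les, is_final = provider.next_les()   # ask Scooter for an LES
        enav(les)                             # drive until arrival
        if is_final:                          # Scooter: final target
            break

scooter = WaypointProvider([
    {"green": 0.3, "pink": 1.1},    # waypoint L1
    {"pink": -0.4, "blue": 2.0},    # waypoint L2
    {"blue": 0.8, "yellow": -1.6},  # final target
])
navigate_with_shared_lesses(scooter)
```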
5.2 Knowledge Sharing in a Human-Robot Team

As with knowledge sharing in a robot-robot team, representation is also important for knowledge sharing in a human-robot team. It is potentially even more difficult to have a human and a robot understand each other, because many robots use knowledge representations that are very unfamiliar to humans. Since humans prefer angular information over distance information while learning places and localizing themselves [4], an egocentric representation is a natural representation for humans. Our robots are also capable of representing their environmental knowledge in an egocentric structure. This suggested that the robot could share its egocentric representation with the human user.

As pointed out in Section 4, ENav is highly dependent on perception, since the goal is to move the robot to a location where the perception of the robot closely matches a target representation defined by an LES. However, the robot's perception of the objects around it is not perfect. Sometimes the robot does not detect the landmarks around it correctly, potentially affecting the ENav algorithm's heading computation and, as a result, the performance of the navigation system. In addition to trying to solve the problems in the robot's perceptual systems, we allowed for the inclusion of the human user in the decision loop of the robot during SES creation.

This way, the system was able to take advantage of the superior visual perception capabilities of the human team member.

5.2.1 Perception System and Perception Problems

Before going into the details of the perception problems and this method, it is useful to describe the perception system we used. We developed a visual attention, or saliency, algorithm based on defining the concept of bright colors in the Hue, Saturation, and Value (HSV) color space. This method is promising for a limited number of landmarks. Since we use color to detect landmarks, both single-color and multi-color landmarks can be used. However, using single-color landmarks has some disadvantages, such as limiting the number of unique landmarks to the number of colors that can robustly be detected in the environment. Moreover, the environment may contain colors that are more salient than the objects used as landmarks [5]. We therefore adopted a multi-color landmark structure in our perception system. In this structure, two colors and a unit vector pointing from the first color to the second color define a pair. Figure 6 shows the basic color pair structure used in forming multi-color landmarks, and Figure 7 shows the sample color patterns used [5].

Figure 6. Basic color pair structure used in forming multi-color landmarks: green_pink_1,0

Figure 7. Sample color patterns used in multi-color landmarks

Although this visual attention system has the potential to reduce the occurrence of perception problems associated with dynamic lighting conditions, perception problems still occur. The problems with visual perception can be categorized into two groups: false negatives and false positives. A false negative means that the robot cannot detect a landmark that is in the scene. This situation occurs because of several factors. First, since the perception system depends on color, changing lighting conditions can affect detection performance. Second, the presence of colors in the environment that appear to be more salient than their counterparts in the multi-color landmark can prevent correct detection of landmarks. A false positive is the case where a non-existent landmark is detected in the environment, or a different landmark is detected instead of the one in the scene. If single-color landmarks are used, the probability of false positives is higher than with multi-color landmarks, because the perception system detects the most salient color in the environment as a landmark. A three-color landmark reduces the occurrence of false positives since, for something to be detected as a landmark, three colors must be detected and the color pairs must agree with the structure of the landmark.
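The bright-color test and the color-pair structure can be illustrated with a small sketch. The saturation and value thresholds are assumptions (the paper gives no numeric definition of "bright"), and `pair_matches` is a hypothetical check of one Figure 6 style pair against a template such as green_pink_1,0.

```python
import colorsys, math

MIN_SATURATION, MIN_VALUE = 0.6, 0.5   # assumed "bright color" cutoffs

def is_bright(rgb):
    """Salient 'bright color' test in Hue/Saturation/Value space."""
    _, s, v = colorsys.rgb_to_hsv(*rgb)     # rgb components in [0, 1]
    return s >= MIN_SATURATION and v >= MIN_VALUE

def pair_matches(blob_a, blob_b, template):
    """Check one color pair of a multi-color landmark: two colors and a
    unit vector pointing from the first color to the second (Figure 6).

    blob_a, blob_b: (color_name, (x, y)) detected bright-color regions.
    template: (color_a, color_b, (ux, uy)), e.g. ("green","pink",(1,0))."""
    (ca, pa), (cb, pb) = blob_a, blob_b
    if (ca, cb) != (template[0], template[1]):
        return False                        # wrong colors for this pair
    dx, dy = pb[0] - pa[0], pb[1] - pa[1]
    norm = math.hypot(dx, dy) or 1.0
    ux, uy = template[2]
    # direction from first to second color must agree with the template
    return (dx / norm) * ux + (dy / norm) * uy > 0.9

print(is_bright((0.1, 0.9, 0.2)))                        # True
print(pair_matches(("green", (10, 20)), ("pink", (30, 21)),
                   ("green", "pink", (1, 0))))           # True
```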
5.2.2 Perception Correction by Human Intervention

In order to address the visual perception problems originating from the robot's relatively poor visual perception capability, we included the human user in the loop during SES creation. We developed a visual perception correction system and an interface that enables the human user to correct the perception mistakes made by the robot during the SES creation process. In order to navigate using ENav, the robot creates an SES and compares it to the target LES. In the perception correction method, the robot creates an SES and, before using this SES for navigation, shares it with the user for confirmation. The robot's SES is presented to the user in the perception correction interface, where each camera frame is positioned as shown in Figure 8. If anything is detected in any frame, an icon of the detected landmark is placed above the corresponding frame at the position where the landmark was detected. This is illustrated in Figure 9.

Figure 8. Position of camera frames on the interface

Figure 9. Perception correction interface after SES creation

As can be seen in Figure 9, the robot was able to detect two landmarks, in frames 1 and 7. These two landmarks were posted on the SES as shown in Figure 10. From the perception correction interface, the human user can see that the robot was not able to detect some of the landmarks that were visible to it. Using the functionality provided by the interface, the human user corrects the current SES of the robot by adding two of the landmarks seen in frame 2.

To do this, the user opens the Object List by clicking the button on the interface and selects the landmark that is to be added. When selected, the landmark icon appears in the upper left corner of the screen. The user then drags and drops the icon onto the landmark in the frame. After this operation, the landmark icon is placed on the frame, indicating a detected landmark. This illustrates the correction of false negatives using this method. Figure 11 shows the Object List, the robot's SES, and the perception correction interface after two corrections made by the human user.

Figure 10. 2-D SES view of the robot

The interface can also be used to correct a false positive. In this case, the falsely detected landmark icons above the associated frames can be dragged and dropped onto the trash-can icon on the interface. This change is immediately reflected on the robot's SES, and the falsely detected landmarks are removed from the SES. After correction by the human user, the robot uses the new SES for navigation.

Figure 11. Object List, the robot's corrected SES, and the perception correction interface after the correction
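These interface actions map naturally onto edits of the SES data structure. Reusing the hypothetical `SensoryEgoSphere` sketch from Section 3, the two corrections might look like this; `add_landmark` and `remove_landmark` are illustrative names, not the actual interface code.

```python
def add_landmark(ses, azimuth_deg, name):
    """Drag-and-drop onto a frame: correct a false negative by posting
    the selected landmark at the vertex nearest that direction."""
    ses.post(azimuth_deg, "vision_landmark", name)

def remove_landmark(ses, azimuth_deg):
    """Drop on the trash can: correct a false positive; the change is
    immediately reflected on the robot's SES."""
    ses.nearest_vertex(azimuth_deg).tags.pop("vision_landmark", None)

ses = SensoryEgoSphere()                  # from the Section 3 sketch
add_landmark(ses, 75.0, "green_pink")     # user adds a missed landmark
remove_landmark(ses, 210.0)               # user deletes a false detection
```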
6 Future Work

In future work, the LES sharing method can be extended to LES and SES sharing, where the robot receiving the LES sends its SES back to the other robot. In this way, the robot sending the LES can localize the other robot on its map and may forward it to different targets by sending appropriate LESs. In addition, a backtracking task can be assigned to the robot receiving the sequence of LESs: the robot can perform this task by learning the LES sequence that it received and using it to find its way back to its starting point by running through the sequence backwards.

The human-robot interaction aspect of knowledge sharing in a human-robot team can also be improved. In our ongoing work, the robot is more involved in the interaction process. This work can lead to a mixed-initiative vision system in which the human and robot carry out a more intelligent, dialog-based interaction. In our approach, the robot can be trained to recognize a large set of features extracted from regions in an image. By constraining the features to have clearly identifiable semantic meaning, the robot and human have a shared semantic context for discussing these features.

7 Conclusion

In this paper, two knowledge-sharing methods based on ENav were presented. ENav was defined as a basic navigational behavior based only on egocentric representations. Egocentric representations are crucial for both the navigation and the knowledge-sharing methods that we developed. The egocentric representations, the SES and the LES, represent the objects around the robot using only their angular distributions relative to the robot. The SES is used to represent the robot's perception of its environment at its current state, while the LES represents the expected state of the world at a target location. As a qualitative navigation method, ENav, without explicitly requiring range information, provides a basic navigation method in which a heading is computed by comparing the two egocentric representations.

Two knowledge-sharing methods were described. In the first method, it was shown that LESs can be shared in a robot-robot team, where one robot navigates using LESs provided by another robot in the team while the navigating robot uses its own visual perception sensors to create the required SESs. In the second method, it was shown that the robot could share its SES with the human user in order for the human to check and correct the SES created by the robot. Using the proposed perception correction interface, the user was able to monitor the SES creation of the robot and correct false negatives and false positives. After correction by the user, the robot used the updated SES and navigated using ENav.

In conclusion, the success of the two knowledge-sharing methods we demonstrated shows that it is advantageous to use egocentric representations, and sensory features that are understood by both humans and robots, when knowledge sharing in robot-robot and human-robot teams is concerned.

Acknowledgments

This research has been partially funded under a grant from DARPA (Grant #DASG) and under Grant #DASG.

References

[1] J.S. Albus, "Outline for a Theory of Intelligence," IEEE Transactions on Systems, Man and Cybernetics, vol. 21, 1991.
[2] B.A. Cartwright and T.S. Collett, "Landmark Learning in Bees," Journal of Comparative Physiology, vol. 151, 1983.
[3] D. Dai and D.T. Lawton, "Range-free qualitative navigation," Proc. of the IEEE Int. Conf. on Robotics and Automation, Atlanta, GA, 1993.
[4] S. Healy, Spatial Representation in Animals, Oxford University Press, pp. 6-8, 1998.
[5] K. Kawamura, A.B. Koku, D.M. Wilkes, R.A. Peters II, and A. Sekmen, "Toward Egocentric Navigation," Int. J. of Robotics and Automation, vol. 17, no. 4, 2002.
[6] A.B. Koku, Egocentric Navigation and Its Applications, Ph.D. Dissertation, Vanderbilt University, May 2003.
[7] T.S. Levitt and D.T. Lawton, "Qualitative navigation for mobile robots," Artificial Intelligence, vol. 44, 1990.
[8] B. Pinette, "Qualitative Homing," Proc. of the IEEE International Symposium on Intelligent Control, Alexandria, VA.
[9] R.A. Peters II, K.E. Hambuchen, and K. Kawamura, "The Sensory EgoSphere as a Short-Term Memory for Humanoids," Proc. of the IEEE-RAS International Conference on Humanoid Robots, Waseda University, Tokyo, Japan, 2001.
[10] M.J. Scholl, "Landmarks, places, environments: multiple mind-brain systems for spatial orientation," Geoforum (Special Issue on Geography, Environment and Cognition), vol. 23, no. 2, 1992.
[11] K.T. Sutherland and W.B. Thompson, "Inexact navigation," Proc. of the IEEE Int. Conf. on Robotics and Automation, Atlanta, GA, pp. 1-7, 1993.


More information

ABSTRACT 1. INTRODUCTION

ABSTRACT 1. INTRODUCTION THE APPLICATION OF SOFTWARE DEFINED RADIO IN A COOPERATIVE WIRELESS NETWORK Jesper M. Kristensen (Aalborg University, Center for Teleinfrastructure, Aalborg, Denmark; jmk@kom.aau.dk); Frank H.P. Fitzek

More information

Multi-Robot Systems, Part II

Multi-Robot Systems, Part II Multi-Robot Systems, Part II October 31, 2002 Class Meeting 20 A team effort is a lot of people doing what I say. -- Michael Winner. Objectives Multi-Robot Systems, Part II Overview (con t.) Multi-Robot

More information

Head-Movement Evaluation for First-Person Games

Head-Movement Evaluation for First-Person Games Head-Movement Evaluation for First-Person Games Paulo G. de Barros Computer Science Department Worcester Polytechnic Institute 100 Institute Road. Worcester, MA 01609 USA pgb@wpi.edu Robert W. Lindeman

More information

Dynamic Robot Formations Using Directional Visual Perception. approaches for robot formations in order to outline

Dynamic Robot Formations Using Directional Visual Perception. approaches for robot formations in order to outline Dynamic Robot Formations Using Directional Visual Perception Franοcois Michaud 1, Dominic Létourneau 1, Matthieu Guilbert 1, Jean-Marc Valin 1 1 Université de Sherbrooke, Sherbrooke (Québec Canada), laborius@gel.usherb.ca

More information

For a long time I limited myself to one color as a form of discipline. Pablo Picasso. Color Image Processing

For a long time I limited myself to one color as a form of discipline. Pablo Picasso. Color Image Processing For a long time I limited myself to one color as a form of discipline. Pablo Picasso Color Image Processing 1 Preview Motive - Color is a powerful descriptor that often simplifies object identification

More information

Image Processing by Bilateral Filtering Method

Image Processing by Bilateral Filtering Method ABHIYANTRIKI An International Journal of Engineering & Technology (A Peer Reviewed & Indexed Journal) Vol. 3, No. 4 (April, 2016) http://www.aijet.in/ eissn: 2394-627X Image Processing by Bilateral Image

More information

Performance Analysis of Color Components in Histogram-Based Image Retrieval

Performance Analysis of Color Components in Histogram-Based Image Retrieval Te-Wei Chiang Department of Accounting Information Systems Chihlee Institute of Technology ctw@mail.chihlee.edu.tw Performance Analysis of s in Histogram-Based Image Retrieval Tienwei Tsai Department of

More information

The human visual system

The human visual system The human visual system Vision and hearing are the two most important means by which humans perceive the outside world. 1 Low-level vision Light is the electromagnetic radiation that stimulates our visual

More information

INFORMATION AND COMMUNICATION TECHNOLOGIES IMPROVING EFFICIENCIES WAYFINDING SWARM CREATURES EXPLORING THE 3D DYNAMIC VIRTUAL WORLDS

INFORMATION AND COMMUNICATION TECHNOLOGIES IMPROVING EFFICIENCIES WAYFINDING SWARM CREATURES EXPLORING THE 3D DYNAMIC VIRTUAL WORLDS INFORMATION AND COMMUNICATION TECHNOLOGIES IMPROVING EFFICIENCIES Refereed Paper WAYFINDING SWARM CREATURES EXPLORING THE 3D DYNAMIC VIRTUAL WORLDS University of Sydney, Australia jyoo6711@arch.usyd.edu.au

More information

A Reactive Collision Avoidance Approach for Mobile Robot in Dynamic Environments

A Reactive Collision Avoidance Approach for Mobile Robot in Dynamic Environments A Reactive Collision Avoidance Approach for Mobile Robot in Dynamic Environments Tang S. H. and C. K. Ang Universiti Putra Malaysia (UPM), Malaysia Email: saihong@eng.upm.edu.my, ack_kit@hotmail.com D.

More information

DECENTRALIZED CONTROL OF STRUCTURAL ACOUSTIC RADIATION

DECENTRALIZED CONTROL OF STRUCTURAL ACOUSTIC RADIATION DECENTRALIZED CONTROL OF STRUCTURAL ACOUSTIC RADIATION Kenneth D. Frampton, PhD., Vanderbilt University 24 Highland Avenue Nashville, TN 37212 (615) 322-2778 (615) 343-6687 Fax ken.frampton@vanderbilt.edu

More information

Dipartimento di Elettronica Informazione e Bioingegneria Robotics

Dipartimento di Elettronica Informazione e Bioingegneria Robotics Dipartimento di Elettronica Informazione e Bioingegneria Robotics Behavioral robotics @ 2014 Behaviorism behave is what organisms do Behaviorism is built on this assumption, and its goal is to promote

More information

Reactive Planning with Evolutionary Computation

Reactive Planning with Evolutionary Computation Reactive Planning with Evolutionary Computation Chaiwat Jassadapakorn and Prabhas Chongstitvatana Intelligent System Laboratory, Department of Computer Engineering Chulalongkorn University, Bangkok 10330,

More information

Exploration of Unknown Environments Using a Compass, Topological Map and Neural Network

Exploration of Unknown Environments Using a Compass, Topological Map and Neural Network Exploration of Unknown Environments Using a Compass, Topological Map and Neural Network Tom Duckett and Ulrich Nehmzow Department of Computer Science University of Manchester Manchester M13 9PL United

More information

1.6 Beam Wander vs. Image Jitter

1.6 Beam Wander vs. Image Jitter 8 Chapter 1 1.6 Beam Wander vs. Image Jitter It is common at this point to look at beam wander and image jitter and ask what differentiates them. Consider a cooperative optical communication system that

More information

Gilbert Peterson and Diane J. Cook University of Texas at Arlington Box 19015, Arlington, TX

Gilbert Peterson and Diane J. Cook University of Texas at Arlington Box 19015, Arlington, TX DFA Learning of Opponent Strategies Gilbert Peterson and Diane J. Cook University of Texas at Arlington Box 19015, Arlington, TX 76019-0015 Email: {gpeterso,cook}@cse.uta.edu Abstract This work studies

More information

Key-Words: - Fuzzy Behaviour Controls, Multiple Target Tracking, Obstacle Avoidance, Ultrasonic Range Finders

Key-Words: - Fuzzy Behaviour Controls, Multiple Target Tracking, Obstacle Avoidance, Ultrasonic Range Finders Fuzzy Behaviour Based Navigation of a Mobile Robot for Tracking Multiple Targets in an Unstructured Environment NASIR RAHMAN, ALI RAZA JAFRI, M. USMAN KEERIO School of Mechatronics Engineering Beijing

More information

Spatial Color Indexing using ACC Algorithm

Spatial Color Indexing using ACC Algorithm Spatial Color Indexing using ACC Algorithm Anucha Tungkasthan aimdala@hotmail.com Sarayut Intarasema Darkman502@hotmail.com Wichian Premchaiswadi wichian@siam.edu Abstract This paper presents a fast and

More information

Chapter 3 Part 2 Color image processing

Chapter 3 Part 2 Color image processing Chapter 3 Part 2 Color image processing Motivation Color fundamentals Color models Pseudocolor image processing Full-color image processing: Component-wise Vector-based Recent and current work Spring 2002

More information

Estimation of Folding Operations Using Silhouette Model

Estimation of Folding Operations Using Silhouette Model Estimation of Folding Operations Using Silhouette Model Yasuhiro Kinoshita Toyohide Watanabe Abstract In order to recognize the state of origami, there are only techniques which use special devices or

More information

12 Color Models and Color Applications. Chapter 12. Color Models and Color Applications. Department of Computer Science and Engineering 12-1

12 Color Models and Color Applications. Chapter 12. Color Models and Color Applications. Department of Computer Science and Engineering 12-1 Chapter 12 Color Models and Color Applications 12-1 12.1 Overview Color plays a significant role in achieving realistic computer graphic renderings. This chapter describes the quantitative aspects of color,

More information

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception

More information

Autonomous Localization

Autonomous Localization Autonomous Localization Jennifer Zheng, Maya Kothare-Arora I. Abstract This paper presents an autonomous localization service for the Building-Wide Intelligence segbots at the University of Texas at Austin.

More information