Mobile robot interception using human navigational principles: Comparison of active versus passive tracking algorithms


Auton Robot (2006) 21:43–54

Mobile robot interception using human navigational principles: Comparison of active versus passive tracking algorithms

Thomas G. Sugar · Michael K. McBeath · Anthony Suluh · Keshav Mundhra

Received: 12 October 2004 / Revised: 16 January 2006 / Accepted: 10 February 2006 / Published online: 22 June 2006
© Springer Science + Business Media, LLC 2006

Abstract We examined human navigational principles for intercepting a projected object and tested their application in the design of navigational algorithms for mobile robots. These perceptual principles utilize a viewer-based geometry that allows the robot to approach the target without need of time-consuming calculations to determine the world coordinates of either itself or the target. Human research supports the use of an Optical Acceleration Cancellation (OAC) strategy to achieve interception. Here, the fielder selects a running path that nulls out the acceleration of the retinal image of an approaching ball, and maintains an image that rises at a constant rate throughout the task. We compare two robotic control algorithms for implementing the OAC strategy in cases in which the target remains in the sagittal plane headed directly toward the robot (which only moves forward or backward). In the passive algorithm, the robot keeps the orientation of the camera constant, and the image of the ball rises at a constant rate. In the active algorithm, the robot maintains a camera fixation that is centered on the image of the ball and keeps the tangent of the camera angle rising at a constant rate. Performance was superior with the active algorithm in both computer simulations and trials with actual mobile robots. The performance advantage is principally due to the higher gain and effectively wider viewing angle when the camera remains centered on the ball image.
The findings confirm the viability and robustness of human perceptual principles in the design of mobile robot algorithms for tasks like interception.

Keywords Mobile robot navigation · Visual servoing · Perceptual principles

T. G. Sugar · M. K. McBeath · A. Suluh · K. Mundhra
Arizona State University, Tempe, AZ

Introduction

Perceptual models that specify how to navigate to catch fly balls are formulated around coordinate systems that are either viewer-based or world-based (McLeod and Dienes, 1993; McBeath et al., 1995; Aboufadel, 1996). Models with viewer-based coordinate systems utilize a geometry that is relative to the vantage of the moving viewer, and only allow information that is perceptually available from that perspective. Such models can use simple control algorithms that are consistent with the behavior of humans and other locomotive animals. In contrast, models with world-based coordinate systems utilize a geometry that is independent of the vantage of the viewer, and typically must rely upon knowledge of physics. Such models either predict the target trajectory from well-defined initial conditions and known environmental factors, or triangulate the ongoing world coordinates of the target and pursuer using multiple external cameras. This approach to modeling also typically assumes knowledge of maps and landmarks of the world, information that often is not perceptually available to a viewer and may not be reliable or possible to gather. Moreover, studies have shown that human fielders are relatively unaware of the destination of the ball during interception tasks (McBeath et al., 1995; Oudejans et al., 1996; Shaffer and McBeath, 2002). The viewer-based models appear to be more appropriate for describing action tasks in a limited information context (Milner and Goodale, 1995). Over the last decade, perceptual psychologists have proposed several viewer-based catching algorithms.
One strategy, called the Optical Acceleration Cancellation (OAC) model, prescribes how a fielder can intercept a fly ball by selecting a running path that maintains optical acceleration cancellation of the image of the approaching ball, which results in a constant optical rate in a vertical image plane. In this

strategy, fielders run along a path that remains aligned with the ball and maintains a constant change in the tangent of the vertical optical angle, α (see Fig. 1). This model focuses on objects hit directly at the fielder. Thus, in order to catch the ball, a fielder simply needs to select a running path that maintains the change in tan α equal to a constant (d(tan α)/dt = c) (Chapman, 1968; McLeod and Dienes, 1993, 1996; Oudejans et al., 1996). When a fielder uses the OAC strategy, if the rate of change of tan(α) increases, it means that he is running forward too fast (or not fast enough backwards) and the ball will fall behind him. Similarly, if the rate of change of tan(α) decreases, it means he is running forward too slowly (or too quickly backwards) and the ball will fall in front of him. The rule to catch the ball is simple. The fielder can guarantee interception by selecting a running speed that maintains a constant d(tan α)/dt, or an optical acceleration cancellation (OAC) of the image of the ball.

Fig. 1 The Optical Acceleration Cancellation (OAC) strategy. A running fielder (headed from the right) approaches a projected ball (headed from the left) in equal temporal units (t1–t7). The OAC strategy directs the fielder to select a running velocity that cancels the acceleration of the image of the ball in a projection plane that moves with the fielder. In the diagram this results in a projection point moving upward along the tilted dotted line shown. This happens because, as the fielder moves forward, the distance between the fielder and the projection plane remains fixed. The fielder approaches the ball along a path such that the height of the ball in the projection plane increases by a constant amount in each fixed interval of time (d(tan(α))/dt = constant)
If d(tan α)/dt equals a constant and the projection plane remains a fixed distance from the fielder, then by definition the image of the ball will rise at a constant rate. Therefore, the OAC control algorithm is very simple. If the velocity of the ball in the image plane starts to change, the fielder acts accordingly by moving forward or backward to maintain the image of the ball moving upward at a constant rate (McBeath et al., 2001; Sugar and McBeath, 2001; Suluh et al., 2001; Mori and Miyazaki, 2002; Mundhra et al., 2002; Suluh et al., 2002; Mori and Miyazaki, 2003; Rozendaal and van Soest, 2003; Miyazaki and Mori, 2004). In 1968, Chapman demonstrated that if a fielder uses OAC to approach a ball that travels through a perfect parabola, then there is an ideal solution with a straight-line, constant-velocity running path (Chapman, 1968). In 1985, Brancazio questioned the utility of Chapman's proposed OAC algorithm because it ignores air resistance, which substantially alters the trajectory of most projectiles (Brancazio, 1985). Thus the acceleration of tan(α) of a ball affected by drag is not precisely zero when the ball is on a collision course with a stationary observer. Brancazio suggested that when humans intercept a ball, they maintain a constant increment of the optical angle, α (Brancazio, 1985). But psychologists (McLeod and Dienes, 1993, 1996) confirmed that maintaining OAC will lead to interception even when the ball trajectory is air-resistance shortened; this just leads to a non-constant running speed. Moreover, they also confirmed that human fielder behavior is consistent with utilization of the OAC strategy. In 1995, McBeath et al. confirmed that fielders maintain OAC even when balls are headed off to the side (McBeath et al., 1995). In 1999, Oudejans et al. (1999) tested whether the control mechanism for canceling acceleration of the tangent of the optical angle α is achieved by maintaining constant speed of the

ball image along the retina, or by maintaining fixation on the ball and a constant rate of change in the tangent of the fixation angle. The former, maintaining constant speed of the ball image along the retina, requires that the perceptual system null out any changes in retinal angle and keep track of ball position relative to the background scenery. In contrast, maintaining fixation on the ball and a constant rate of change requires control and monitoring of extra-retinal mechanisms, such as the vestibular and proprioceptive systems (eye and head movements). Their findings supported the latter model, that fielders move their head and eyes and maintain fixation on the ball with constantly changing eye angle. They also challenged the need for background scenery by confirming that fielders reliably intercept luminous fly balls in the dark in the absence of visual background, under both binocular and monocular viewing conditions. They concluded that eye and head movements are used to determine the optical angle of the falling object. In the robotics literature, Borgstadt and Ferrier (2000) tested several different algorithms to intercept a fly ball. In the first algorithm, the robot maintains its velocity proportional to the magnitude of d²(tan α)/dt², where α is the optical gaze angle. This strategy has difficulties when the ball is too close to the fielder and when the gaze angle approaches its maximum value, resulting in unstable behavior for the robot. The second algorithm sets the fielder's acceleration equal to a constant value. In this strategy, the robot either increases or decreases its velocity depending on the sign of d²(tan α)/dt². They found that this bang-bang strategy works better. However, implementing this strategy by collecting acceleration data from the image is very noisy, and the bang-bang approach results in non-smooth robotic trajectories.
Rozendaal and van Soest analyzed the OAC algorithm to determine when it is valid and argue that the current model will always generate a path to intercept the object for balls starting at the horizon and hit towards the fielder (Rozendaal and van Soest, 2003). They argue that other degenerate cases, such as balls falling down or balls starting below the horizon, cause problems for the current OAC model. In our research, it has been found that humans do have difficulty intercepting falling objects and balls launched away from them, consistent with the predictions of Rozendaal and van Soest (McBeath et al., 2002). Our work has shown that different OAC models are used for falling balls and for intercepting balls rolling on the ground (Mundhra et al., 2002; Mundhra et al., 2003). Mori and Miyazaki have developed a robot that can intercept objects using a modified model, Gaining Angle of Gaze (Mori and Miyazaki, 2002, 2003; Miyazaki and Mori, 2004). This new research focuses on the human perceptual information, and defines viewer-based control variables that can be used in visual servoing tasks. Corrective action is taken based on simple geometrical principles. These models are interesting because the control action is taken in the visual plane, and the controller is inherently different in that the velocity and acceleration of the pursuer need not, and do not, approach zero as the destination is reached. In the next section, we define a passive and an active algorithm for implementing the OAC strategy. We then simulate and experimentally demonstrate smooth robotic controllers based on this human perceptual strategy.

Modeling and simulation

Two algorithms for implementing the OAC strategy are developed and compared in our research, a passive one and an active one. In the passive case, the camera on the robot is kept at a stationary angle relative to the world, whereas in the active case, the camera tilts to maintain a constant rate of change of the tangent of the gaze angle.
The passive and the active OAC algorithms are first compared using computer simulation of robot behavior for different ball trajectories, both with and without drag.

Passive OAC model

In the passive model, the image of the ball on the camera must rise according to the following perceptual equation,

d/dt(tan α) = d/dt( y_b / (x_f − x_b) ) = C   (1)

where α is the vertical optical angle of the ball from the perspective of the fielder, (x_f, 0) and (x_b, y_b) are the world coordinates of the fielder and ball, and C is a constant to be determined at the beginning of the task that remains fixed during the entire task. From the OAC perceptual model, it can be shown that:

tan α = Ct   (2)

x_f = x_b + y_b/(Ct)   (3)

where

C = ẏ_b(0)/D,  D = x_f(0) − x_b(0)   (4)

D is the initial distance between the fielder and the ball and remains fixed during the task. Equation (4) is the same as Eq. (5) in Rozendaal and van Soest's research (Rozendaal and van Soest, 2003) if it is assumed that the initial height of the ball is zero. From Eq. (3), whenever the height of the

ball is zero during the task, the fielder's position is forced to converge to the ball's position. For this algorithm, the fielder or robot moves so that the image of the ball rises at a constant rate in a stationary image plane in order to achieve interception (see Figs. 2 and 3). The camera or head is fixed and does not tilt, and the image of the ball rises upward at a constant rate. The passive algorithm is implemented by using a stationary camera, and adjusting fielder position so that the image of the ball rises according to the perceptual control model d(tan α)/dt = C. Because the focal distance of the camera is fixed, this constrains the distance to the projection plane, denoted by D, to be fixed, so the vertical image plane effectively moves with the fielder. In the pinhole camera instantiation of the passive OAC algorithm, the height of the ball on the image plane is given by Y_image. Since the focal distance is fixed, the image plane can be modeled to move forward or backward as the fielder moves forward or backward. Based on similar triangles with a fixed focal distance, d, the ball image will rise at a constant rate on the CCD image plane throughout the entire interception task (see Fig. 3). In the projection plane, the ball height equals tan(α)D, which simplifies to CDt, because tan(α) equals a constant times time. The rate of change in the height of the ball in the projection plane will equal CD = ẏ_b(0). The robot estimates the initial optical velocity after sampling for 4 frames, or 0.27 s. The variable v_Projection is the robot's estimate of CD, the initial upward velocity of the projected image of the ball. By recasting the problem in a viewer-based robotics perspective, the perceptually available information in real world coordinates is converted to viewer-based image coordinates. Figure 4 (like Fig. 1) graphically illustrates Eq.
(1) and how the projected image of the ball keeps rising at a constant rate given by v_Projection, even while the ball physically descends near the end.

Passive robot controller

Our passive control law for the robot contains two terms. The first term, B, is the feed-forward velocity, and the second term is the error between the actual height of the ball and the desired height of the ball in the projection plane, with a gain factor, K_p. The desired height of the ball in the projection plane should increase at a constant rate equal to v_Projection. The feed-forward term is determined by taking the derivative of Eq. (1). Because B cannot be determined experimentally, it is set to zero in the simulations, but was included to allow for future simulations that can more closely mimic biological interception behavior.

ẋ_f = B + K_p( (y_b/(x_f − x_b)) D − v_Projection t )   (5)

B = ẏ_b(t_f) D/(v_Projection t_f) + ẋ_b(t_f)   (6)

Fig. 2 Shown is the OAC strategy in which the image height of the ball, Y_projection, varies as a function of robot and ball position. The projection height is defined by a directed line through the points (x_f, 0) and (x_b, y_b) onto a projection plane a fixed distance, D, away

Fig. 3 Shown is an enlargement of the camera geometry in the circled portion of Fig. 2

Fig. 4 The passive OAC algorithm when intercepting a fly ball is shown at a set of time intervals. The image of the ball rises at constant velocity relative to the background scenery

In simulations, the robot's position will converge to the destination of the ball, x_b(t_f), where t_f is the landing time. Simulations were made for ball trajectories either with or without drag, and landing locations were varied to be either in front or in back of the robot's initial position. Figures 5 and 6 show the ball trajectory without and with drag, respectively. In these simulations, we used a controller gain of K_p = 0.25, Δt = 1/15 (frame period in seconds), x_b = 0.5t (m), v_Projection = 2 m/s, and a feed-forward velocity of B = 0. In the simulations without drag, y_b = −0.5t² + 2t (m). With perfect knowledge of v_Projection and B, the robot's velocity is constant; otherwise, the robot must more quickly chase the ball toward the end of the ball trajectory. This control model allows a robot to intercept the target object on the run, similar to documented human behavior. It is different from conventional controllers, where acceleration and velocity typically are designed to decrease to zero as the destination is reached. In experiments, the robot will estimate v_Projection and keep it fixed during the entire task. The effects of the estimation of v_Projection are shown in the next three simulations to demonstrate the robustness of the controller. For these simulations, K_p = 0.25 (controller gain), Δt = 1/15 (frame period in seconds), D = 3 (initial distance between the robot and the ball in meters), y_b = −0.5t² + 2t (m), x_b = 0.5t (m), and B = 0. In the simulations, three different estimates for v_Projection (1, 2, and 3 m/s) are used for the same ball trajectory and starting robot position. In one simulation, if v_Projection is estimated too low, at 1 m/s, then the robot does not catch the ball and the robot moves backward and forward in a poor choice of running velocities.
In the other two cases, when v_Projection = 2 and 3 m/s, the robot catches the ball, but running velocity still varies more than is desirable, in contrast to an optimal constant speed (see Fig. 7).

Fig. 5 (a) Shown are robot movement trajectories for intercepting a ball without drag using the passive OAC algorithm at two initial distances (x_f(0) = 1 and 3 m). Robot and ball movements are only in the x direction (plotted vertically) and shown as a function of time. The ball trajectory is indicated by a solid straight line, and the robot trajectories, starting at 1 and 3 m, are indicated by dotted lines. (b) Shown are the heights of the projection of the ball as a function of time using the passive OAC algorithm for the two trajectories in (a). The desired projection height of the ball is indicated by a solid, straight line. The simulated ball projections on the projection plane for initial robot positions at 1 and 3 m, respectively, are indicated by dotted lines that curve somewhat initially, but remain close to the linear ideal. (c) Shown is the velocity of the robot starting at two initial distances (x_f(0) = 1 and 3 m). The ball is headed to land behind the robot starting at an initial distance of 1 m, which has a negative (backward) velocity that stabilizes after about 1.5 s. The ball is heading in front of the robot starting at an initial distance of 3 m, which has a positive velocity that continues to increase until near interception. A more optimal solution would necessitate earlier acceleration to achieve a more constant velocity.

Active robot controller

In the active OAC algorithm, the tilt of the camera on the robot is constantly adjusted using the formula:

α_des = arctan(Ct)   (7)

The robot estimates C by setting it equal to the initial observed v_Projection/D and tilts the camera based on the arctan function. Here, the desired projection of the ball is always at the center of the projection image.
When the ball is not at the center of the projection image, the robot corrects the error by moving forward or backward. It is assumed that the pinhole is fixed and the image rotates about the pinhole. In this case, a similar formula is derived. The function in the

denominator occurs because the calculation is performed in a rotating image plane that is perpendicular to the gaze direction.

ẋ_f = K_p ( (y_b/(x_f − x_b)) D − v_Projection t ) / ( 1 + (y_b/(x_f − x_b)) (v_Projection t / D) )   (8)

Fig. 6 (a) Shown are the robot movement trajectories for intercepting a ball with drag using the passive OAC algorithm at two initial distances (x_f(0) = 1 and 3 m). The drag is given by 0.05 v². The ball trajectory is indicated by a solid line, and the robot trajectories starting at 1 and 3 m are indicated by dotted lines. (b) Shown are the heights of the projection of the ball with drag as a function of time using the passive OAC algorithm for the two trajectories in Fig. 6(a). The desired projection height of the ball is indicated by a solid, straight line. The simulated ball projections on the projection plane for initial robot positions at 1 and 3 m are indicated by dotted curved paths. (c) Shown is the velocity of the robot starting at two initial distances (x_f(0) = 1 and 3 m). The ball is headed to land behind the robot starting at an initial distance of 1 m, which has a negative (backward) velocity that stabilizes after about 1.5 s. The ball is heading in front of the robot starting at an initial distance of 3 m, which has a positive velocity that continues to increase until interception. The terminal velocity at 3 m exceeds that without drag shown in Fig. 5(c), as the robot moved further forward over a shorter period of time.

Matching the passive algorithm tests, two sets of simulations, both without and with drag, were performed. The active model simulations used values of v_Projection = 2 m/s, K_p = 5, Δt = 1/15 (s), and x_b = 0.5t (m). In the simulations without drag, y_b = −0.5t² + 2t (m). As before, the robot starts from two different initial positions, at x = 1 m and x = 3 m, and the ball travels with a perfect parabolic trajectory (see Fig. 8).
In the simulations with drag, the robot and ball have the same initial conditions, but the modeled trajectory is shortened due to the addition of drag (see Fig. 9). With an active controller, the gain can be increased, which allows the robot to maintain an error that is more than an order of magnitude smaller than for the comparable passive algorithm shown in Fig. 5. With the active controller, the robot achieves very efficient, near constant-velocity behavior and is successful in catching the ball despite starting from different initial positions, and despite the disturbance due to drag. The trajectories are also closer to a constant ideal (i.e. straight lines) than in the comparable simulation with the passive algorithm shown in Fig. 6. The effects of the estimation of v_Projection with the active OAC algorithm are shown in three more simulations without drag present. In these three simulations, K_p = 5, Δt = 1/15 (s), D = 3 (m), y_b = −0.5t² + 2t (m), and x_b = 0.5t (m). The results of the simulations are shown in Fig. 10 for v_Projection = 1, 2, and 3 m/s. These rates are below, at, and above the initial image velocity that would be observed from the perspective of the robot located at the given starting distance. The robot successfully intercepts the ball in all three conditions, recovering from non-optimally varying its position based on high and low values of v_Projection. This success indicates more robustness than the comparable passive algorithm shown in Fig. 7. The results of our simulations support the conclusion that the active OAC algorithm is superior to the passive one. In the active model, the controller gain, K_p, can be increased greatly (in this case by a factor of 20), so that the robot has better response characteristics. This leads to the error between the desired and the actual projection of the ball being markedly smaller in the active model. Thus, the velocity of the robot remains lower in the active model.
As our graphs consistently show, a robot achieves more efficient solutions using the active OAC algorithm, and exhibits more robustness in being able to catch balls within a larger range of values of v_Projection. In contrast, a robot using the passive OAC algorithm tends to vary more in velocity, and fails to catch the ball when v_Projection is extreme (e.g. v_Projection = 1 m/s, a value that was catchable in the active case).

Fig. 7 (a) Shown are three robot movement trajectories for interception using the passive OAC algorithm (v_Projection = 1, 2, and 3 m/s). The trajectory of the ball in the x direction is plotted as a solid straight line, and the robot trajectories for v_Projection are plotted as dotted curved paths. For v_Projection = 1 m/s, the robot initially heads away in the wrong direction and in the end fails to intercept the ball. (b) Shown is the height of the projection of the ball as a function of time using the passive OAC algorithm for the three v_Projection conditions shown in Fig. 7(a). The desired projection heights are plotted in solid lines, while the actual simulated ball projections are plotted as dotted lines. For v_Projection = 1 m/s, the projection height of the ball drops to zero (ground level) at the end; hence, the robot does not catch the ball. (c) Shown is the velocity of the robot starting at an initial distance of 3 m with the different conditions of v_Projection. The ball is heading in front of the robot, and the robot generally has a positive velocity to move forward and quickly intercept it. In the case of v_Projection = 1 m/s, the initial robot velocity is negative, and then it increases greatly near the end of the simulation, trying to recover.

Fig. 8 (a) Shown are the ball and robot trajectories using the active OAC algorithm (x_f(0) = 1 and 3 m). The ball trajectory in the x direction (solid line) and the robot trajectories (dotted lines) are plotted against time. The robot trajectories are closer to a constant ideal (i.e. straight lines) than the comparable simulation with the passive algorithm shown in Fig. 5. (b) Shown are the optical ball projections using the active OAC algorithm. The desired projection of the ball on the projection plane is a straight horizontal line. The dotted curves are the ball projections as viewed by the robot with initial positions at 1 and 3 m. (c) Shown are the robot velocities using the active OAC algorithm. The velocities rapidly asymptote to near ideal constant speeds that remain well under those achieved with the comparable passive algorithm shown in Fig. 5

Experiment and discussion of the results

Experimental setup

In our experiments, we use a Nomad Super Scout robot (Nomadic Technologies Inc.) with an additional pan-tilt mechanism (Directed Perception Inc.). The robot has an on-board computer, with a second computer for image processing that has a FireWire interface (see Fig. 11). The commands to control the drive wheels of the robot, the pan-tilt mechanism, and the frame grabber are written in C under the Linux operating system. A balloon with a light weight is used to create different slow-motion trajectories of a ballistic object. The robot control program calculates the center of the balloon using a standard center-of-mass calculation at a rate of 15 frames/s in the 2D camera image. In the first set of trials, the balloon travels in a near parabolic trajectory headed to land in front of the robot. In a second set of trials, the balloon travels in a near parabolic trajectory headed to land beyond the robot. In both cases, the robot adjusts its velocity to intercept the balloon.

Pretest validating the OAC strategy with a stationary robot

As a pretest, the OAC strategy is validated for three trials of a ball lofted along approximately parabolic trajectories. The robot is placed at the destination position of the ball and remains stationary during the test. The images captured by the robot confirm the prediction of the OAC model. Essentially constant increments are maintained in the vertical position of the center of the ball on the image plane between two consecutive frames (see Fig. 12).

Experimental results for the passive vs. active OAC algorithms

In the first comparison between the passive and active algorithms, the ball heads to land in front of the robot. For the passive algorithm, the camera remains stationary during the experiment, and the desired rise in optical ball velocity, v_Projection, is set to maintain the velocity observed during the first four frames.
v_Projection is measured in pixels/s and could be converted to distance/s by multiplying by the focal length of the camera. It remains fixed during the entire task. In this trial, the robot establishes a desired rise velocity that is somewhat lower than optimal, which leads it to actually back up initially, trying to slow down the observed optical velocity. During the first three-quarters of the trial, the robot successfully keeps the centroid of the ball image moving upward at a near constant rate. During the last quarter, near the end of the trial, the difference between the desired and actual image-plane locations of the centroid increases, and the robot is forced to rapidly adjust, accelerating forward quickly (see Fig. 13). The robot's velocity is negative at the beginning and then positive at the end, indicating a less than optimal choice of v_Projection (2 instead of 2.5) that leads the robot to initially move back and then have difficulty recovering at the end. Because the gain is set low, it is difficult not to have some error when the robot establishes the initial desired rise velocity, and the compounding of this error over time leads the robot to move non-optimally for much of the duration of the trial. For the active algorithm, the camera is adjusted so that it continuously changes in tilt at a constant rate proportional to the initial rate at which the centroid of the ball rises in the image plane during the first four frames (see Eqs. (7) and (8)). The robot moves backward or forward to maintain the actual centroid of the ball at the center of the image plane.

Fig. 9 (a) Shown are ball and robot trajectories using the active OAC algorithm (x_f(0) = 1 and 3 m) with drag present. The ball trajectory in the x direction (solid line) is plotted vertically against time. (b) Shown are ball projections using the active OAC algorithm with drag present. The robot, starting from two different initial positions, is able to maintain the image of the ball very near the center of view and intercept it. The desired image of the ball (solid line) is at the center of the projection plane. (c) Shown is the velocity of the robot starting at the two initial positions. The velocities rapidly asymptote to near ideal constant speeds that remain well under those achieved with the comparable passive algorithm shown in Fig. 6

Fig. 11 Nomad Super Scout robot with a monocular camera mounted on a pan-tilt platform. The secondary laptop is used for image processing

Fig. 12 Shown is the ball centroid versus frame, as measured from a stationary robot located at the destination position for three lofted balls. The centroid of the ball rises at a relatively constant rate on the image plane, consistent with the prediction of the OAC strategy

Fig. 10 (a) Shown are robot trajectories using the active OAC algorithm (v_Projection = 1, 2, and 3 m/s), from a starting distance of three meters. The ball trajectory is indicated by a solid line, and the robot trajectories are indicated by dotted lines for each of the three values of v_Projection. (b) Shown are ball projections using the active OAC algorithm. The desired projection of the ball is a straight line. The actual ball projections are plotted as dotted lines. The robot controlled with v_Projection = 1 m/s is able to intercept the ball using the active algorithm, while failing to intercept the ball using the passive one shown in Fig. 7(a). (c) Shown are the velocities of the robot starting at an initial distance of 3 m. The ball is heading in front of the robot, and the robot has a positive velocity to move forward and quickly intercept it. In the case of v_Projection = 1 m/s, the robot moves backward and then forward to intercept the ball. Even in the cases when v_Projection is artificially high or low, the robot velocity still asymptotes to a near constant speed before interception
As found in the simulations, the difference between the desired centroid and the actual centroid is very small. The robot moves with a continuous, much slower velocity and achieves better accuracy than it did with the passive algorithm (see Fig. 14). By comparison, the actual and the desired centroid of the ball in the passive algorithm trial (Fig. 13) are farther apart, which causes the robot to move with a higher velocity, particularly near the end.

In the second experimental comparison, the ball moves along an approximately parabolic trajectory that is headed to land beyond the robot. With the passive algorithm, the image of the ball initially rises too fast, accelerating upward, and thus the robot rapidly moves backward. Eventually, the robot overreacts and its inertia carries it too far, causing the centroid of the ball in the image plane to decelerate dramatically and actually slow down (the ball in the image starts to curve downward), which in turn causes the robot to decelerate dramatically (see Fig. 15). In the active algorithm, the robot is able to track the ball with much better accuracy, which results in a path to the ball with a much more constant, and hence slower, velocity, consistent with the results found in the earlier simulations (see Fig. 16). The robot moves with a maximum of 0.15 m/s using the active algorithm and 0.51 m/s using the passive one, and the active robot velocity is much closer to optimal constancy, especially near the end.

Fig. 13 (a) Shown are the desired and actual optical paths for the passive OAC algorithm with a ball that is headed to land in front of the robot (Case 1). The desired (dashed) and actual (solid) optical paths of the centroid of the ball are plotted against the image frames. The downward dip at the end indicates that the robot is having difficulty approaching the ball fast enough to keep it from falling in front. (b) For the passive OAC algorithm (Case 1), the actual velocity function (solid) and the desired velocity function (dashed) are shown for a ball headed to land in front of the robot. The initial negative velocity indicates that the robot is actually drifting backwards in the wrong direction. The curve up at the end shows the robot lurching forward, making up for its earlier inaccuracies.

Fig. 14 (a) Shown are the desired and actual optical paths for the active OAC algorithm with a ball that is headed to land in front of the robot (Case 1). The desired centroid of the ball (dashed) for the active algorithm is at the center of the image. After 5 frames, the actual image of the ball drops (solid) and the robot moves forward (see Fig. 14(b)), re-establishing the actual image close to the center. (b) For the active OAC algorithm (Case 1), the desired (dashed) and actual (solid) velocity of the robot increase after 5 frames in response to the drop in the actual image of the ball.
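The passive behavior described in the experiments, backing up at first and then lurching forward when the rise rate was underestimated, can be reproduced in the same kind of toy model. Here the camera is fixed, so the ball's image height is F tan(theta) in pixels (the pixels-to-distance conversion via focal length mentioned earlier); the focal length F, gain K, and launch numbers are assumptions for illustration, not values from the paper.

```python
import math

G = 9.81   # gravity, m/s^2
F = 500.0  # assumed focal length, pixels

def simulate_passive_oac(x_robot=12.0, v0=10.0, launch_deg=45.0,
                         rate_scale=1.0, K=0.03, v_max=6.0, dt=1e-3):
    """One planar trial of a passive-OAC-style controller (illustrative only).

    The camera stays fixed, so the ball's image height is F*tan(theta) pixels.
    The robot servos that height onto a line rising at v_img pixels/s;
    rate_scale < 1 mimics the underestimated rise rate discussed in the
    text ("2 instead of 2.5").  Returns (miss_distance_m, velocity_history).
    """
    a = math.radians(launch_deg)
    v0x, v0y = v0 * math.cos(a), v0 * math.sin(a)

    # Measure the initial image rise rate over the first 4 frames, then
    # deliberately scale it to model an imperfect estimate of v_Projection.
    t4 = 4 * dt
    img4 = F * (v0y * t4 - 0.5 * G * t4 ** 2) / (x_robot - v0x * t4)
    v_img = rate_scale * img4 / t4

    t, vels = 0.0, []
    while True:
        t += dt
        xb = v0x * t
        yb = v0y * t - 0.5 * G * t * t
        if yb <= 0.0:  # ball has landed
            break
        img_actual = F * yb / (x_robot - xb)  # fixed camera at ground level
        img_desired = v_img * t               # image should rise linearly
        v_cmd = max(-v_max, min(v_max, K * (img_actual - img_desired)))
        x_robot += v_cmd * dt
        vels.append(v_cmd)

    x_land = v0x * (2.0 * v0y / G)
    return abs(x_robot - x_land), vels
```

In this sketch a good rate estimate (rate_scale = 1.0) brings the robot close to the landing point, while a 20% underestimate makes it drift backward early and miss by much more, qualitatively matching the Case 1 behavior described above.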
The robot moves forward to catch the ball, and achieves an approach speed that gradually diminishes. The overall pattern of motion is smoother and more optimal than with the passive algorithm shown in Fig. 13.

Conclusion

We investigated and tested the human-based perceptual navigation principle known as Optical Acceleration Cancellation, or OAC. OAC guides a fielder to interception of ballistic objects by keeping the optical image of the approaching projectile constantly rising throughout the entire task. Mobile robotic control algorithms were developed for

the OAC strategy that has been proposed by perceptual psychologists, and robotic simulations and experiments were performed.

In developing the control algorithms, we showed that keeping the derivative of the tangent of the optical angle equal to a constant is functionally equivalent to keeping the ball rising at a constant rate in a camera with a fixed focal length. We also demonstrated from the simulation and the experimental data that both the passive and the active OAC algorithms can succeed in guiding a robotic fielder to intercept ballistic objects. Simulations and actual robotic tests confirmed this with a variety of different paths, headed both in front of and beyond the fielder, both with and without drag due to air resistance.

We found that the active OAC algorithm, in which a camera is continually tilted and the robot adjusts its position to center the ball in the image, performs better than the passive OAC algorithm, in which the ball image constantly rises across a stationary camera image plane. In the active case, the error between the desired image of the ball and the actual image of the ball was consistently smaller than in matched trials using the passive algorithm. Because the error was small, the robot's velocity was much more constant and considerably lower with the active algorithm. Moreover, the simulations indicated that the active OAC algorithm was more robust, working over a wider range of values of v_Projection, the estimate of the initial upward optical velocity of the target object.

The comparison of passive and active control algorithms supports the conclusion that the active method, demonstrated to be used by humans, is also superior for projectile interception by mobile robots. Moreover, our findings confirm that these types of control algorithms, which use viewer-based perceptual principles and visual servoing of image data, provide a simple and robust method for mobile robots to intercept projectiles with varying trajectories. These algorithms are quite different from conventional controllers because camera calibration is not needed, only viewer-based information is used, and the velocity of the robot does not converge to zero when it reaches the destination. Simple geometric principles are easily programmed in the visual plane and can guide autonomous robots to navigate naturally.

Fig. 15 (a) Shown are the desired and actual optical paths for the passive OAC algorithm with a ball that is headed to land beyond the robot (Case 2). The desired (dashed) and actual (solid) images of the center of the ball are plotted against the image frames. For most of the trajectory, the actual image of the ball rises at a higher rate than desired, indicating that the ball continues to head beyond the destination of the robot. Eventually, after 30 frames, the robot has accelerated to the point of overcompensating, and the ball image rapidly descends, causing the robot to change directions and accelerate forward. (b) For the passive OAC algorithm (Case 2), the actual (solid) and desired (dashed) velocity of the robot correspond with the image-plane data shown in Fig. 15(a). The robot moves backward since the ball heads behind the robot's initial position. The robot initially moves backward and continues to accelerate backwards until 30 frames. At that point, the robot has over-committed in the backward direction, so the image of the ball drops in the image plane, and the robot reverses direction and moves forward to catch it.

Fig. 16 (a) Shown are the desired and actual optical paths for the active OAC algorithm with a ball that is headed to land beyond the robot (Case 2). The actual (solid) and desired (dashed) centroid of the ball in the image plane are plotted against the image frames. After about 15 frames, the robot establishes a fairly regular error pattern that causes it to smoothly move backwards. (b) For the active OAC algorithm (Case 2), the desired (dashed) and actual (solid) velocity of the robot gradually increase backwards after about 15 frames (in response to the actual image of the ball being too high). Overall, the robot moves backward with a much slower and smoother velocity compared to the passive algorithm shown in Fig. 15.
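The equivalence stated above, between holding the derivative of the tangent of the optical angle constant and a constantly rising ball image, can be written out directly; the symbols below (f for focal length, theta for the optical elevation angle, y_img for image height, v_p for the rise rate) are our notation, not the paper's.

```latex
% Camera with fixed focal length f: the image height of the ball is
%   y_img = f tan(theta),
% so a constant d(tan theta)/dt is exactly a constant image rise rate:
\[
  \dot{y}_{\mathrm{img}} \;=\; f\,\frac{d}{dt}\tan\theta .
\]
% For a drag-free ball, y_b(t) = v_y t - (g/2) t^2.  Writing d(t) for the
% horizontal separation and imposing the OAC constraint tan(theta) = v_p t:
\[
  \tan\theta \;=\; \frac{y_b(t)}{d(t)} \;=\; v_p\,t
  \quad\Longrightarrow\quad
  d(t) \;=\; \frac{v_y - \tfrac{g}{2}\,t}{v_p},
\]
% which shrinks linearly to zero exactly at the landing time t = 2 v_y / g,
% for any rise rate v_p > 0: the geometric core of the OAC guarantee.
```

Because the separation vanishes at the landing instant for any positive v_p, a fielder (or robot) that succeeds in nulling optical acceleration arrives at the landing point as the ball does, with a constant running velocity in the drag-free case.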
Thomas Sugar works in the areas of mobile robot navigation and wearable robotics assisting the gait of stroke survivors. In mobile robot navigation, he is interested in combining human perceptual principles with mobile robotics. He majored in business and mechanical engineering for his Bachelor's degrees and in mechanical engineering for his Doctoral degree, all from the University of Pennsylvania. In industry, he worked as a project engineer for W. L. Gore and Associates. He has been a faculty member in the Department of Mechanical and Aerospace Engineering and the Department of Engineering at Arizona State University. His research is currently funded by three grants from the National Science Foundation and the National Institutes of Health, and focuses on perception and action, and on wearable robots using tunable springs.

Michael McBeath works in an area combining psychology and engineering. He majored in both fields for his Bachelor's degree from Brown University and again for his Doctoral degree from Stanford University. In parallel with his academic career, he worked as a research scientist at NASA Ames Research Center and at the Interval Corporation, a technology think tank funded by Microsoft co-founder Paul Allen.
He has been a faculty member in the Department of Psychology at Kent State University and at Arizona State University, where he is Program Director for the Cognition and Behavior area and serves on the Executive Committee for the interdisciplinary Arts, Media, and Engineering program. His research is currently funded by three grants from the National Science Foundation, and focuses on perception and action, particularly in sports. He is best known for his research on navigational strategies used by baseball players, animals, and robots.


More information

Integral 3-D Television Using a 2000-Scanning Line Video System

Integral 3-D Television Using a 2000-Scanning Line Video System Integral 3-D Television Using a 2000-Scanning Line Video System We have developed an integral three-dimensional (3-D) television that uses a 2000-scanning line video system. An integral 3-D television

More information

SELF-BALANCING MOBILE ROBOT TILTER

SELF-BALANCING MOBILE ROBOT TILTER Tomislav Tomašić Andrea Demetlika Prof. dr. sc. Mladen Crneković ISSN xxx-xxxx SELF-BALANCING MOBILE ROBOT TILTER Summary UDC 007.52, 62-523.8 In this project a remote controlled self-balancing mobile

More information

Length of a Side (m)

Length of a Side (m) Quadratics Day 1 The graph shows length and area data for rectangles with a fixed perimeter. Area (m ) 450 400 350 300 50 00 150 100 50 5 10 15 0 5 30 35 40 Length of a Side (m) 1. Describe the shape of

More information

CPSC 425: Computer Vision

CPSC 425: Computer Vision 1 / 55 CPSC 425: Computer Vision Instructor: Fred Tung ftung@cs.ubc.ca Department of Computer Science University of British Columbia Lecture Notes 2015/2016 Term 2 2 / 55 Menu January 7, 2016 Topics: Image

More information

Randomized Motion Planning for Groups of Nonholonomic Robots

Randomized Motion Planning for Groups of Nonholonomic Robots Randomized Motion Planning for Groups of Nonholonomic Robots Christopher M Clark chrisc@sun-valleystanfordedu Stephen Rock rock@sun-valleystanfordedu Department of Aeronautics & Astronautics Stanford University

More information

Motion Graphs Teacher s Guide

Motion Graphs Teacher s Guide Motion Graphs Teacher s Guide 1.0 Summary Motion Graphs is the third activity in the Dynamica sequence. This activity should be done after Vector Motion. Motion Graphs has been revised for the 2004-2005

More information

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Ashill Chiranjan and Bernardt Duvenhage Defence, Peace, Safety and Security Council for Scientific

More information

BECAUSE OF their low cost and high reliability, many

BECAUSE OF their low cost and high reliability, many 824 IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 45, NO. 5, OCTOBER 1998 Sensorless Field Orientation Control of Induction Machines Based on a Mutual MRAS Scheme Li Zhen, Member, IEEE, and Longya

More information

Visual Physics Lab Project 1

Visual Physics Lab Project 1 Page 1 Visual Physics Lab Project 1 Objectives: The purpose of this Project is to identify sources of error that arise when using a camera to capture data and classify them as either systematic or random

More information

Physics 131 Lab 1: ONE-DIMENSIONAL MOTION

Physics 131 Lab 1: ONE-DIMENSIONAL MOTION 1 Name Date Partner(s) Physics 131 Lab 1: ONE-DIMENSIONAL MOTION OBJECTIVES To familiarize yourself with motion detector hardware. To explore how simple motions are represented on a displacement-time graph.

More information

Experiment 7. Thin Lenses. Measure the focal length of a converging lens. Investigate the relationship between power and focal length.

Experiment 7. Thin Lenses. Measure the focal length of a converging lens. Investigate the relationship between power and focal length. Experiment 7 Thin Lenses 7.1 Objectives Measure the focal length of a converging lens. Measure the focal length of a diverging lens. Investigate the relationship between power and focal length. 7.2 Introduction

More information

SMALL VOLUNTARY MOVEMENTS OF THE EYE*

SMALL VOLUNTARY MOVEMENTS OF THE EYE* Brit. J. Ophthal. (1953) 37, 746. SMALL VOLUNTARY MOVEMENTS OF THE EYE* BY B. L. GINSBORG Physics Department, University of Reading IT is well known that the transfer of the gaze from one point to another,

More information

IV: Visual Organization and Interpretation

IV: Visual Organization and Interpretation IV: Visual Organization and Interpretation Describe Gestalt psychologists understanding of perceptual organization, and explain how figure-ground and grouping principles contribute to our perceptions Explain

More information

On the intensity maximum of the Oppel-Kundt illusion

On the intensity maximum of the Oppel-Kundt illusion On the intensity maximum of the Oppel-Kundt illusion M a b c d W.A. Kreiner Faculty of Natural Sciences University of Ulm y L(perceived) / L0 1. Illusion triggered by a gradually filled space In the Oppel-Kundt

More information

Nao Devils Dortmund. Team Description for RoboCup Matthias Hofmann, Ingmar Schwarz, and Oliver Urbann

Nao Devils Dortmund. Team Description for RoboCup Matthias Hofmann, Ingmar Schwarz, and Oliver Urbann Nao Devils Dortmund Team Description for RoboCup 2014 Matthias Hofmann, Ingmar Schwarz, and Oliver Urbann Robotics Research Institute Section Information Technology TU Dortmund University 44221 Dortmund,

More information

12A Distance, Time, and Speed

12A Distance, Time, and Speed 12A How do scientists describe motion? The average speed is the ratio of the distance traveled divided by the time taken. This is an idea you already use. For example, if your car is moving at a speed

More information

Weld gap position detection based on eddy current methods with mismatch compensation

Weld gap position detection based on eddy current methods with mismatch compensation Weld gap position detection based on eddy current methods with mismatch compensation Authors: Edvard Svenman 1,3, Anders Rosell 1,2, Anna Runnemalm 3, Anna-Karin Christiansson 3, Per Henrikson 1 1 GKN

More information

Discrimination of Virtual Haptic Textures Rendered with Different Update Rates

Discrimination of Virtual Haptic Textures Rendered with Different Update Rates Discrimination of Virtual Haptic Textures Rendered with Different Update Rates Seungmoon Choi and Hong Z. Tan Haptic Interface Research Laboratory Purdue University 465 Northwestern Avenue West Lafayette,

More information

Comparison of filtering methods for crane vibration reduction

Comparison of filtering methods for crane vibration reduction Comparison of filtering methods for crane vibration reduction Anderson David Smith This project examines the utility of adding a predictor to a crane system in order to test the response with different

More information

C.2 Equations and Graphs of Conic Sections

C.2 Equations and Graphs of Conic Sections 0 section C C. Equations and Graphs of Conic Sections In this section, we give an overview of the main properties of the curves called conic sections. Geometrically, these curves can be defined as intersections

More information

Fundamentals of Servo Motion Control

Fundamentals of Servo Motion Control Fundamentals of Servo Motion Control The fundamental concepts of servo motion control have not changed significantly in the last 50 years. The basic reasons for using servo systems in contrast to open

More information

An SWR-Feedline-Reactance Primer Part 1. Dipole Samples

An SWR-Feedline-Reactance Primer Part 1. Dipole Samples An SWR-Feedline-Reactance Primer Part 1. Dipole Samples L. B. Cebik, W4RNL Introduction: The Dipole, SWR, and Reactance Let's take a look at a very common antenna: a 67' AWG #12 copper wire dipole for

More information

Experiments on the locus of induced motion

Experiments on the locus of induced motion Perception & Psychophysics 1977, Vol. 21 (2). 157 161 Experiments on the locus of induced motion JOHN N. BASSILI Scarborough College, University of Toronto, West Hill, Ontario MIC la4, Canada and JAMES

More information

Artificial Beacons with RGB-D Environment Mapping for Indoor Mobile Robot Localization

Artificial Beacons with RGB-D Environment Mapping for Indoor Mobile Robot Localization Sensors and Materials, Vol. 28, No. 6 (2016) 695 705 MYU Tokyo 695 S & M 1227 Artificial Beacons with RGB-D Environment Mapping for Indoor Mobile Robot Localization Chun-Chi Lai and Kuo-Lan Su * Department

More information

Takeharu Seno 1,3,4, Akiyoshi Kitaoka 2, Stephen Palmisano 5 1

Takeharu Seno 1,3,4, Akiyoshi Kitaoka 2, Stephen Palmisano 5 1 Perception, 13, volume 42, pages 11 1 doi:1.168/p711 SHORT AND SWEET Vection induced by illusory motion in a stationary image Takeharu Seno 1,3,4, Akiyoshi Kitaoka 2, Stephen Palmisano 1 Institute for

More information

Perception in Immersive Environments

Perception in Immersive Environments Perception in Immersive Environments Scott Kuhl Department of Computer Science Augsburg College scott@kuhlweb.com Abstract Immersive environment (virtual reality) systems provide a unique way for researchers

More information

Application Note #2442

Application Note #2442 Application Note #2442 Tuning with PL and PID Most closed-loop servo systems are able to achieve satisfactory tuning with the basic Proportional, Integral, and Derivative (PID) tuning parameters. However,

More information

A vibration is one back-and-forth motion.

A vibration is one back-and-forth motion. Basic Skills Students who go to the park without mastering the following skills have difficulty completing the ride worksheets in the next section. To have a successful physics day experience at the amusement

More information

Year 11 Graphing Notes

Year 11 Graphing Notes Year 11 Graphing Notes Terminology It is very important that students understand, and always use, the correct terms. Indeed, not understanding or using the correct terms is one of the main reasons students

More information

Removing Temporal Stationary Blur in Route Panoramas

Removing Temporal Stationary Blur in Route Panoramas Removing Temporal Stationary Blur in Route Panoramas Jiang Yu Zheng and Min Shi Indiana University Purdue University Indianapolis jzheng@cs.iupui.edu Abstract The Route Panorama is a continuous, compact

More information

Theoretical Aircraft Overflight Sound Peak Shape

Theoretical Aircraft Overflight Sound Peak Shape Theoretical Aircraft Overflight Sound Peak Shape Introduction and Overview This report summarizes work to characterize an analytical model of aircraft overflight noise peak shapes which matches well with

More information

Chapters 1 & 2. Definitions and applications Conceptual basis of photogrammetric processing

Chapters 1 & 2. Definitions and applications Conceptual basis of photogrammetric processing Chapters 1 & 2 Chapter 1: Photogrammetry Definitions and applications Conceptual basis of photogrammetric processing Transition from two-dimensional imagery to three-dimensional information Automation

More information

NAVIGATION OF MOBILE ROBOT USING THE PSO PARTICLE SWARM OPTIMIZATION

NAVIGATION OF MOBILE ROBOT USING THE PSO PARTICLE SWARM OPTIMIZATION Journal of Academic and Applied Studies (JAAS) Vol. 2(1) Jan 2012, pp. 32-38 Available online @ www.academians.org ISSN1925-931X NAVIGATION OF MOBILE ROBOT USING THE PSO PARTICLE SWARM OPTIMIZATION Sedigheh

More information

Improvement of Robot Path Planning Using Particle. Swarm Optimization in Dynamic Environments. with Mobile Obstacles and Target

Improvement of Robot Path Planning Using Particle. Swarm Optimization in Dynamic Environments. with Mobile Obstacles and Target Advanced Studies in Biology, Vol. 3, 2011, no. 1, 43-53 Improvement of Robot Path Planning Using Particle Swarm Optimization in Dynamic Environments with Mobile Obstacles and Target Maryam Yarmohamadi

More information

ABC Math Student Copy. N. May ABC Math Student Copy. Physics Week 13(Sem. 2) Name. Light Chapter Summary Cont d 2

ABC Math Student Copy. N. May ABC Math Student Copy. Physics Week 13(Sem. 2) Name. Light Chapter Summary Cont d 2 Page 1 of 12 Physics Week 13(Sem. 2) Name Light Chapter Summary Cont d 2 Lens Abberation Lenses can have two types of abberation, spherical and chromic. Abberation occurs when the rays forming an image

More information

Visual Effects of Light. Prof. Grega Bizjak, PhD Laboratory of Lighting and Photometry Faculty of Electrical Engineering University of Ljubljana

Visual Effects of Light. Prof. Grega Bizjak, PhD Laboratory of Lighting and Photometry Faculty of Electrical Engineering University of Ljubljana Visual Effects of Light Prof. Grega Bizjak, PhD Laboratory of Lighting and Photometry Faculty of Electrical Engineering University of Ljubljana Light is life If sun would turn off the life on earth would

More information

X rays X-ray properties Denser material = more absorption = looks lighter on the x-ray photo X-rays CT Scans circle cross-sectional images Tumours

X rays X-ray properties Denser material = more absorption = looks lighter on the x-ray photo X-rays CT Scans circle cross-sectional images Tumours X rays X-ray properties X-rays are part of the electromagnetic spectrum. X-rays have a wavelength of the same order of magnitude as the diameter of an atom. X-rays are ionising. Different materials absorb

More information

MEM380 Applied Autonomous Robots I Winter Feedback Control USARSim

MEM380 Applied Autonomous Robots I Winter Feedback Control USARSim MEM380 Applied Autonomous Robots I Winter 2011 Feedback Control USARSim Transforming Accelerations into Position Estimates In a perfect world It s not a perfect world. We have noise and bias in our acceleration

More information

EYE MOVEMENT STRATEGIES IN NAVIGATIONAL TASKS Austin Ducworth, Melissa Falzetta, Lindsay Hyma, Katie Kimble & James Michalak Group 1

EYE MOVEMENT STRATEGIES IN NAVIGATIONAL TASKS Austin Ducworth, Melissa Falzetta, Lindsay Hyma, Katie Kimble & James Michalak Group 1 EYE MOVEMENT STRATEGIES IN NAVIGATIONAL TASKS Austin Ducworth, Melissa Falzetta, Lindsay Hyma, Katie Kimble & James Michalak Group 1 Abstract Navigation is an essential part of many military and civilian

More information

Digital inertial algorithm for recording track geometry on commercial shinkansen trains

Digital inertial algorithm for recording track geometry on commercial shinkansen trains Computers in Railways XI 683 Digital inertial algorithm for recording track geometry on commercial shinkansen trains M. Kobayashi, Y. Naganuma, M. Nakagawa & T. Okumura Technology Research and Development

More information

Visual Perception Based Behaviors for a Small Autonomous Mobile Robot

Visual Perception Based Behaviors for a Small Autonomous Mobile Robot Visual Perception Based Behaviors for a Small Autonomous Mobile Robot Scott Jantz and Keith L Doty Machine Intelligence Laboratory Mekatronix, Inc. Department of Electrical and Computer Engineering Gainesville,

More information