Considerations for Use of Aerial Views In Remote Unmanned Ground Vehicle Operations


Roger A. Chadwick, New Mexico State University

Remote unmanned ground vehicle (UGV) operations place the human operator at a perceptual disadvantage. Adding aerial views can benefit the operator's spatial cognition by supplying the missing contextual information about the vehicle's position and its relation to other objects in the surrounding space. To benefit from this additional information, however, the operator must control and integrate multiple viewpoints. In a series of experiments we examined the use of aerial views, including control-mode options and the altitude of the aerial scene imaging. Results indicate that aerial views are beneficial in UGV search tasks and that auto-tracking aerial imaging control modes should be considered.

Using unmanned ground vehicles (UGVs) for scouting in hazardous urban environments has obvious advantages, but the difficulties inherent in remote perception (e.g., Tittle, Roesler, & Woods, 2002), and especially in spatial perception (Chadwick & Pazuchanics, 2007; Chadwick & Gillan, 2006; Darken, Kempster, & Peterson, 2001), may limit their usefulness. The advantage of sending in vehicles without human occupants is that the operators' safety is assured. The disadvantages include the very real possibility of becoming spatially disoriented, lost, and ineffective. The problems caused by poor remote perception can be attacked from two distinct directions. First, perception can be improved, to the extent possible, by expanding the amount of information provided by the vehicle (e.g., Voshell & Woods, 2005). Second, information from the vehicle can be augmented by additional spatial information. This report focuses on augmenting spatial information from additional sources in order to provide the UGV operator with the contextual information required to maintain spatial awareness and improve effectiveness.
There is a class of cases in which a great deal of detailed spatial information about the UGV mission environment can be obtained in advance. With modern satellite imagery, maps of mission areas can be prepared ahead of time, including details about the terrain and static structures. When coupled with information about the UGV's position and orientation, these maps can supply information about the environment and the vehicle's position within it. This technique, while providing a great deal of the total solution, has several drawbacks. Maps cannot contain information about dynamic objects in the environment (recent destruction, as is prevalent in war zones, moving vehicles, traffic, people, etc.), and even the use of Global Positioning System (GPS) technology cannot provide consistent and accurate positioning information under all circumstances. While GPS may be very effective in providing cruise missiles and other aerial vehicles with positioning information that meets their navigational needs, its use in UGVs may be limited by reliability and accuracy constraints (Chaimowicz et al., 2005). Being off by merely a meter or two matters little to a nuclear-tipped cruise missile, but a positioning error on this scale could place a UGV in a ditch. Obviously UGVs cannot navigate in an open-loop mode based solely on a map and their GPS position. UGVs must sense their local environment and augment the coarse spatial location obtained from GPS receivers in order to avoid obstacles that are on a very small scale compared to that of GPS accuracy. But local sensing, due to the inherently low vantage point of a ground vehicle, is not easily integrated with map or global views of the space surrounding the UGV.
Providing live aerial views, by using multi-robot teams consisting of both ground and air vehicles, to augment spatial cognition and provide global spatial information (see Pazuchanics, Chadwick, Sapp, & Gillan, 2008) might supply some of the missing pieces of the spatial puzzle faced by the UGV operator. Live views are beneficial because they reveal both static and dynamic objects in the environment, and can include a view from above of the ground vehicle itself amidst its surrounding context, giving the operator an anchor point for integrating the two viewpoints. An aerial view does not negate the need for accurate maps, but augments the static mapped information with a live update that includes dynamic features. It would be a mistake, however, to assume that additional information cannot sometimes hinder the operator's cognition, since it imposes the burden of another information source to integrate, and a host of variables must be considered in deciding the best way to provide such views. To provide an empirical basis for these decisions, we conducted a series of experiments. In the first experiment we examined the costs and benefits of providing an aerial view in a UGV search mission using miniature model environments (see Evans et al., 2005). In the second experiment we examined an important variable associated with the use of aerial views: the optimum viewpoint (altitude), or amount of context, needed to facilitate localization judgments when target objects viewed via UGV imagery must be located on a global map. In this second study we used computer-generated three-dimensional model environments, vehicles, and viewpoints.

EXPERIMENT 1

For a UGV operator to make effective use of an aerial view, the benefits of having the additional dynamic spatial information available must outweigh the cognitive costs involved in controlling and integrating the two very different viewpoints. These costs include the added demands of controlling the aerial asset, the limits of attention (which require diverting attention from the primary UGV imagery display in order to process the contextual air-view information), and the non-trivial cognitive task of integrating disparate viewpoints. Consider a very simplified cognitive task analysis of a remotely operated vehicle search task. The operator must navigate through the search area, keeping track of area covered and area yet to be searched while looking for possible targets. Once targets are found, they might be further identified and usually localized by recording target position on a map view. In a typical UGV search mission, the addition of the aerial view, while improving spatial comprehension (and therefore target localization), might actually slow down the search and target-perception portions of the overall task. We hypothesized that the use of an aerial view would improve spatial cognition in terms of the ability to localize targets on a map, but would hinder the actual search process in terms of finding target objects amidst the debris of a cluttered search area: the aerial view would improve target localization at the expense of target identification. We further hypothesized that the difficulty of controlling the aerial view would affect its usefulness. To test these hypotheses, we set up an experiment in which participants were tasked with searching for specific kinds of targets using a teleoperated UGV, with and without an aerial view to assist them. The difficulty of controlling the aerial view was also manipulated.
Method

Participants. Although 54 undergraduate students at New Mexico State University participated in this experiment, the final sample included 48 participants, 25 men and 23 women, ranging in age from 18 to 39 years (M = 20.8, SD = 4.47). Six participants were dropped due to technical difficulties or the inability to pass pre-test criteria in remote vehicle operation.

Apparatus. Two scale (1:17) miniature environments were constructed for exploration using a radio-frequency teleoperated scale ground vehicle (Plantronics Mini-Rover). The vehicle's camera provided a 53-degree horizontal viewing angle with a standard 4:3 aspect ratio and was fixed with respect to the vehicle itself. Vehicle camera imagery was transmitted to the operator's console video monitor using analog UHF television technology. The scale environments were constructed to resemble a war-torn disaster area and were laced with target objects consisting of soldier action figures (see Figure 1). The goal of each search was simply to find and locate as many targets (people of any type) as possible amidst the debris in the time allotted.

Figure 1. Participants control a teleoperated miniature UGV exploring a model environment, searching for toy soldiers.

Participants were tasked with searching through each of the areas (designated A and B) in two consecutive trials, counterbalanced by area. Each area contained eight target objects (i.e., soldier action figures) and a number of distracting objects (e.g., cars, motorcycles, computers, weapons). Participants were given ten minutes to search each area using the remotely operated vehicle to find the targets and then locate them on a computer-displayed map. The participant worked from a location separated from the scale environments by a partition, which provided visual but not auditory isolation. Three thirteen-inch diagonal video monitors at the participant workstation provided a ground view from the vehicle, a map display, and an aerial view. The aerial view was provided by a pan-tilt camera mounted in the ceiling above the search areas, with the field of view fixed at approximately 1/12 of the search area. To view any particular portion of the search area, the aerial-view camera had to be re-positioned via a remote control unit, which was actually operated by the experimenter but directed by the participant; the exact method of direction depended on the air-view condition. This type of aerial view simulates a hovering air vehicle with a pointing camera. Vibration and motion of the hypothetical aerial vehicle were not simulated. The UGV operator had no responsibility for the aerial view other than directing where the camera was pointed, a reasonable situation in which the UGV operator is not tasked with flying an aerial vehicle, but simply with coordinating the viewpoint of the aerial imagery to assist with the UGV mission objectives.

The experiment was a 4 (aerial view mode) x 2 (area explored) mixed factorial design. Aerial view mode was a between-subjects variable, with area explored (environment A or B) as a within-subjects variable. Participants were randomly assigned to one of the four aerial view conditions. In the auto-tracking condition, the aerial view camera was automatically positioned (by the experimenter) such that the ground vehicle was always in view. As the ground vehicle approached the edge of the display, the experimenter moved the air-view camera so that the vehicle was re-centered in the display. While this was a manually operated simulation of an auto-tracking function, subject to some inadvertent variability in performance, consistency was achieved by the experimenter waiting until

the vehicle was within one vehicle-length of the edge of the display and then re-centering the aerial view on the ground vehicle. In the simple-pointing condition, the participant directed the pointing of the air-view by simply clicking on the map display at the desired view center. In the complex-pointing condition, the participant was required to correctly solve a simple two-digit math problem in order to complete the camera re-positioning, which was designated, as in the simple condition, by a click on the map display. In either of these participant-directed pointing modes, the experimenter slewed the air-view camera to the position corresponding to the map position designated by the mouse click. The fourth condition was a no-air-view control condition in which no aerial view was provided.

Procedure. Prior to the experimental trials, each participant ran through a brief series of practice and qualification pre-test exercises. The pre-test exercises gave participants some familiarity with vehicle operation and allowed a quantification of skill. After a five-minute practice session, participants ran their UGV through a simple maze and had to cross a rather narrow and difficult bridge obstacle. The time (in seconds) to complete the maze and successfully cross the bridge obstacle served as pre-test measurements of skill. Participants who failed to successfully negotiate the bridge obstacle after 15 tries were disqualified from the experiment. In addition to these two objective response-time pre-test measures, the experimenter assigned each participant a subjective skill rating at the conclusion of the pre-test exercises on a simple 1 (very low skill) to 7 (very high skill) scale. Participants were instructed to search the entire area and find as many targets (hidden toy soldiers) as possible within a ten-minute trial period.
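The auto-tracking condition's re-centering rule (wait until the UGV comes within one vehicle-length of the aerial display's edge, then re-center the view on the vehicle) amounts to a simple policy. The sketch below is illustrative only: the coordinate frame, units, and function names are assumptions, not part of the apparatus, which was operated manually by the experimenter.

```python
# Sketch of the manually simulated auto-tracking rule: the aerial view is
# re-centered on the UGV only when the UGV comes within one vehicle-length
# of the edge of the aerial display. All units are hypothetical.

VEHICLE_LENGTH = 1.0  # margin that triggers re-centering

def update_view(view_center, view_half_width, view_half_height, ugv_pos):
    """Return the (possibly re-centered) aerial view center."""
    cx, cy = view_center
    x, y = ugv_pos
    near_edge = (
        abs(x - cx) > view_half_width - VEHICLE_LENGTH
        or abs(y - cy) > view_half_height - VEHICLE_LENGTH
    )
    # Re-center on the vehicle only when it nears the edge; otherwise the
    # view stays put, minimizing unnecessary camera slewing.
    return (x, y) if near_edge else view_center
```

A real auto-tracking implementation would apply this check on every video frame using a tracked vehicle position rather than an experimenter's judgment.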
They were encouraged to use the aerial view (when present) and tasked with marking the exact location of each found target on a computer-displayed map. Each participant performed both search trials (in separate areas A and B) in the same randomly assigned aerial view condition. At the conclusion of the experimental session, participants were fully debriefed regarding the intent of the study.

Results and Discussion

Pre-test measures allowed an assessment of the equality in skill level of the four randomly assigned aerial view groups. There were no statistically reliable differences in pre-test measures between the four groups, multivariate F(9, 114) = .59, p > .80. The two primary measures in this experiment were the number of targets found (out of eight possible per area) and the target localization error. Localization error was measured as the Euclidean distance (in map display pixels) between each target's actual location and its participant-designated location, and is reported as a percentage of the maximum possible error, which is 1024 pixels (the diagonal) for the 820 x 614 pixel map display used. The maximum possible error corresponds to designating the target in the opposite corner of the rectangular map display. There were no statistically significant (alpha = .05 for all analyses) differences in the number of targets found across air-view conditions. There was a trend for participants to find more targets in the auto-tracking condition (M = 5.56, SD = 1.51) than in the no-air-view (M = 4.50, SD = 1.81), simple-pointing (M = 4.46, SD = 1.54), or complex-pointing (M = 4.35, SD = 1.83) conditions. The amount of each area actively explored by the UGV was also measured using grid counts. There were no statistically reliable differences in search area covered across conditions, F(3, 39) = 1.16, p = .34, or between exploration areas A and B, F(1, 39) = 1.11, p = .30.
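As a concrete illustration, the localization-error metric described above (Euclidean pixel distance expressed as a percentage of the map display's diagonal) can be computed as follows. The display dimensions come from the text; the example coordinates in the usage are made up.

```python
import math

# Localization error as a percentage of the maximum possible error,
# i.e., the diagonal of the 820 x 614 pixel map display (~1024 px).
MAP_W, MAP_H = 820, 614
MAX_ERROR = math.hypot(MAP_W, MAP_H)  # display diagonal in pixels

def localization_error_pct(actual, marked):
    """Euclidean distance between the actual and participant-marked
    target positions, as a percentage of the display diagonal."""
    dist = math.hypot(actual[0] - marked[0], actual[1] - marked[1])
    return 100.0 * dist / MAX_ERROR
```

Marking a target in the exact opposite corner of the display yields 100%, matching the paper's definition of the maximum possible error.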
Mean search coverage for the no-air-view, auto-track, simple-pointing, and complex-pointing conditions was 66% (SD = 14%), 68% (SD = 10%), 61% (SD = 13.5%), and 62% (SD = 15%), respectively.

Figure 2. Participants showed a significant improvement in target localization when an aerial view was used, with the best performance achieved in the auto-track condition.

The use of the aerial view improved target localization error, F(3, 40) = 3.55, p < .05 (see Figure 2). Participants had the lowest error in the auto-track mode (M = 4.24% of the diagonal, SE = 2.2%), followed by complex-pointing (M = 5.57%, SE = 2.3%), simple-pointing (M = 6.03%, SE = 2.2%), and finally the no-air-view condition with the worst error (M = 13.6%, SE = 2.1%). These results indicate that using aerial views in a teleoperated UGV search scenario did improve spatial comprehension in terms of target localization performance, as predicted, and did not negatively affect other measured aspects of the task. We found no statistically reliable evidence that the additional cognitive workload associated with monitoring an aerial view degraded performance in the search operation, and our hypothesis that participants using an aerial view would find fewer targets was not supported. Participants performed best when the aerial view was auto-tracking their vehicle.

EXPERIMENT 2

Experiment 1 showed that aerial views can improve performance in some spatial reasoning components of a

remote vehicle search task. There are many conceivable ways to obtain aerial views, and questions naturally arise as to what the best view might be. We hypothesize that aerial views facilitate spatial comprehension because they provide additional contextual information by showing the position of the vehicle itself in the environment being explored, from a viewpoint that closely matches that of a two-dimensional map. Without an aerial view or an active map icon showing vehicle position, the information received from the ground view is often inadequate to disambiguate the features in the environment and match them with the features shown on a (fully top-down) map view. Aerial views not only provide additional context, but can also reveal that context from an intermediate angle: because aerial views need not be taken from 90 degrees with respect to the horizontal ground plane, they can reveal not only the top-down aspects of objects as depicted on maps, but also, to some extent, the sides of objects as viewable from the ground. To test the conjecture that higher-altitude aerial views, revealing additional context at an angle closer to that of a top-down map view, would be beneficial, we conducted a second experiment in which we manipulated the altitude of the aerial view camera. We hypothesized that intermediate-altitude aerial views, revealing both ground-view and map-view contextual features, would be most beneficial.

Method

Participants. A sample of 65 undergraduates, 40 women and 25 men, ranging in age from 18 to 38 years (M = 20.7, SD = 3.78), voluntarily participated in this study.

Materials. Three-dimensional modeling software (Google SketchUp Pro) was used to simulate ground and aerial views of a tank-like vehicle exploring urban terrain (see Figure 3).
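In the rendered views used in this experiment, the virtual camera kept a fixed horizontal standoff behind the vehicle while its altitude was raised, so altitude and look-down angle necessarily co-vary (the confound noted in the discussion). A minimal sketch of that geometry follows; the 400 ft standoff is an assumed stand-in for the "several hundred feet" stated in the text.

```python
import math

# Look-down (elevation) angle toward the UGV for each rendering altitude,
# given a fixed horizontal standoff behind the vehicle. STANDOFF_FT is a
# hypothetical value; the paper specifies only "several hundred feet".
STANDOFF_FT = 400.0

def look_down_angle_deg(altitude_ft, standoff_ft=STANDOFF_FT):
    """Angle below horizontal from the camera to the UGV, in degrees."""
    return math.degrees(math.atan2(altitude_ft, standoff_ft))

# The four rendering altitudes used in the experiment:
angles = {alt: look_down_angle_deg(alt) for alt in (0, 300, 600, 900)}
```

Because the angle grows monotonically with altitude under a fixed standoff, any benefit of the higher-altitude views could reflect extra context, a steeper viewing angle, or both, which is why the paper calls for manipulating the two separately.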
A model of a UGV and a group of target objects (soldiers) were placed into a three-dimensional model of an actual building location (obtained online). A UGV ground-view image was rendered at a position corresponding to the ground vehicle model; this UGV camera view included a portion of the front of the vehicle itself. Additional views were taken of the ground vehicle (roughly in the center of the image) and surrounding context from altitudes of 0, 300, 600, and 900 scale feet. Note that all images had the UGV at the focal point (center), so the 0-altitude image was effectively another ground view, taken at a position behind the vehicle and revealing additional context. These aerial views were all taken from the same ground point several hundred feet behind the vehicle, at an angle in line with the vehicle's axis and simulated vehicle camera viewpoint. After the view was aligned with the vehicle, the altitude was adjusted (0, 300, 600, or 900 ft) numerically and the view centered on the UGV. Sixteen distinct building models were used to create sets of images, resulting in 16 UGV views and 64 associated aerial views. To assess the impact of maps, which are often arbitrarily rotated with respect to the ground vehicle, map views were taken at rotations of 0, 60, 120, and 180 degrees relative to the vehicle's camera axis. Each participant was presented with each of the 16 scenes at one combination of altitude and map angle. To mitigate demand characteristics, these experimental trials were randomly mixed with a set of sixteen filler trials using image sets taken from somewhat arbitrary angles.

Figure 3. Three-dimensional modeled viewpoints. The lower panel is the camera view from the UGV itself, which is visible in the upper aerial view. We propose that distinct ground object features are used in a cognitive matching process.

Procedure.
Participants were presented with the sixteen experimental image sets plus sixteen filler sets in a fully randomized order, with altitude and map angle fully counterbalanced across subjects for each scene, and with each participant receiving one experimental trial at each altitude and map angle combination. Thus, for a given scene, altitude and angle varied between subjects, but across scenes they varied within subjects. On each trial, participants were shown the UGV view image and the aerial image simultaneously, with the aerial image above the ground-view image. At their discretion, participants clicked the "go to map" icon on the screen, and the map image was then displayed by itself. Participants had two options on the map display: they could designate the target's location on the map, or return to the ground and aerial view display for another look. The computer displaying the images recorded the total response time to complete each trial, the number of times the participant toggled between the map and ground/aerial view displays, and the localization error of the participant-designated target point (relative to the actual target location). Participants were debriefed upon completion of all 32 experimental-plus-filler trials.

Results and Discussion

The results (see Figure 4) indicate that target localization error was lower when aerial images were taken from the higher altitudes, F(3, 1024) = 42.2, p < .05 (GLM univariate analysis of individual observations with altitude and map angle as fixed factors). Map angle effects were not statistically reliable (p = .07).

Figure 4. Map angle effects are rather subtle, but the best performance was achieved using the higher-altitude aerial images.

The results of Experiment 2 are somewhat limited in their interpretation by the confounding of altitude with view angle relative to the horizontal plane, but it does appear that the additional context provided by higher-altitude views is helpful and does not depend greatly on the rotation of such views relative to a map comparison view. This limited dependence on map angle is probably due to the mental matching of distinct features, which does not greatly depend upon mental rotation.

GENERAL DISCUSSION

Using aerial views in conjunction with remote ground vehicle operations can benefit the human operator's spatial judgments by supplying the necessary information and by bridging between contextually poor ground views and the two-dimensional top-down map views often incorporated into UGV operations. Because this information takes the place of alternative (and often inferior) cognition regarding spatial locations, there is very little performance degradation with regard to other task components. The best performance was achieved under conditions of auto-tracking aerial views, which minimized the operator's cognitive workload associated with controlling the aerial viewpoint. The differences, however, between auto-tracking and other more demanding modes of aerial view control were not statistically reliable, and further investigation into the benefits of auto-tracking options should be considered.

The second experiment discussed in this report addressed key variables associated with the aerial views used in conjunction with UGV operations. This study was very preliminary, and while caution is used in any interpretation of the results, it appears that higher-altitude, more downward-looking views are best, although separate manipulation of altitude (amount of context) and ground-plane angle should be examined. There must be a maximum altitude beyond which the views become less useful, the point at which all of the relevant features of the ground are in view and irrelevant features begin to use up valuable display space, but this limit has not yet been determined empirically. It also seems logical that the aerial view is most useful when recognition of the ground vehicle in the aerial image is not difficult. These conjectures regarding the variables involved in obtaining the most useful aerial view require empirical validation.

ACKNOWLEDGMENT

Prepared through participation in the Advanced Decision Architectures Collaborative Technology Alliance sponsored by the U.S. Army Research Laboratory under Cooperative Agreement DAAD. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The author thanks Dr. Douglas Gillan, North Carolina State University, and Thomas Donahue, NMSU, for their valuable assistance on this project.

REFERENCES

Chadwick, R. A., & Gillan, D. J. (2006, November). Strategies for the interpretative integration of ground and air views in UGV operations. Poster session presented at the 25th Army Science Conference, Orlando, FL.

Chadwick, R. A., & Pazuchanics, S. (2007). Spatial disorientation in unmanned ground vehicle operations: Target localization errors. In Proceedings of the Human Factors and Ergonomics Society 51st Annual Meeting (pp. ). Santa Monica, CA: Human Factors and Ergonomics Society.

Chaimowicz, L., Cowley, A., Gomez-Ibanez, D., Grocholsky, B., Hsieh, M. A., Hsu, H., Keller, J. F., Kumar, V., Swaminathan, R., & Taylor, C. J. (2005). Deploying air-ground multi-robot teams in urban environments. In L. E. Parker, F. E. Schneider, & A. C. Schultz (Eds.), Multi-Robot Systems: From Swarms to Intelligent Automata (Vol. III, pp. ). Netherlands: Springer.

Darken, R., Kempster, K., & Peterson, B. (2001). Effects of streaming video quality of service on spatial comprehension in a reconnaissance task. In Proceedings of I/ITSEC, Orlando, FL.

Evans, A. W., Hoeft, R. M., Rehfeld, S. A., Feldman, M., Curtis, M., Fincannon, T., Ottinger, J., & Jentsch, F. (2005). Demonstration: Advancing robotics research through the use of a scale MOUT facility. In Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting (pp. ). Santa Monica, CA: Human Factors and Ergonomics Society.

Pazuchanics, S. L., Chadwick, R. A., Sapp, M. V., & Gillan, D. J. (2008). Robots in space and time: The role of object, motion, and spatial perception in the control and monitoring of UGVs. In M. Barnes & F. Jentsch (Eds.), Human-Robot Interactions in Future Military Actions (accepted for publication).

Tittle, J. S., Roesler, A., & Woods, D. D. (2002). The remote perception problem. In Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting (pp. ). Santa Monica, CA: Human Factors and Ergonomics Society.

Voshell, M., & Woods, D. (2005). Overcoming the keyhole in human-robot coordination: Simulation and evaluation. In Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting (pp. ). Santa Monica, CA: Human Factors and Ergonomics Society.


More information

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS BY SERAFIN BENTO MASTER OF SCIENCE in INFORMATION SYSTEMS Edmonton, Alberta September, 2015 ABSTRACT The popularity of software agents demands for more comprehensive HAI design processes. The outcome of

More information

A Human Factors Guide to Visual Display Design and Instructional System Design

A Human Factors Guide to Visual Display Design and Instructional System Design I -W J TB-iBBT»."V^...-*.-^ -fc-. ^..-\."» LI»." _"W V"*. ">,..v1 -V Ei ftq Video Games: CO CO A Human Factors Guide to Visual Display Design and Instructional System Design '.- U < äs GL Douglas J. Bobko

More information

Background. Computer Vision & Digital Image Processing. Improved Bartlane transmitted image. Example Bartlane transmitted image

Background. Computer Vision & Digital Image Processing. Improved Bartlane transmitted image. Example Bartlane transmitted image Background Computer Vision & Digital Image Processing Introduction to Digital Image Processing Interest comes from two primary backgrounds Improvement of pictorial information for human perception How

More information

Instruction Manual for HyperScan Spectrometer

Instruction Manual for HyperScan Spectrometer August 2006 Version 1.1 Table of Contents Section Page 1 Hardware... 1 2 Mounting Procedure... 2 3 CCD Alignment... 6 4 Software... 7 5 Wiring Diagram... 19 1 HARDWARE While it is not necessary to have

More information

Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2

Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2 Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2 by A William Evans III, Susan G Hill, Brian Wood, and Regina Pomranky ARL-TR-7240 March 2015 Approved for public

More information

Countering Weapons of Mass Destruction (CWMD) Capability Assessment Event (CAE)

Countering Weapons of Mass Destruction (CWMD) Capability Assessment Event (CAE) Countering Weapons of Mass Destruction (CWMD) Capability Assessment Event (CAE) Overview 08-09 May 2019 Submit NLT 22 March On 08-09 May, SOFWERX, in collaboration with United States Special Operations

More information

Mixed-Initiative Interactions for Mobile Robot Search

Mixed-Initiative Interactions for Mobile Robot Search Mixed-Initiative Interactions for Mobile Robot Search Curtis W. Nielsen and David J. Bruemmer and Douglas A. Few and Miles C. Walton Robotic and Human Systems Group Idaho National Laboratory {curtis.nielsen,

More information

Spatial Judgments from Different Vantage Points: A Different Perspective

Spatial Judgments from Different Vantage Points: A Different Perspective Spatial Judgments from Different Vantage Points: A Different Perspective Erik Prytz, Mark Scerbo and Kennedy Rebecca The self-archived postprint version of this journal article is available at Linköping

More information

Situational Awareness A Missing DP Sensor output

Situational Awareness A Missing DP Sensor output Situational Awareness A Missing DP Sensor output Improving Situational Awareness in Dynamically Positioned Operations Dave Sanderson, Engineering Group Manager. Abstract Guidance Marine is at the forefront

More information

Facilitating Human System Integration Methods within the Acquisition Process

Facilitating Human System Integration Methods within the Acquisition Process Facilitating Human System Integration Methods within the Acquisition Process Emily M. Stelzer 1, Emily E. Wiese 1, Heather A. Stoner 2, Michael Paley 1, Rebecca Grier 1, Edward A. Martin 3 1 Aptima, Inc.,

More information

Gilbert Peterson and Diane J. Cook University of Texas at Arlington Box 19015, Arlington, TX

Gilbert Peterson and Diane J. Cook University of Texas at Arlington Box 19015, Arlington, TX DFA Learning of Opponent Strategies Gilbert Peterson and Diane J. Cook University of Texas at Arlington Box 19015, Arlington, TX 76019-0015 Email: {gpeterso,cook}@cse.uta.edu Abstract This work studies

More information

A Distributed Virtual Reality Prototype for Real Time GPS Data

A Distributed Virtual Reality Prototype for Real Time GPS Data A Distributed Virtual Reality Prototype for Real Time GPS Data Roy Ladner 1, Larry Klos 2, Mahdi Abdelguerfi 2, Golden G. Richard, III 2, Beige Liu 2, Kevin Shaw 1 1 Naval Research Laboratory, Stennis

More information

Vishnu Nath. Usage of computer vision and humanoid robotics to create autonomous robots. (Ximea Currera RL04C Camera Kit)

Vishnu Nath. Usage of computer vision and humanoid robotics to create autonomous robots. (Ximea Currera RL04C Camera Kit) Vishnu Nath Usage of computer vision and humanoid robotics to create autonomous robots (Ximea Currera RL04C Camera Kit) Acknowledgements Firstly, I would like to thank Ivan Klimkovic of Ximea Corporation,

More information

The MARS Helicopter and Lessons for SATCOM Testing

The MARS Helicopter and Lessons for SATCOM Testing The MARS Helicopter and Lessons for SATCOM Testing Innovation: Kratos Defense Byline NASA engineers dreamed up an ingenious solution to this problem: pair the rover with a flying scout that can peer over

More information

Collective Robotics. Marcin Pilat

Collective Robotics. Marcin Pilat Collective Robotics Marcin Pilat Introduction Painting a room Complex behaviors: Perceptions, deductions, motivations, choices Robotics: Past: single robot Future: multiple, simple robots working in teams

More information

Adaptable User Interface Based on the Ecological Interface Design Concept for Multiple Robots Operating Works with Uncertainty

Adaptable User Interface Based on the Ecological Interface Design Concept for Multiple Robots Operating Works with Uncertainty Journal of Computer Science 6 (8): 904-911, 2010 ISSN 1549-3636 2010 Science Publications Adaptable User Interface Based on the Ecological Interface Design Concept for Multiple Robots Operating Works with

More information

COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE

COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE COGNITIVE MODEL OF MOBILE ROBOT WORKSPACE Prof.dr.sc. Mladen Crneković, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb Prof.dr.sc. Davor Zorc, University of Zagreb, FSB, I. Lučića 5, 10000 Zagreb

More information

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT F. TIECHE, C. FACCHINETTI and H. HUGLI Institute of Microtechnology, University of Neuchâtel, Rue de Tivoli 28, CH-2003

More information

Using Dynamic Views. Module Overview. Module Prerequisites. Module Objectives

Using Dynamic Views. Module Overview. Module Prerequisites. Module Objectives Using Dynamic Views Module Overview The term dynamic views refers to a method of composing drawings that is a new approach to managing projects. Dynamic views can help you to: automate sheet creation;

More information

3D Modelling Is Not For WIMPs Part II: Stylus/Mouse Clicks

3D Modelling Is Not For WIMPs Part II: Stylus/Mouse Clicks 3D Modelling Is Not For WIMPs Part II: Stylus/Mouse Clicks David Gauldie 1, Mark Wright 2, Ann Marie Shillito 3 1,3 Edinburgh College of Art 79 Grassmarket, Edinburgh EH1 2HJ d.gauldie@eca.ac.uk, a.m.shillito@eca.ac.uk

More information

EE631 Cooperating Autonomous Mobile Robots. Lecture 1: Introduction. Prof. Yi Guo ECE Department

EE631 Cooperating Autonomous Mobile Robots. Lecture 1: Introduction. Prof. Yi Guo ECE Department EE631 Cooperating Autonomous Mobile Robots Lecture 1: Introduction Prof. Yi Guo ECE Department Plan Overview of Syllabus Introduction to Robotics Applications of Mobile Robots Ways of Operation Single

More information

Evaluation of an Enhanced Human-Robot Interface

Evaluation of an Enhanced Human-Robot Interface Evaluation of an Enhanced Human-Robot Carlotta A. Johnson Julie A. Adams Kazuhiko Kawamura Center for Intelligent Systems Center for Intelligent Systems Center for Intelligent Systems Vanderbilt University

More information

Objective Data Analysis for a PDA-Based Human-Robotic Interface*

Objective Data Analysis for a PDA-Based Human-Robotic Interface* Objective Data Analysis for a PDA-Based Human-Robotic Interface* Hande Kaymaz Keskinpala EECS Department Vanderbilt University Nashville, TN USA hande.kaymaz@vanderbilt.edu Abstract - This paper describes

More information

Improving the Detection of Near Earth Objects for Ground Based Telescopes

Improving the Detection of Near Earth Objects for Ground Based Telescopes Improving the Detection of Near Earth Objects for Ground Based Telescopes Anthony O'Dell Captain, United States Air Force Air Force Research Laboratories ABSTRACT Congress has mandated the detection of

More information

Lab 7: Introduction to Webots and Sensor Modeling

Lab 7: Introduction to Webots and Sensor Modeling Lab 7: Introduction to Webots and Sensor Modeling This laboratory requires the following software: Webots simulator C development tools (gcc, make, etc.) The laboratory duration is approximately two hours.

More information

NAVIGATION is an essential element of many remote

NAVIGATION is an essential element of many remote IEEE TRANSACTIONS ON ROBOTICS, VOL.??, NO.?? 1 Ecological Interfaces for Improving Mobile Robot Teleoperation Curtis Nielsen, Michael Goodrich, and Bob Ricks Abstract Navigation is an essential element

More information

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception

More information

VISUALIZING CONTINUITY BETWEEN 2D AND 3D GRAPHIC REPRESENTATIONS

VISUALIZING CONTINUITY BETWEEN 2D AND 3D GRAPHIC REPRESENTATIONS INTERNATIONAL ENGINEERING AND PRODUCT DESIGN EDUCATION CONFERENCE 2 3 SEPTEMBER 2004 DELFT THE NETHERLANDS VISUALIZING CONTINUITY BETWEEN 2D AND 3D GRAPHIC REPRESENTATIONS Carolina Gill ABSTRACT Understanding

More information

Perceived Image Quality and Acceptability of Photographic Prints Originating from Different Resolution Digital Capture Devices

Perceived Image Quality and Acceptability of Photographic Prints Originating from Different Resolution Digital Capture Devices Perceived Image Quality and Acceptability of Photographic Prints Originating from Different Resolution Digital Capture Devices Michael E. Miller and Rise Segur Eastman Kodak Company Rochester, New York

More information

Photo Scale The photo scale and representative fraction may be calculated as follows: PS = f / H Variables: PS - Photo Scale, f - camera focal

Photo Scale The photo scale and representative fraction may be calculated as follows: PS = f / H Variables: PS - Photo Scale, f - camera focal Scale Scale is the ratio of a distance on an aerial photograph to that same distance on the ground in the real world. It can be expressed in unit equivalents like 1 inch = 1,000 feet (or 12,000 inches)

More information

Haptic presentation of 3D objects in virtual reality for the visually disabled

Haptic presentation of 3D objects in virtual reality for the visually disabled Haptic presentation of 3D objects in virtual reality for the visually disabled M Moranski, A Materka Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND marcin.moranski@p.lodz.pl,

More information

Immersive Simulation in Instructional Design Studios

Immersive Simulation in Instructional Design Studios Blucher Design Proceedings Dezembro de 2014, Volume 1, Número 8 www.proceedings.blucher.com.br/evento/sigradi2014 Immersive Simulation in Instructional Design Studios Antonieta Angulo Ball State University,

More information

TRAFFIC SIGN DETECTION AND IDENTIFICATION.

TRAFFIC SIGN DETECTION AND IDENTIFICATION. TRAFFIC SIGN DETECTION AND IDENTIFICATION Vaughan W. Inman 1 & Brian H. Philips 2 1 SAIC, McLean, Virginia, USA 2 Federal Highway Administration, McLean, Virginia, USA Email: vaughan.inman.ctr@dot.gov

More information

A Comparison Between Camera Calibration Software Toolboxes

A Comparison Between Camera Calibration Software Toolboxes 2016 International Conference on Computational Science and Computational Intelligence A Comparison Between Camera Calibration Software Toolboxes James Rothenflue, Nancy Gordillo-Herrejon, Ramazan S. Aygün

More information

ROBOTICS ENG YOUSEF A. SHATNAWI INTRODUCTION

ROBOTICS ENG YOUSEF A. SHATNAWI INTRODUCTION ROBOTICS INTRODUCTION THIS COURSE IS TWO PARTS Mobile Robotics. Locomotion (analogous to manipulation) (Legged and wheeled robots). Navigation and obstacle avoidance algorithms. Robot Vision Sensors and

More information

Traffic Control for a Swarm of Robots: Avoiding Group Conflicts

Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Traffic Control for a Swarm of Robots: Avoiding Group Conflicts Leandro Soriano Marcolino and Luiz Chaimowicz Abstract A very common problem in the navigation of robotic swarms is when groups of robots

More information

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL

More information

Customer Showcase > Defense and Intelligence

Customer Showcase > Defense and Intelligence Customer Showcase Skyline TerraExplorer is a critical visualization technology broadly deployed in defense and intelligence, public safety and security, 3D geoportals, and urban planning markets. It fuses

More information

Autonomous Control for Unmanned

Autonomous Control for Unmanned Autonomous Control for Unmanned Surface Vehicles December 8, 2016 Carl Conti, CAPT, USN (Ret) Spatial Integrated Systems, Inc. SIS Corporate Profile Small Business founded in 1997, focusing on Research,

More information

Microsoft Scrolling Strip Prototype: Technical Description

Microsoft Scrolling Strip Prototype: Technical Description Microsoft Scrolling Strip Prototype: Technical Description Primary features implemented in prototype Ken Hinckley 7/24/00 We have done at least some preliminary usability testing on all of the features

More information

2006 CCRTS THE STATE OF THE ART AND THE STATE OF THE PRACTICE. Network on Target: Remotely Configured Adaptive Tactical Networks. C2 Experimentation

2006 CCRTS THE STATE OF THE ART AND THE STATE OF THE PRACTICE. Network on Target: Remotely Configured Adaptive Tactical Networks. C2 Experimentation 2006 CCRTS THE STATE OF THE ART AND THE STATE OF THE PRACTICE Network on Target: Remotely Configured Adaptive Tactical Networks C2 Experimentation Alex Bordetsky Eugene Bourakov Center for Network Innovation

More information

CS/NEUR125 Brains, Minds, and Machines. Due: Wednesday, February 8

CS/NEUR125 Brains, Minds, and Machines. Due: Wednesday, February 8 CS/NEUR125 Brains, Minds, and Machines Lab 2: Human Face Recognition and Holistic Processing Due: Wednesday, February 8 This lab explores our ability to recognize familiar and unfamiliar faces, and the

More information

Ground Robotics Market Analysis

Ground Robotics Market Analysis IHS AEROSPACE DEFENSE & SECURITY (AD&S) Presentation PUBLIC PERCEPTION Ground Robotics Market Analysis AUTONOMY 4 December 2014 ihs.com Derrick Maple, Principal Analyst, +44 (0)1834 814543, derrick.maple@ihs.com

More information

Comments of Shared Spectrum Company

Comments of Shared Spectrum Company Before the DEPARTMENT OF COMMERCE NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION Washington, D.C. 20230 In the Matter of ) ) Developing a Sustainable Spectrum ) Docket No. 181130999 8999 01

More information

Controlling vehicle functions with natural body language

Controlling vehicle functions with natural body language Controlling vehicle functions with natural body language Dr. Alexander van Laack 1, Oliver Kirsch 2, Gert-Dieter Tuzar 3, Judy Blessing 4 Design Experience Europe, Visteon Innovation & Technology GmbH

More information

See highlights on pages 1, 2 and 5

See highlights on pages 1, 2 and 5 See highlights on pages 1, 2 and 5 Dowell, S.R., Foyle, D.C., Hooey, B.L. & Williams, J.L. (2002). Paper to appear in the Proceedings of the 46 th Annual Meeting of the Human Factors and Ergonomic Society.

More information

Color and More. Color basics

Color and More. Color basics Color and More In this lesson, you'll evaluate an image in terms of its overall tonal range (lightness, darkness, and contrast), its overall balance of color, and its overall appearance for areas that

More information

CS594, Section 30682:

CS594, Section 30682: CS594, Section 30682: Distributed Intelligence in Autonomous Robotics Spring 2003 Tuesday/Thursday 11:10 12:25 http://www.cs.utk.edu/~parker/courses/cs594-spring03 Instructor: Dr. Lynne E. Parker ½ TA:

More information

Experimental Cooperative Control of Fixed-Wing Unmanned Aerial Vehicles

Experimental Cooperative Control of Fixed-Wing Unmanned Aerial Vehicles Experimental Cooperative Control of Fixed-Wing Unmanned Aerial Vehicles Selcuk Bayraktar, Georgios E. Fainekos, and George J. Pappas GRASP Laboratory Departments of ESE and CIS University of Pennsylvania

More information

Using Computational Cognitive Models to Build Better Human-Robot Interaction. Cognitively enhanced intelligent systems

Using Computational Cognitive Models to Build Better Human-Robot Interaction. Cognitively enhanced intelligent systems Using Computational Cognitive Models to Build Better Human-Robot Interaction Alan C. Schultz Naval Research Laboratory Washington, DC Introduction We propose an approach for creating more cognitively capable

More information

Workshop on Intelligent System and Applications (ISA 17)

Workshop on Intelligent System and Applications (ISA 17) Telemetry Mining for Space System Sara Abdelghafar Ahmed PhD student, Al-Azhar University Member of SRGE Workshop on Intelligent System and Applications (ISA 17) 13 May 2017 Workshop on Intelligent System

More information

Adding Content and Adjusting Layers

Adding Content and Adjusting Layers 56 The Official Photodex Guide to ProShow Figure 3.10 Slide 3 uses reversed duplicates of one picture on two separate layers to create mirrored sets of frames and candles. (Notice that the Window Display

More information

Usability Evaluation of Multi- Touch-Displays for TMA Controller Working Positions

Usability Evaluation of Multi- Touch-Displays for TMA Controller Working Positions Sesar Innovation Days 2014 Usability Evaluation of Multi- Touch-Displays for TMA Controller Working Positions DLR German Aerospace Center, DFS German Air Navigation Services Maria Uebbing-Rumke, DLR Hejar

More information

Photogrammetry. Lecture 4 September 7, 2005

Photogrammetry. Lecture 4 September 7, 2005 Photogrammetry Lecture 4 September 7, 2005 What is Photogrammetry Photogrammetry is the art and science of making accurate measurements by means of aerial photography: Analog photogrammetry (using films:

More information

The Representational Effect in Complex Systems: A Distributed Representation Approach

The Representational Effect in Complex Systems: A Distributed Representation Approach 1 The Representational Effect in Complex Systems: A Distributed Representation Approach Johnny Chuah (chuah.5@osu.edu) The Ohio State University 204 Lazenby Hall, 1827 Neil Avenue, Columbus, OH 43210,

More information

Here are some things to consider to achieve good quality photographic documentation for engineering reports.

Here are some things to consider to achieve good quality photographic documentation for engineering reports. Photography for Engineering Documentation Introduction Photographs are a very important engineering tool commonly used to document explorations, observations, laboratory and field test results and as-built

More information

OFFensive Swarm-Enabled Tactics (OFFSET)

OFFensive Swarm-Enabled Tactics (OFFSET) OFFensive Swarm-Enabled Tactics (OFFSET) Dr. Timothy H. Chung, Program Manager Tactical Technology Office Briefing Prepared for OFFSET Proposers Day 1 Why are Swarms Hard: Complexity of Swarms Number Agent

More information

Adaptive Action Selection without Explicit Communication for Multi-robot Box-pushing

Adaptive Action Selection without Explicit Communication for Multi-robot Box-pushing Adaptive Action Selection without Explicit Communication for Multi-robot Box-pushing Seiji Yamada Jun ya Saito CISS, IGSSE, Tokyo Institute of Technology 4259 Nagatsuta, Midori, Yokohama 226-8502, JAPAN

More information

Passive Radars as Sources of Information for Air Defence Systems

Passive Radars as Sources of Information for Air Defence Systems Passive Radars as Sources of Information for Air Defence Systems Wiesław Klembowski *, Adam Kawalec **, Waldemar Wizner *Saab Technologies Poland, Ostrobramska 101, 04 041 Warszawa, POLAND wieslaw.klembowski@saabgroup.com

More information

Consumer Behavior when Zooming and Cropping Personal Photographs and its Implications for Digital Image Resolution

Consumer Behavior when Zooming and Cropping Personal Photographs and its Implications for Digital Image Resolution Consumer Behavior when Zooming and Cropping Personal Photographs and its Implications for Digital Image Michael E. Miller and Jerry Muszak Eastman Kodak Company Rochester, New York USA Abstract This paper

More information

A DIALOGUE-BASED APPROACH TO MULTI-ROBOT TEAM CONTROL

A DIALOGUE-BASED APPROACH TO MULTI-ROBOT TEAM CONTROL A DIALOGUE-BASED APPROACH TO MULTI-ROBOT TEAM CONTROL Nathanael Chambers, James Allen, Lucian Galescu and Hyuckchul Jung Institute for Human and Machine Cognition 40 S. Alcaniz Street Pensacola, FL 32502

More information

CHAPTER 5. Image Interpretation

CHAPTER 5. Image Interpretation CHAPTER 5 Image Interpretation Introduction To translate images into information, we must apply a specialized knowlage, image interpretation, which we can apply to derive useful information from the raw

More information

APPLICATIONS OF VIRTUAL REALITY TO NUCLEAR SAFEGUARDS

APPLICATIONS OF VIRTUAL REALITY TO NUCLEAR SAFEGUARDS APPLICATIONS OF VIRTUAL REALITY TO NUCLEAR SAFEGUARDS Sharon Stansfield Sandia National Laboratories Albuquerque, NM USA ABSTRACT This paper explores two potential applications of Virtual Reality (VR)

More information

Haptic control in a virtual environment

Haptic control in a virtual environment Haptic control in a virtual environment Gerard de Ruig (0555781) Lourens Visscher (0554498) Lydia van Well (0566644) September 10, 2010 Introduction With modern technological advancements it is entirely

More information

Developing Frogger Player Intelligence Using NEAT and a Score Driven Fitness Function

Developing Frogger Player Intelligence Using NEAT and a Score Driven Fitness Function Developing Frogger Player Intelligence Using NEAT and a Score Driven Fitness Function Davis Ancona and Jake Weiner Abstract In this report, we examine the plausibility of implementing a NEAT-based solution

More information

Knowledge Enhanced Electronic Logic for Embedded Intelligence

Knowledge Enhanced Electronic Logic for Embedded Intelligence The Problem Knowledge Enhanced Electronic Logic for Embedded Intelligence Systems (military, network, security, medical, transportation ) are getting more and more complex. In future systems, assets will

More information

Navigation Styles in QuickTime VR Scenes

Navigation Styles in QuickTime VR Scenes Navigation Styles in QuickTime VR Scenes Christoph Bartneck Department of Industrial Design Eindhoven University of Technology Den Dolech 2, 5600MB Eindhoven, The Netherlands christoph@bartneck.de Abstract.

More information

The Importance of Spatial Resolution in Infrared Thermography Temperature Measurement Three Brief Case Studies

The Importance of Spatial Resolution in Infrared Thermography Temperature Measurement Three Brief Case Studies The Importance of Spatial Resolution in Infrared Thermography Temperature Measurement Three Brief Case Studies Dr. Robert Madding, Director, Infrared Training Center Ed Kochanek, Presenter FLIR Systems,

More information

Air-to-Ground Data Link: Proof of Concept Test Report. CoE

Air-to-Ground Data Link: Proof of Concept Test Report. CoE Scope of the Report Air-to-Ground Data Link: Proof of Concept Test Report CoE-17-003.1 The Center of Excellence for Advanced Technology Aerial Firefighting (CoE) is charged with researching, testing, and

More information

Human-Robot Interaction (HRI): Achieving the Vision of Effective Soldier-Robot Teaming

Human-Robot Interaction (HRI): Achieving the Vision of Effective Soldier-Robot Teaming U.S. Army Research, Development and Engineering Command Human-Robot Interaction (HRI): Achieving the Vision of Effective Soldier-Robot Teaming S.G. Hill, J. Chen, M.J. Barnes, L.R. Elliott, T.D. Kelley,

More information

ABSTRACT 1. INTRODUCTION

ABSTRACT 1. INTRODUCTION Preprint Proc. SPIE Vol. 5076-10, Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XIV, Apr. 2003 1! " " #$ %& ' & ( # ") Klamer Schutte, Dirk-Jan de Lange, and Sebastian P. van den Broek

More information

Relationship to theory: This activity involves the motion of bodies under constant velocity.

Relationship to theory: This activity involves the motion of bodies under constant velocity. UNIFORM MOTION Lab format: this lab is a remote lab activity Relationship to theory: This activity involves the motion of bodies under constant velocity. LEARNING OBJECTIVES Read and understand these instructions

More information

Automated Planetary Terrain Mapping of Mars Using Image Pattern Recognition

Automated Planetary Terrain Mapping of Mars Using Image Pattern Recognition Automated Planetary Terrain Mapping of Mars Using Image Pattern Recognition Design Document Version 2.0 Team Strata: Sean Baquiro Matthew Enright Jorge Felix Tsosie Schneider 2 Table of Contents 1 Introduction.3

More information

Senior Design I. Fast Acquisition and Real-time Tracking Vehicle. University of Central Florida

Senior Design I. Fast Acquisition and Real-time Tracking Vehicle. University of Central Florida Senior Design I Fast Acquisition and Real-time Tracking Vehicle University of Central Florida College of Engineering Department of Electrical Engineering Inventors: Seth Rhodes Undergraduate B.S.E.E. Houman

More information

SUGAR fx. LightPack 3 User Manual

SUGAR fx. LightPack 3 User Manual SUGAR fx LightPack 3 User Manual Contents Installation 4 Installing SUGARfx 4 What is LightPack? 5 Using LightPack 6 Lens Flare 7 Filter Parameters 7 Main Setup 8 Glow 11 Custom Flares 13 Random Flares

More information

Primer on GPS Operations

Primer on GPS Operations MP Rugged Wireless Modem Primer on GPS Operations 2130313 Rev 1.0 Cover illustration by Emma Jantz-Lee (age 11). An Introduction to GPS This primer is intended to provide the foundation for understanding

More information

ContextCapture Quick guide for photo acquisition

ContextCapture Quick guide for photo acquisition ContextCapture Quick guide for photo acquisition ContextCapture is automatically turning photos into 3D models, meaning that the quality of the input dataset has a deep impact on the output 3D model which

More information

Be aware that there is no universal notation for the various quantities.

Be aware that there is no universal notation for the various quantities. Fourier Optics v2.4 Ray tracing is limited in its ability to describe optics because it ignores the wave properties of light. Diffraction is needed to explain image spatial resolution and contrast and

More information

ISTAR Concepts & Solutions

ISTAR Concepts & Solutions ISTAR Concepts & Solutions CDE Call Presentation Cardiff, 8 th September 2011 Today s Brief Introduction to the programme The opportunities ISTAR challenges The context Requirements for Novel Integrated

More information

Robotic Systems. Jeff Jaster Deputy Associate Director for Autonomous Systems US Army TARDEC Intelligent Ground Systems

Robotic Systems. Jeff Jaster Deputy Associate Director for Autonomous Systems US Army TARDEC Intelligent Ground Systems Robotic Systems Jeff Jaster Deputy Associate Director for Autonomous Systems US Army TARDEC Intelligent Ground Systems Robotics Life Cycle Mission Integrate, Explore, and Develop Robotics, Network and

More information

Blue-Bot TEACHER GUIDE

Blue-Bot TEACHER GUIDE Blue-Bot TEACHER GUIDE Using Blue-Bot in the classroom Blue-Bot TEACHER GUIDE Programming made easy! Previous Experiences Prior to using Blue-Bot with its companion app, children could work with Remote

More information

Polarization Gratings for Non-mechanical Beam Steering Applications

Polarization Gratings for Non-mechanical Beam Steering Applications Polarization Gratings for Non-mechanical Beam Steering Applications Boulder Nonlinear Systems, Inc. 450 Courtney Way Lafayette, CO 80026 USA 303-604-0077 sales@bnonlinear.com www.bnonlinear.com Polarization

More information