Breaking the Keyhole in Human-Robot Coordination: Method and Evaluation Martin G. Voshell, David D. Woods
Abstract

When environment access is mediated through robotic sensors, field experience and naturalistic studies show that robot handlers have difficulty comprehending remote environments - they experience what domain practitioners often call a `soda straw'. This illustrates the keyhole effect in HRI, a phenomenon studied in the context of large virtual data space interfaces, and the current research seeks to reduce this effect. A simulation for human-robot search and rescue was created in a virtual NIST arena based on WTC response experiences. Pilot studies showed that traditional measures of performance were inadequate for analyzing control and exploration tasks in these environments. New measures were developed based on fractal path analysis describing the tortuosity of these goal-directed paths. New concepts for helping remote observers pick up environment affordances were tested using the simulation and evaluation measures. These studies draw on Gibsonian principles to reduce keyhole effects, enhancing functional presence.

Introduction: The Keyhole Effect in HRI

There are many challenges in designing successful human-robot systems (Woods, Tittle, Feil, and Roesler, 2004). When the robot is a stand-in for a remote human observer, the natural dynamic relationship between properties of the scene being explored and the human perceptual system is broken. This decoupling undermines the remote observer's perception of affordances in the scene (Gibson, 1986), as illustrated by recent cases of HRI where remote observers experience various difficulties in understanding the environment being traversed by a robotic system (Murphy, 2003, 2004). For example, practitioners often complain about what they call the `soda straw effect' due to the limited angular view available (Casper and Murphy, 2003). This is an example of the keyhole effect.
Typical consequences of the keyhole effect include missing new events, increased difficulty in navigating novel environments, and gaps or incoherent models of the explored space (Woods and Watts, 1997). Keyhole problems arise from the fact that typical virtual environments sever the coordination between the foveal field of view/focal attention and the orienting perceptual functions that help people fluently know where to look next, despite the potential for new interesting events to intrude on ongoing activities. Thus, keyhole problems cannot be solved simply by expanding the size of the field of view. When stuck looking through the `soda straw', operators have a difficult time understanding spatial layout. Operators easily miss alleys, landmarks are tough to discern, and the handler is usually forced to manually switch and integrate multiple views. Safe and successful navigation is more than just looking where one is going. In the natural world, humans tend to sample everywhere around where they are going in a context-sensitive way. This reminds us that the mechanisms that allow people to coordinate direction of gaze and direction of movement as they move through a changing scene are removed when viewing a scene through a robot's camera (Hughes and Lewis, 2004). Human gaze control is tuned to anticipate future movements and conditions of interest. Contrast how you would direct gaze as you turn to climb stairs scattered with debris, with various items or activities of interest at the top, versus how robotic platforms position their cameras during the same maneuver. Generally, the robot camera either points at each step one at a time or remains pointed at the ceiling as the robot climbs, whereas people direct and shift gaze in tight coordination with the affordances present in the situation given their purposes and context (e.g., when to look at the activities at the top of the stairs and when to look at potential obstacles along the stairs). The challenge for HRI is how to break down the keyhole and enhance a remote observer's understanding of the environment being traversed by the robotic system. This research addresses the question by creating a simulation of HRI in a search and rescue situation, developing new measures of HRI performance based on fractal path analysis, and testing new concepts for breaking down the keyhole in HRI.

From Field Work to Simulation

Field experience in search and rescue tasks has shown that keyhole effects and other problems in remote perception significantly reduce the potential benefits of rescue robots; see, for example, Casper and Murphy's experiences with rescue robots at the World Trade Center (Casper and Murphy, 2003). The first step in the research program on keyhole effects in HRI was to develop a simulation that captured the difficulties encountered in the field and allowed for detailed testing of HRI performance.
To accomplish this, a virtual human-operated robot-assisted search and rescue simulation was created based on Murphy's experiences from the WTC response and set within a virtual reconstruction of the NIST physical HRI test arena (National Institute of Standards and Technology). Michael Lewis pioneered Unreal Tournament videogame-based simulations for USAR applications, citing the benefits of realistic physics and high-fidelity graphics (Wang, Lewis, and Gennari, 2003). We collaborated with Lewis and started work with a model of the NIST Orange Reference Test Facility for Autonomous Robots. We heavily modified this environment to add many new obstacles, ambiguities, and world geometry. Within the simulation, operators had to perform multiple exploratory tasks analogous to actual search and rescue missions while facing many of the same physical and visual constraints seen in the field. The C/S/E/L Unreal code allows an investigator to introduce various robotic platforms and sensors, and to rapidly develop and test new interfaces and camera arrangements. A high-level quantitative analysis framework was built into the code so that event information and three-dimensional position data could be exported for evaluation.
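As a sketch of how such exported position data might be consumed downstream, the loader below assumes a simple `t,x,y,z` CSV layout; this layout is an illustrative assumption, not the actual C/S/E/L export format.

```python
import csv
import io

import numpy as np

def load_position_log(csv_text):
    """Parse an exported position log -- assumed here to consist of
    't,x,y,z' rows -- into a time vector and an (N, 3) position array."""
    times, positions = [], []
    for row in csv.reader(io.StringIO(csv_text)):
        t, x, y, z = map(float, row)
        times.append(t)
        positions.append((x, y, z))
    return np.array(times), np.array(positions)
```

Once in array form, the path data can feed velocity and tortuosity analyses directly.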
Novel Analysis Methods

What became evident from pilot studies with simulated search and rescue tasks was that traditional measures for analyzing the robot handler's performance in a complex environment with multiple tasks were not very informative. Gathering completion times and items collected simply did not capture the intricacies and problems inherent in search and rescue in remote exploration. We needed much richer ways of looking at goal-oriented paths in relation to the environment's characteristics and challenges. To meaningfully evaluate HRI activity, we borrowed techniques from two diverse fields: ecological perception and physiological entomology. Visually guided translation is well researched in perceptual psychology, and the most valuable data to focus on are the environment affordances (Gibson, 1986). Optic flow during translation contributes greatly to virtually any visually guided behavior (Warren, Morris, and Kalish, 1988), and one immediate application of this in robot control is the handler/robot's ability to perceive aperture affordances. For this, we chose to look at path approach velocity transitions in relation to the `passability' of apertures and obstacles in the remote environment. The other metric of analysis looks at the fractal dimension of the handler/robot's path. This method was first used by Dicke and Burrough (1988) to characterize the tortuosity of spider mite spatial exploration. Deviations in the fractal dimension of the handler/robot's goal-directed path through a complex environment in relation to various ambiguities and obstacles provide a metric that captures a very rich set of descriptive behaviors. Changes in the fractal dimension of the path provide significant insight into the handler/robot's search efficiency, path tortuosity, and overall space utilization in relation to handler goals and overall characteristics of the environment (see Figure 2).
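The dividers method can be sketched in a few lines: walk the recorded path with a fixed step ('divider') size δ, record the measured length L(δ) for several δ, and recover D from the slope of the log-log fit implied by L(δ) = kδ^(1−D). This is an illustrative reconstruction under those assumptions, not the authors' exact implementation.

```python
import numpy as np

def dividers_length(path, delta):
    """Length of the path as measured by stepping along it with a
    fixed divider (step size) of delta; path is an (N, d) array."""
    last = path[0]
    steps = 0
    for p in path[1:]:
        if np.linalg.norm(p - last) >= delta:
            last = p
            steps += 1
    return steps * delta

def fractal_dimension(path, deltas):
    """Estimate D from L(delta) = k * delta**(1 - D):
    log L = log k + (1 - D) * log delta, so D = 1 - slope."""
    lengths = [dividers_length(path, d) for d in deltas]
    slope, _ = np.polyfit(np.log(deltas), np.log(lengths), 1)
    return 1.0 - slope
```

A direct transit yields D near 1, while a convoluted, backtracking search path yields a noticeably larger D, which is exactly the efficiency signal used in the analyses below.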
Bringing Affordances to the Remote Observers

Expanding the keyhole is the goal of the research program. Using the knowledge gained from the simulation and these ecological measures, the next step was to discover new interface concepts for helping remote observers pick up the affordances in the environment and re-couple perception and action cycles. One design concept proposed to help overcome the keyhole and make these affordances more salient is Perspective Folding (see Figure 1). By folding the display screen around the remote observer and mapping camera sensor output onto it, we re-embed the observer in the scene and, for example, re-introduce peripheral optic flow cues that signal movement relative to the environment. We have developed a multi-camera version of Perspective Folding (illustrated in Figure 1) and a continuous spherical folding based on a single fixed camera called the Dynamic Perceptor Sphere (Feil et al., 2004). In contrast to a single wide-angle camera or an interface for switching among different camera feeds, in the 5-fold version of Perspective Folding an array of five cameras, each oriented at a different angle, provides the wrap-around effect. By doing this, we are attempting to perceptually better integrate multiple views into a global reference frame. The camera orientations are preserved on the robot handler's display and presented in depth; they do not just provide a larger field of view but serve as a frame of reference around the robot, attempting to heighten spatial awareness and convey robot-body size without image distortion. By integrating each of the local cameras, the operator immediately sees that areas usually in peripheral vision now explicitly surround the `robot' in the display. By doing this, we are attempting to capture and present some of the many cues that peripheral vision and optic flow contribute to locomotion, perception of self-motion, information about other moving objects, and the general 3D environment (Warren, 2003). The focus of expansion is mimicked in the arrangement of the planes folded inward, and the ground surface is continuously uninterrupted with the fixed camera pointing downward. In a Gibsonian sense, all of this information is fundamentally necessary (and available in the raw data) for the handler/robot ensemble to exhibit successful visually controlled behavior in starting and stopping, steering toward goals, slowing down, and avoiding obstacles in the remote environment (Gibson, 1986; Warren, 1988). For example, in tele-operation, the ground plane provides flow cues to motion that enhance the ability to perceive approach to an obstacle and smoothly steer around it while picking up other relevant aspects of the environment. In supervisory control, the question becomes whether the new concept helps the remote observer sense when the robot algorithms are going to have difficulty traversing the environment.

Study

Environments were rendered using the C/S/E/L-modified version of Unreal Tournament 2004 on a Pentium 4-based computer with an ATI RADEON graphics accelerator card. The virtual environment was displayed on a 19'' monitor at a resolution of 1600 x 1200 pixels. Observers sat in front of the monitor (approx. 57 cm) and used a dual-analog joystick to navigate the environments as tele-operators of the simulated robot.
Interface trials were randomly selected, and data were then collected and analyzed using an Apple Macintosh G4 series computer. Participants navigated a virtual robot through a modified version of the National Institute of Standards and Technology's Orange reference test arena for autonomous mobile robots. A series of obstacles, ambiguities, and world geometry were added to make navigation more challenging. Participants were placed in a training environment and given one of three interfaces (a single camera at 45°; five single `flat' cameras at 45° each, totaling 135° HV; or five single cameras slanted in perspective, `Perspective Folding', each at 45°, totaling 135° HV) to familiarize them with the interface, teach them how to identify goal items (`hazardous objects'), and learn the joystick control. Upon completion they were brought into the modified NIST arena with the same interface they were trained on and were instructed to find as many `hazardous objects' as quickly as possible. Subjects had 1 hour to complete the task before the `robot' lost all battery power. Fifteen Ohio State University students and staff participated in the study. All individuals were paid volunteers. All volunteers were informed that the purpose of the study was to learn how individuals navigate and how different arrangements of cameras in interfaces could affect navigation. Upon an auditory cue signifying the end of the trial, participants were asked about their experience, were given a debriefing statement, and had the chance to ask questions.

Findings and Discussion

A graphical overview of the results for a portion of the study is shown in Figure 2. The entire three-dimensional path is displayed for each participant in the single-flat and Perspective Folding interfaces. This is just a snapshot from an interactive graphic. Mean velocities were computed for each participant, and the arena was divided into separate transit and `special' stages. Velocity transitions while translating toward goals and obstacles were extracted (starting/stopping). Fractal tortuosity was also calculated to indicate how efficient or tortuous a path was (the lower the fractal number, the more efficient or direct the path to a target or around an obstacle). The two measures should be complementary if they are tapping into the quality of navigation, i.e., efficient paths as determined by fractal number should also have smooth velocity transitions near targets and obstacles. As an example of the value of the new measures, consider this aspect of the simulated environment. One specific area consists of chairs and rubble in the center of a passageway. To one side is a glass wall that can be seen through. In the back corner there is a small opening allowing access to a small unexplored area. Participants who determined that the transparent wall was in fact a wall tended to show efficient fractal scores, smooth linear paths around the central obstacles, and maintained velocity toward the small opening. Other individuals had a significant amount of trouble realizing the see-through wall was impassable. Their robots tended to start and stop abruptly and reorient multiple times, and this control difficulty was reflected in convoluted paths and tortuous fractal scores.
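The start/stop signature described above can be operationalized with a simple acceleration-threshold detector over sampled positions. The sampling interval and threshold below are illustrative assumptions, not the study's actual analysis parameters.

```python
import numpy as np

def speeds_from_path(positions, dt):
    """Instantaneous speed from successive (x, y, z) samples
    taken every dt seconds; positions is an (N, 3) array."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt

def count_abrupt_transitions(speeds, dt, accel_thresh=5.0):
    """Count samples whose acceleration magnitude exceeds a threshold --
    a crude proxy for the abrupt starting/stopping described above."""
    accel = np.diff(speeds) / dt
    return int(np.sum(np.abs(accel) > accel_thresh))
```

A handler who glides smoothly past the glass wall produces no threshold crossings, while a start-stop-reorient pattern produces several, complementing the fractal tortuosity score.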
Traditional measures were not informative, either in general or in identifying patterns within a trial. On the other hand, the fractal tortuosity scores coupled with velocity transitions showed how well different observers handled the difficulties in the environment. The study succeeded in demonstrating the value of these new measures. It also provided an opportunity to begin to examine new concepts to break down the keyhole. The new measures revealed different behavioral signatures for observers using the flat single-camera view versus the 5-fold view. Focusing on the more difficult portions of the environment, single-camera viewers tended to move quickly to the next visual cue, stop, identify the next point to move to, and again move quickly to that visual reference. This pattern appeared as a few rapid velocity transitions while translating toward goals, around obstacles, and approaching apertures. The fractal score showed that these participants followed more tortuous paths when exploring the environment. Participants using the 5-fold view showed more gradual velocity transitions as they approached targets, obstacles, and apertures, indicating a more natural navigation pattern. They also had somewhat lower fractal scores. These results are pointers to further studies. We are at work adding features to the environment to introduce new classes of perceptual ambiguities, for example, related to navigating around stairways and to extended sightlines where the observer can see areas but it is not obvious how to reach those distant parts of the space. The simulation is being expanded to enable more complicated situations and tasks that allow testing of how well participants build up a model of the space they are traversing. For example, after the robot has moved through a portion of the space, a collapse will occur which blocks some passages and introduces new goals that require a return to previously encountered key points. With these expanded capabilities, future studies will be able to examine the difference between tele-operating the robot and supervisory control schemes. The measures and use of the simulated search and rescue environment provide a means to examine the perception of affordances through video feeds from a robotic platform. Measures sensitive to these attributes show potential to help test new concepts for reducing ambiguities and breaking down the keyhole in human-robot navigation.

Acknowledgments

Prepared through collaborative participation in the Advanced Decision Architectures Consortium sponsored by the U.S. Army Research Laboratory under the Collaborative Technology Alliance Program, Cooperative Agreement DAAD.

References

Casper, J., and Murphy, R. R. (2003). Human-Robot Interaction during the Robot-Assisted Urban Search and Rescue Response at the World Trade Center. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 33.

Murphy, R. R. (2004). Human-Robot Interaction in Rescue Robotics. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 34(2).

Dicke, M., and Burrough, P. A. (1988). Using fractal dimensions for characterizing tortuosity of animal trails. Physiological Entomology, 13.

Gibson, J. J. (1986). The ecological approach to visual perception. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc. (Original work published 1979).

Hughes, S., and Lewis, M. (2004). Robotic camera control for remote exploration. In CHI '04: Proceedings of the 2004 Conference on Human Factors in Computing Systems. Vienna, Austria: ACM Press.

Wang, J., Lewis, M., and Gennari, J. (2003). USAR: A Game-Based Simulation for Teleoperation. Proceedings of the 47th Annual Meeting of the Human Factors and Ergonomics Society, Denver, CO.

Warren, W. H. (1988). Action modes and laws of control for the visual guidance of action. In O. G. Meijer and K. Roth (Eds.), Movement behavior: The motor-action controversy. Amsterdam: North-Holland.
Warren, W. H. (2003). Optic Flow. In L. Chalupa and J. Werner (Eds.), The Visual Neurosciences. Cambridge, MA: MIT Press.

Warren, W. H., Morris, M. W., and Kalish, M. (1988). Perception of translational heading from optical flow. Journal of Experimental Psychology: Human Perception and Performance, 14(4).

Warren, W. H., and Whang, S. (1987). Visual guidance of walking through apertures: Body-scaled information for affordances. Journal of Experimental Psychology: Human Perception and Performance, 13.

Woods, D. D., Tittle, J., Feil, M., and Roesler, A. (2004). Envisioning Human-Robot Coordination in Future Operations. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 34(2).

Woods, D. D., and Watts, J. C. (1997). How Not To Have To Navigate Through Too Many Displays. In Helander, M. G., Landauer, T. K., and Prabhu, P. (Eds.), Handbook of Human-Computer Interaction, 2nd edition. Amsterdam, The Netherlands: Elsevier Science.

Figure 1. Handlers perform action at a distance: simplified perception-action cycles looking at different sensor feeds from two robots in a remote environment. The robot on the left has a single camera and is sending back information to the operator on the left. The robot on the right is utilizing five cameras to send information back to a Perspective Folding display and is handled by the operator on the right.

Figure 2. Velocity field transitions and fractal path. The blown-up section shows a specific transit stage and an optimal path taken through it. The fractal dimension follows L(δ) = kδ^(1−D), where L is the path length, δ is the step size, k is a constant, and D is the fractal dimension. We use a dividers method similar to Dicke and Burrough (1988), as developed by Dr. Flip Phillips (Skidmore College).
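For fitting, the dividers relation is linearized in log-log coordinates, and D falls out of the slope of a straight-line fit to the measured (δ, L(δ)) pairs:

```latex
\log L(\delta) = \log k + (1 - D)\,\log\delta
\qquad\Longrightarrow\qquad
D = 1 - \frac{\Delta \log L}{\Delta \log \delta}
```

A perfectly direct path has length independent of δ (zero slope, D = 1); tortuous paths lose measured length as δ grows (negative slope, D > 1).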
More informationChapter 18 Optical Elements
Chapter 18 Optical Elements GOALS When you have mastered the content of this chapter, you will be able to achieve the following goals: Definitions Define each of the following terms and use it in an operational
More informationMixed-Initiative Interactions for Mobile Robot Search
Mixed-Initiative Interactions for Mobile Robot Search Curtis W. Nielsen and David J. Bruemmer and Douglas A. Few and Miles C. Walton Robotic and Human Systems Group Idaho National Laboratory {curtis.nielsen,
More informationREPORT ON THE CURRENT STATE OF FOR DESIGN. XL: Experiments in Landscape and Urbanism
REPORT ON THE CURRENT STATE OF FOR DESIGN XL: Experiments in Landscape and Urbanism This report was produced by XL: Experiments in Landscape and Urbanism, SWA Group s innovation lab. It began as an internal
More informationAutonomous Mobile Robot Design. Dr. Kostas Alexis (CSE)
Autonomous Mobile Robot Design Dr. Kostas Alexis (CSE) Course Goals To introduce students into the holistic design of autonomous robots - from the mechatronic design to sensors and intelligence. Develop
More informationMEM380 Applied Autonomous Robots I Winter Feedback Control USARSim
MEM380 Applied Autonomous Robots I Winter 2011 Feedback Control USARSim Transforming Accelerations into Position Estimates In a perfect world It s not a perfect world. We have noise and bias in our acceleration
More informationApplying CSCW and HCI Techniques to Human-Robot Interaction
Applying CSCW and HCI Techniques to Human-Robot Interaction Jill L. Drury Jean Scholtz Holly A. Yanco The MITRE Corporation National Institute of Standards Computer Science Dept. Mail Stop K320 and Technology
More informationABSTRACT. Figure 1 ArDrone
Coactive Design For Human-MAV Team Navigation Matthew Johnson, John Carff, and Jerry Pratt The Institute for Human machine Cognition, Pensacola, FL, USA ABSTRACT Micro Aerial Vehicles, or MAVs, exacerbate
More informationEvaluation of an Enhanced Human-Robot Interface
Evaluation of an Enhanced Human-Robot Carlotta A. Johnson Julie A. Adams Kazuhiko Kawamura Center for Intelligent Systems Center for Intelligent Systems Center for Intelligent Systems Vanderbilt University
More informationExploring Surround Haptics Displays
Exploring Surround Haptics Displays Ali Israr Disney Research 4615 Forbes Ave. Suite 420, Pittsburgh, PA 15213 USA israr@disneyresearch.com Ivan Poupyrev Disney Research 4615 Forbes Ave. Suite 420, Pittsburgh,
More informationApplications of Flash and No-Flash Image Pairs in Mobile Phone Photography
Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography Xi Luo Stanford University 450 Serra Mall, Stanford, CA 94305 xluo2@stanford.edu Abstract The project explores various application
More informationVisual compass for the NIFTi robot
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY IN PRAGUE Visual compass for the NIFTi robot Tomáš Nouza nouzato1@fel.cvut.cz June 27, 2013 TECHNICAL REPORT Available at https://cw.felk.cvut.cz/doku.php/misc/projects/nifti/sw/start/visual
More informationPSYCHOLOGICAL SCIENCE. Research Report
Research Report RETINAL FLOW IS SUFFICIENT FOR STEERING DURING OBSERVER ROTATION Brown University Abstract How do people control locomotion while their eyes are simultaneously rotating? A previous study
More informationSalient features make a search easy
Chapter General discussion This thesis examined various aspects of haptic search. It consisted of three parts. In the first part, the saliency of movability and compliance were investigated. In the second
More informationConstructing Representations of Mental Maps
Constructing Representations of Mental Maps Carol Strohecker Adrienne Slaughter Originally appeared as Technical Report 99-01, Mitsubishi Electric Research Laboratories Abstract This short paper presents
More informationConstructing Representations of Mental Maps
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Constructing Representations of Mental Maps Carol Strohecker, Adrienne Slaughter TR99-01 December 1999 Abstract This short paper presents continued
More informationROBOTICS ENG YOUSEF A. SHATNAWI INTRODUCTION
ROBOTICS INTRODUCTION THIS COURSE IS TWO PARTS Mobile Robotics. Locomotion (analogous to manipulation) (Legged and wheeled robots). Navigation and obstacle avoidance algorithms. Robot Vision Sensors and
More informationTerrain Classification for Autonomous Robot Mobility
Terrain Classification for Autonomous Robot Mobility from Safety, Security, Rescue Robotics to Planetary Exploration Andreas Birk, Todor Stoyanov, Yashodhan Nevatia, Rares Ambrus, Jann Poppinga, and Kaustubh
More informationCS594, Section 30682:
CS594, Section 30682: Distributed Intelligence in Autonomous Robotics Spring 2003 Tuesday/Thursday 11:10 12:25 http://www.cs.utk.edu/~parker/courses/cs594-spring03 Instructor: Dr. Lynne E. Parker ½ TA:
More informationRobotic Technology for USAR
Robotic Technology for USAR 16-899D Lecture Slides Role of Robotics in USAR Lower latency of first entry HAZMAT scheduling, preparation Structural analysis and approval Lower very high human risk Increase
More informationA Virtual Environments Editor for Driving Scenes
A Virtual Environments Editor for Driving Scenes Ronald R. Mourant and Sophia-Katerina Marangos Virtual Environments Laboratory, 334 Snell Engineering Center Northeastern University, Boston, MA 02115 USA
More informationTime-Lapse Panoramas for the Egyptian Heritage
Time-Lapse Panoramas for the Egyptian Heritage Mohammad NABIL Anas SAID CULTNAT, Bibliotheca Alexandrina While laser scanning and Photogrammetry has become commonly-used methods for recording historical
More informationPerceptual Rendering Intent Use Case Issues
White Paper #2 Level: Advanced Date: Jan 2005 Perceptual Rendering Intent Use Case Issues The perceptual rendering intent is used when a pleasing pictorial color output is desired. [A colorimetric rendering
More informationProjection Based HCI (Human Computer Interface) System using Image Processing
GRD Journals- Global Research and Development Journal for Volume 1 Issue 5 April 2016 ISSN: 2455-5703 Projection Based HCI (Human Computer Interface) System using Image Processing Pankaj Dhome Sagar Dhakane
More informationKey-Words: - Fuzzy Behaviour Controls, Multiple Target Tracking, Obstacle Avoidance, Ultrasonic Range Finders
Fuzzy Behaviour Based Navigation of a Mobile Robot for Tracking Multiple Targets in an Unstructured Environment NASIR RAHMAN, ALI RAZA JAFRI, M. USMAN KEERIO School of Mechatronics Engineering Beijing
More informationUsing Computational Cognitive Models to Build Better Human-Robot Interaction. Cognitively enhanced intelligent systems
Using Computational Cognitive Models to Build Better Human-Robot Interaction Alan C. Schultz Naval Research Laboratory Washington, DC Introduction We propose an approach for creating more cognitively capable
More informationImage Characteristics and Their Effect on Driving Simulator Validity
University of Iowa Iowa Research Online Driving Assessment Conference 2001 Driving Assessment Conference Aug 16th, 12:00 AM Image Characteristics and Their Effect on Driving Simulator Validity Hamish Jamson
More informationToward an Integrated Ecological Plan View Display for Air Traffic Controllers
Wright State University CORE Scholar International Symposium on Aviation Psychology - 2015 International Symposium on Aviation Psychology 2015 Toward an Integrated Ecological Plan View Display for Air
More informationTHE RELATIVE IMPORTANCE OF PICTORIAL AND NONPICTORIAL DISTANCE CUES FOR DRIVER VISION. Michael J. Flannagan Michael Sivak Julie K.
THE RELATIVE IMPORTANCE OF PICTORIAL AND NONPICTORIAL DISTANCE CUES FOR DRIVER VISION Michael J. Flannagan Michael Sivak Julie K. Simpson The University of Michigan Transportation Research Institute Ann
More information3D and Sequential Representations of Spatial Relationships among Photos
3D and Sequential Representations of Spatial Relationships among Photos Mahoro Anabuki Canon Development Americas, Inc. E15-349, 20 Ames Street Cambridge, MA 02139 USA mahoro@media.mit.edu Hiroshi Ishii
More informationSpatio-Temporal Retinex-like Envelope with Total Variation
Spatio-Temporal Retinex-like Envelope with Total Variation Gabriele Simone and Ivar Farup Gjøvik University College; Gjøvik, Norway. Abstract Many algorithms for spatial color correction of digital images
More informationENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS
BY SERAFIN BENTO MASTER OF SCIENCE in INFORMATION SYSTEMS Edmonton, Alberta September, 2015 ABSTRACT The popularity of software agents demands for more comprehensive HAI design processes. The outcome of
More informationMarineSIM : Robot Simulation for Marine Environments
MarineSIM : Robot Simulation for Marine Environments P.G.C.Namal Senarathne, Wijerupage Sardha Wijesoma,KwangWeeLee, Bharath Kalyan, Moratuwage M.D.P, Nicholas M. Patrikalakis, Franz S. Hover School of
More informationImproving Emergency Response and Human- Robotic Performance
Improving Emergency Response and Human- Robotic Performance 8 th David Gertman, David J. Bruemmer, and R. Scott Hartley Idaho National Laboratory th Annual IEEE Conference on Human Factors and Power Plants
More informationDeveloping a Testbed for Studying Human-Robot Interaction in Urban Search and Rescue
Developing a Testbed for Studying Human-Robot Interaction in Urban Search and Rescue Michael Lewis University of Pittsburgh Pittsburgh, PA 15260 ml@sis.pitt.edu Katia Sycara and Illah Nourbakhsh Carnegie
More informationThe Mona Lisa Effect: Perception of Gaze Direction in Real and Pictured Faces
Studies in Perception and Action VII S. Rogers & J. Effken (Eds.)! 2003 Lawrence Erlbaum Associates, Inc. The Mona Lisa Effect: Perception of Gaze Direction in Real and Pictured Faces Sheena Rogers 1,
More informationPerception. Introduction to HRI Simmons & Nourbakhsh Spring 2015
Perception Introduction to HRI Simmons & Nourbakhsh Spring 2015 Perception my goals What is the state of the art boundary? Where might we be in 5-10 years? The Perceptual Pipeline The classical approach:
More informationBias errors in PIV: the pixel locking effect revisited.
Bias errors in PIV: the pixel locking effect revisited. E.F.J. Overmars 1, N.G.W. Warncke, C. Poelma and J. Westerweel 1: Laboratory for Aero & Hydrodynamics, University of Technology, Delft, The Netherlands,
More informationDiscriminating direction of motion trajectories from angular speed and background information
Atten Percept Psychophys (2013) 75:1570 1582 DOI 10.3758/s13414-013-0488-z Discriminating direction of motion trajectories from angular speed and background information Zheng Bian & Myron L. Braunstein
More informationEnhancing Robot Teleoperator Situation Awareness and Performance using Vibro-tactile and Graphical Feedback
Enhancing Robot Teleoperator Situation Awareness and Performance using Vibro-tactile and Graphical Feedback by Paulo G. de Barros Robert W. Lindeman Matthew O. Ward Human Interaction in Vortual Environments
More information2/3/2016. How We Move... Ecological View. Ecological View. Ecological View. Ecological View. Ecological View. Sensory Processing.
How We Move Sensory Processing 2015 MFMER slide-4 2015 MFMER slide-7 Motor Processing 2015 MFMER slide-5 2015 MFMER slide-8 Central Processing Vestibular Somatosensation Visual Macular Peri-macular 2015
More informationUniversity of Geneva. Presentation of the CISA-CIN-BBL v. 2.3
University of Geneva Presentation of the CISA-CIN-BBL 17.05.2018 v. 2.3 1 Evolution table Revision Date Subject 0.1 06.02.2013 Document creation. 1.0 08.02.2013 Contents added 1.5 12.02.2013 Some parts
More informationTeams for Teams Performance in Multi-Human/Multi-Robot Teams
PROCEEDINGS of the HUMAN FACTORS and ERGONOMICS SOCIETY 54th ANNUAL MEETING - 2010 438 Teams for Teams Performance in Multi-Human/Multi-Robot Teams Pei-Ju Lee, Huadong Wang, Shih-Yi Chien, and Michael
More informationHaptic Cueing of a Visual Change-Detection Task: Implications for Multimodal Interfaces
In Usability Evaluation and Interface Design: Cognitive Engineering, Intelligent Agents and Virtual Reality (Vol. 1 of the Proceedings of the 9th International Conference on Human-Computer Interaction),
More informationFigure 1 HDR image fusion example
TN-0903 Date: 10/06/09 Using image fusion to capture high-dynamic range (hdr) scenes High dynamic range (HDR) refers to the ability to distinguish details in scenes containing both very bright and relatively
More informationFuzzy-Heuristic Robot Navigation in a Simulated Environment
Fuzzy-Heuristic Robot Navigation in a Simulated Environment S. K. Deshpande, M. Blumenstein and B. Verma School of Information Technology, Griffith University-Gold Coast, PMB 50, GCMC, Bundall, QLD 9726,
More informationOFFensive Swarm-Enabled Tactics (OFFSET)
OFFensive Swarm-Enabled Tactics (OFFSET) Dr. Timothy H. Chung, Program Manager Tactical Technology Office Briefing Prepared for OFFSET Proposers Day 1 Why are Swarms Hard: Complexity of Swarms Number Agent
More informationLearning and Using Models of Kicking Motions for Legged Robots
Learning and Using Models of Kicking Motions for Legged Robots Sonia Chernova and Manuela Veloso Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 {soniac, mmv}@cs.cmu.edu Abstract
More informationHaptic presentation of 3D objects in virtual reality for the visually disabled
Haptic presentation of 3D objects in virtual reality for the visually disabled M Moranski, A Materka Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND marcin.moranski@p.lodz.pl,
More information1) Complexity, Emergence & CA (sb) 2) Fractals and L-systems (sb) 3) Multi-agent systems (vg) 4) Swarm intelligence (vg) 5) Artificial evolution (vg)
1) Complexity, Emergence & CA (sb) 2) Fractals and L-systems (sb) 3) Multi-agent systems (vg) 4) Swarm intelligence (vg) 5) Artificial evolution (vg) 6) Virtual Ecosystems & Perspectives (sb) Inspired
More informationEFFECT OF INERTIAL TAIL ON YAW RATE OF 45 GRAM LEGGED ROBOT *
EFFECT OF INERTIAL TAIL ON YAW RATE OF 45 GRAM LEGGED ROBOT * N.J. KOHUT, D. W. HALDANE Department of Mechanical Engineering, University of California, Berkeley Berkeley, CA 94709, USA D. ZARROUK, R.S.
More informationCOLOR IMAGE QUALITY EVALUATION USING GRAYSCALE METRICS IN CIELAB COLOR SPACE
COLOR IMAGE QUALITY EVALUATION USING GRAYSCALE METRICS IN CIELAB COLOR SPACE Renata Caminha C. Souza, Lisandro Lovisolo recaminha@gmail.com, lisandro@uerj.br PROSAICO (Processamento de Sinais, Aplicações
More informationH2020 RIA COMANOID H2020-RIA
Ref. Ares(2016)2533586-01/06/2016 H2020 RIA COMANOID H2020-RIA-645097 Deliverable D4.1: Demonstrator specification report M6 D4.1 H2020-RIA-645097 COMANOID M6 Project acronym: Project full title: COMANOID
More informationIntroduction to Humans in HCI
Introduction to Humans in HCI Mary Czerwinski Microsoft Research 9/18/2001 We are fortunate to be alive at a time when research and invention in the computing domain flourishes, and many industrial, government
More information