An Integrated Immersive Simulator for the Dismounted Soldier


An Integrated Immersive Simulator for the Dismounted Soldier

Carolina Cruz-Neira, Dirk Reiners, Jan P. Springer, Carsten Neumann, Christian N.S. Odom
University of Louisiana at Lafayette
Lafayette, LA
{carolina, dirk, jan.springer, carsten, cno4461}@louisiana.edu

Kathy Kehring
U.S. Army Research Laboratory
Aberdeen Proving Ground, MD
kathy.l.kehring.civ@mail.mil

ABSTRACT

Immersive military training simulators have been available for over thirty years, but most of them have been targeted at training forces on vehicle operations and missions (e.g., flight simulators). These simulators typically combine physical devices, such as cockpits or cabins, with a large display, such as a dome or a tiled wall, to present the scenario to the trainees. However, similar setups for the training of dismounted Soldiers have not yet been widely deployed. This is primarily because in a vehicle simulator the trainee is stationary with respect to the physical mock-up, while for a dismounted Soldier the simulator must provide the means for the Soldier to physically move in the virtual space. Furthermore, the simulator must also enable Soldiers to experience the physical exertion of the exercise. An additional level of complexity when developing immersive simulators for dismounted Soldiers is the creation of complex scenarios: the required level of detail and fidelity is significantly higher than for vehicle simulations, and Soldiers need to be trained on a wide variety of scenarios within the same area. We present an immersive system for the dismounted Soldier with two major components. First, a physical interface, an omni-directional treadmill, combined with a newly designed surround-screen stereoscopic display that enables Soldiers to walk, run, crawl, and shoot in a virtual space. Second, a software framework for the rapid creation, execution, and monitoring of training scenarios. The integration of these two components provides a unique environment to perform training studies that require a variety of scenarios and physical exertion of the trainees.

ABOUT THE AUTHORS

Dr. Carolina Cruz-Neira is the W. Hansen Hall and Mary Officer Hall/BORSF Chair in Computer Engineering and the Chief Scientist of LITE at the University of Louisiana at Lafayette (UL Lafayette). She is the co-inventor of the CAVE and the developer of the CAVELibs. Her research is on software engineering for modeling and simulation, and on applications and usability studies of virtual environments. She is an ACM SIGGRAPH Computer Graphics Pioneer, holds the 2007 Virtual Reality Technical Achievement Award from the IEEE Visualization and Graphics Technical Committee, and the 2009 International Digital Media and the Arts Association Distinguished Career Award.

Dr. Dirk P. Reiners is an Assistant Professor at the Center for Advanced Computer Studies at UL Lafayette. His work focuses on interactive 3D graphics and virtual reality technology and applications. He has more than 15 years of experience in these fields in both academic and industrial settings. He is the project lead of the Open Source OpenSG scene-graph project, one of the leading scene-graph systems today. He received the I/ITSEC Best Paper Award in 2006 and the IEEE Virtual Reality Best Paper Award in 2009.

Dr. Jan P. Springer is a Senior Research Scientist at UL Lafayette. Until 2008 he was with the Virtual Reality Systems Group at Bauhaus-Universität Weimar in Germany. His work is on virtual reality systems for modeling and simulation, in areas such as cluster-based displays, multi-viewer stereo, software frameworks, and interactive high-quality rendering.

Carsten Neumann holds an M.Sc. in mathematics from Technische Universität Darmstadt, Germany, and is currently a research scientist at UL Lafayette working on the 3rd Generation Omnidirectional Treadmill Simulator. He has eight years of experience developing the Open Source scene-graph system OpenSG, four as a core developer. His main areas of interest are real-time 3D graphics and distributed immersive systems.

Christian N.S. Odom is currently a Ph.D. candidate and a research scientist at UL Lafayette. His main areas of interest are interactive systems and human-computer interaction.

Kathy L. Kehring is an Electronics Engineer with over 20 years of experience working at the U.S. Army Research Laboratory (ARL). She also served as a Science Advisor to the Commanding General of V Corps in Heidelberg, Germany from 1997 to 1999. Currently, she manages ARL's Tactical Environment Simulation Facility, where she has been the technical lead in the development of an immersive simulator that incorporates a state-of-the-art mobility interface. She also develops methodologies, scenarios, and data-collection protocols for immersive simulation-based research on issues affecting the cognitive and physical performance of dismounted Soldiers.

2011 Paper No. 11320 Page 1 of 11

INTRODUCTION

Immersive military training simulators have been available for over thirty years. However, most of these simulators have been vehicle-centric, such as flight simulators. Vehicle-centric simulators typically use a stationary portion of the real vehicle, such as a cockpit or a cabin, placed inside a large display like a dome or tiled wall to present the scenario to the trainees. In these simulators, the trainee is stationary with respect to the physical mockup, and all movement occurs in the virtual space through the simulation of the vehicle dynamics in the environment.

Dismounted Soldiers are the military forces specifically trained for the role of close-proximity combat, engaging enemy forces face to face. Part of their role is to gain access to and operate in areas that cannot be reached by vehicles. They are the branch of the military requiring the most physically demanding training, in addition to cognitive training in decision-making, tactical skills, and situational awareness. The use of similar virtual environments (VEs) for the training of dismounted Soldiers has not yet been as widely deployed, because the simulation capabilities required to support training for dismounted Soldiers differ significantly from those for Soldiers in vehicles. A key differentiating feature of training simulators for dismounted Soldiers is that the simulator must provide the means for the Soldier to physically move in the virtual space, as well as the ability for the Soldiers to feel the physical exertion of the movement. Another key differentiating factor for dismounted Soldier training is the need for a wide variety of rapidly evolving scenarios that reflect the operational conditions of a mission. These scenarios need to allow for significant variation during different training sessions because the goal is not to become proficient at manipulating a series of controls, as in vehicle simulators, but to learn and improve both the cognitive and physical skills needed to handle a situation.

This paper presents the results of a collaboration between the University of Louisiana at Lafayette and the Human Research and Engineering Directorate (HRED), U.S. Army Research Laboratory (ARL) at Aberdeen Proving Ground to develop a Dismounted Soldier Training System that supports physical motions like walking, jogging, and crawling within a VE. The system integrates software as well as hardware by introducing an omni-directional treadmill into a CAVE-like environment with software that enables the dynamic construction of diverse scenarios.

BACKGROUND

There are two approaches currently used to train dismounted Soldiers: real-life facilities and computer simulators. Real-life facilities, such as the U.S. Army Yuma Proving Ground (Yuma, 2011), provide large spaces, buildings, props, and infrastructure resembling real-world locations and combat conditions. These approaches place the Soldier in situations very similar to those found in actual missions. They are limited in how many Soldiers can be trained, are labor-intensive to prepare and maintain, and are inflexible in the sense that they cannot be easily set up for different types of terrain, weather conditions, or urban landscapes (Knerr et al., 2002). However, these real-life training environments allow Soldiers to train motor skills and to prepare for the physical demands of on-foot missions.

Computer-based simulators for the dismounted Soldier have proliferated in recent years due to the availability of affordable graphics systems and the increasingly high quality of interactive graphics. Desktop-based simulators (DVTE, 2011), (SVS, 2001) are widely deployed across the military branches because they use standard PC technology without any specific requirements in terms of space, devices, or specialized maintenance. In these simulators, Soldiers do not have any physical mobility capabilities; locomotion is achieved through interactive devices such as a mouse or a joystick. Immersive simulators are recently starting to become available, such as Flatworld (Pair et al., 2003), which provides large rear-projection screens embedded into physical props to simulate room interiors, views to the outside, or a building's exterior. These systems are more flexible than real-life training grounds because they enable the development of a variety of missions and situations within the same physical space. However, due to space requirements and the use of physical props, these kinds of immersive simulators have a limited range of scenarios. Other immersive simulators recreate the scenarios and missions entirely in the virtual world through the use of surrounding projection screens or head-mounted displays (Intelligent Decisions, 2011) (FITE, 2011) (Virtra, 2011). These simulators have the advantage of allowing a wider range of scenarios under different (simulated) environmental conditions, such as weather, time of day, or weapon damage.

In general, computer-based simulators have significant advantages over live simulators (Fletcher, 2009) because they enable the representation of different scenarios and missions, provide a safe training environment to immerse Soldiers in hazardous situations, do not require live actors or large props, and use only small physical spaces compared to a live simulator facility. However, most computer-based simulators have focused on training the dismounted Soldier in tactical skills, situational awareness, coordination, decision-making, and communications, without incorporating a physical training element. Recent studies (Knerr, 2009) show that Soldiers training in virtual simulators reported improvements in their tactical, coordination, and communication skills but, at the same time, felt limited in improving the physical skills required for the exercise. Experience and studies (Knerr, 2007) show that computer-based simulators have great potential to train the dismounted Soldier, but there is still a lack of simulators that also provide training of the physical and motor skills needed for on-foot missions.

DISMOUNTED SOLDIER TRAINING NEEDS

The Army Strategic Planning Guidance 2005 (U.S. Army, 2005) and the Soldier CATT ORD recognize that the dismounted Soldier remains the centerpiece of the US Army and that training is critical for today's non-linear battlefields.
The document presents the broad training requirements for the dismounted Soldier by emphasizing the need for:

- Adaptability and the capability to respond to rapidly changing situations, training not only to react to changes but also to manipulate the environment to create the best possible results.
- Familiarity with the culture, history, and language of the area of operations.
- Rapidly and accurately assessing the evolving situation in the area of operations.
- Situation understanding beyond the tactical level.
- Conducting close(-quarter) combat operations in difficult terrain and weather conditions.
- Training in a diverse range of environments, terrains, and situations.
- Frequent and repetitive complex task training.

Most of today's computer-based simulators address these issues but exhibit limited or no capability to provide a realistic perception of movement and the physical exertion associated with that movement. This is a critical component of training for the dismounted Soldier because in most of their operations they will have to walk, run, crawl, kneel, crouch, and go prone. Our work addresses the training needs for the dismounted Soldier as listed above, in particular the ability to physically move through the VE. Our system is intended to serve as a platform to evaluate virtual training and dismounted Soldier performance under different stress conditions. In the long term, the framework presented can support the development of training exercises incorporating physical exertion.

PREVIOUS WORK WITH THE ODT

HRED currently operates an omni-directional treadmill (ODT) system embedded in a 4-wall CAVE in their Tactical Environment Simulation Facility (TESF). The screens surround the user for 360º immersion while moving on the ODT.
The system is primarily used for performing human-factors research related to Soldier performance and training of the dismounted Soldier, as well as for the evaluation of new technologies for Soldiers. HRED has been conducting research in the TESF for several years and through this experience has identified a set of challenges and limitations of the current experimental environment that led to the definition of the requirements for the Dismounted Soldier Training System (DSTS) project.

HRED System Requirements

Ability to rapidly create basic scenery that resembles the operating area for a specific training task

A significant limitation faced by HRED researchers is the difficulty of building their own base scenery to create the scenarios. They have a preset group of static scenes representing specific areas, such as a model of the Aberdeen Proving Grounds, a neighborhood in an American city, and an open field. Creating new base scenery (e.g., a village in the Middle East) requires 3D modeling and programming to develop the new environment. This approach is expensive in terms of effort, time, and cost and limits the researchers to what is placed in the scenario by the modeling experts. Therefore, there is a need for a system that enables HRED researchers to build base scenery with the look of the environment needed for training and experimentation. This translates into more flexibility as well as lower costs for creating new experiments.

Ability to introduce changes during a training exercise

Another important need is to monitor and control a scenario while the Soldiers are in the VE. The trainer and/or psychologist needs to be able to introduce changes to the environment and to the situation during an exercise. Today's battlefields present rapidly changing situations, and Soldiers must be ready to understand and respond to those unexpected changes. Researchers need to be able to control a training exercise and introduce unexpected changes resembling those found in real situations to evaluate how Soldiers perform under unexpected and stressful conditions.

Simplification of the operation of the immersive space

Most scenarios are first designed in a regular desktop environment and then visualized in the immersive space. It is important that the process of designing the scenario includes a way to connect the desktop-based design and editing environment with the viewing and manipulation of the scenario as it is being built inside the immersive space. Researchers must be able to gradually increase the complexity, visual quality, and environmental details with immediate feedback on their scene and scenario edits. Furthermore, the initialization and operation of the immersive environment must be possible from the desktop system, without requiring complex manipulation of the immersive equipment.

Integration of functional weapons in the virtual space

Hand-held weapons are integral to the dismounted Soldier, so it is necessary to perform training with weapons that are as close to reality as possible. Current solutions are based on modified real weapons and are subject to the same regulations and handling. In addition, these modifications either do not provide information feedback about the weapon to the controlling computer system or, if they do, usually use a proprietary communication protocol.
Researchers need solutions for integrating arms into the virtual world that allow for easy use of different weapons, based on instrumentation and an open communication protocol.

Enable monitoring and data collection during training exercises

In the existing system, the collection of physiological and other physical Soldier data for after-action review is decoupled from the virtual environment, making the correlation between events in the virtual space (like attacks or explosions) and the Soldier's reactions a manual and error-prone process. A more robust approach is to integrate the data collection into the actual simulation framework to allow synchronized and coordinated data collection.

Use of a non-proprietary framework

Most simulators use proprietary software, which creates dependency on a single vendor and does not allow extensions or interconnections to other software and/or hardware systems. Furthermore, simulators have a limited lifetime and may not be upgradable due to product discontinuation or the vendor no longer being in business. HRED needs a software framework for their experiments and training that is Open Source and is designed with appropriate software methodologies, so it can be used and expanded based on the needs of the research program. This would allow multiple collaborators to work together and to capitalize on new developments.

DISMOUNTED SOLDIER TRAINING SYSTEM

The DSTS is a next-generation system using the ODT to support HRED's experimental activities. Our DSTS addresses the specific system requirements defined by HRED, emphasizing flexibility of scenario design and simplicity of use. Our system has two major components: a combination of a physical interface, an omni-directional treadmill, with a newly designed low-cost stereoscopic display composed of a set of surround screens to enable Soldiers to walk, jog, crawl, and shoot in a virtual space; and a software framework for the rapid creation, execution, and monitoring of training scenarios. These two components are integrated to provide a seamless environment for performing dismounted-Soldier human-factors studies requiring a variety of scenarios and data-monitoring capabilities. Figure 1 shows the DSTS system located at UL Lafayette. The user is body-tracked in the environment to ensure that the correct visual, auditory, and motion feedback is provided. The hardware setup is controlled by a software framework that allows for the creation, execution, and monitoring of the virtual reality scenarios. The following sections describe the components of the DSTS in technical detail.

Figure 1. Dismounted Soldier Training System at UL Lafayette.

Immersive System

One general limitation of surround-screen projection-based systems, such as a CAVE, is their high cost, which can range anywhere from $400K to more than a million dollars. A significant portion of this is the design cost, as each system is one-of-a-kind, based on the space available to build the system. Additionally, these systems require rather complex procedures for calibration, maintenance, and operation, as well as specialized and certified personnel to handle the projectors. Although the group at HRED is not looking to replace their immersive system, we took the opportunity to set up our system at the University to address the issues of cost, space, maintenance, and operation. For the DSTS project, our budget for the projection system (including the computer cluster) was limited to $75K, which made it impossible to work with any of the commercially available solutions. We decided to explore the possibility of designing and building a projection system using affordable off-the-shelf components for the projectors, frame structure, and screens to address the issues of cost and the complexity of management. Furthermore, our design addressed other limiting issues of immersive projection-based systems by making them the driving design constraints:

- Eliminate the recurrent costs of maintenance plans: we wanted small off-the-shelf projectors, like those used in regular conference rooms, which are easy to set up and calibrate.
- Avoid a monolithic frame structure and hard-to-upgrade individual components: we explored designing a system in which each screen frame is self-supporting to avoid the problem of screens sticking to each other after a while. We also minimized the corner seam.
- Eliminate the need for special cooling and power requirements: our room could not be remodeled to bring in additional power or higher voltage, and we could not control the temperature. This meant we needed a system that could operate under the cooling and power conditions found in a typical office space.
- Do not use more than one projector per screen: multiple projectors per screen add extra complexity for calibration and maintenance as well as cooling and power consumption. We wanted to provide 3D depth perception using a single projector per screen.
- Do not create a customized design: there was a potential need to replicate our design in other facilities, so we wanted a design that could adapt easily to different room sizes and space layouts.

With these constraints, we knew we had to make some compromises between the quality of the display and the overall simplicity of the installation. However, we were striving for a solution that is good enough for the majority of applications and scenarios that we wanted to deploy. After reviewing currently available projectors, we decided to use the DepthQ-WXGA HD 3D video projector (DepthQ, 2011). This projector has a resolution of 1280x720 pixels, which is a little lower than those used in most existing CAVEs (1280x1024) but still provides a good resolution for training exercises. Additionally, this projector supports active stereo, allowing us to provide depth perception with a single projector per screen using active 3D glasses. Our main challenge with this particular projector was that it was designed to be placed on a flat surface and project an image upwards without the need for major adjustments. Therefore, the projection path is not straight from the center but somewhat skewed to compensate for the table surface. We solved this problem by designing an adjustable projector stand and placing the projectors upside-down in it. The stand also provides additional alignment and calibration possibilities for the system.

The room in which we were installing the projection system could not support anything attached to the ceiling, walls, or floor, so we had to design a self-supporting structure to hold the screens. This frame structure was raised two feet above the ground so as to align the bottom of the screens with the ODT's surface. For the frame materials we used standard T-slotted aluminum framing material. For the screens we deployed a flexible rear-projection screen material strong enough to withstand the tension when wrapped around the frame without sagging. Each screen frame rests on another screen frame at their corners, which provides tight corner seams. This design does not apply any pressure to the seams, to avoid the fusion of screen material that happens over time when flexible screen materials are tightly placed together.

Once these two sketches are done, the user, through a configuration GUI shown in Figure 5, specifies the parameters that define the details for the scenario. In our example, the user will select a Middle East desert look for the entire scene. The user also specifies the size of the scenario in square miles or kilometers and the maximum terrain height. The user can also specify other details, such as the type of buildings (single-family homes, stores, temples), the type of vehicles on the streets and roads, and additional props such as trees, electrical poles, debris, barricades, etc. Once the civilization and height maps are sketched and configured, our system takes the information and generates the detailed scenario.

Terrain Generation

The first step is to build the terrain with the correct scaling in size and height from the configuration. We create a tiled ground model so we can provide better performance, especially better intersection performance when the user is walking the terrain on the ODT. Next our algorithm generates the urban areas by computing the placement of the different buildings in the scene. The building placement algorithm is as follows:

- Go through the civilization map and find a housing area.
- Find the house set for this housing area.
- Put the first house along the edge. Does it fit completely in the patch? Is the ground level enough?
- Walk around the edge of the patch, trying to place further houses, until the circle is completed.
- (optional) Put a second row of houses behind the first to close gaps.
- Generate the house shadow image and road mask.

As the target users of this simulation are foot-based Soldiers, the level of detail of the roads and ground needs to be high enough for a realistic experience. In many systems roads are created geometrically, which comes at a high cost in terms of geometric complexity, both from the roads themselves as well as the need to cut them out of the terrain to avoid the ground sticking out through the road.
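As a rough illustration of the placement loop above, the following sketch places houses along the edge of a housing patch. The rectangular-patch representation, the fixed spacing, and the `fits` predicate are our own simplifying assumptions for illustration, not the authors' implementation, which works on patches extracted from the civilization map.

```python
# Hypothetical sketch of the building-placement loop described in the text.
# A housing patch is simplified to an axis-aligned rectangle (x0, y0, x1, y1).

def edge_positions(patch, spacing):
    """Walk around the edge of a rectangular patch, collecting candidate
    house positions at a fixed spacing."""
    x0, y0, x1, y1 = patch
    positions = []
    x = x0
    while x <= x1:                      # top and bottom edges
        positions.append((x, y0))
        positions.append((x, y1))
        x += spacing
    y = y0 + spacing
    while y < y1:                       # left and right edges, corners skipped
        positions.append((x0, y))
        positions.append((x1, y))
        y += spacing
    return positions

def place_houses(patches, spacing, fits):
    """For each housing patch, place houses along its edge, keeping only the
    candidates that pass the fit test (a stand-in for 'fits completely in
    the patch?' and 'ground level enough?')."""
    placed = []
    for patch in patches:
        placed.extend(p for p in edge_positions(patch, spacing) if fits(p))
    return placed
```

For example, `place_houses([(0, 0, 10, 10)], 5, lambda p: True)` yields the eight edge positions of a 10x10 patch; a real `fits` predicate would consult the height map for ground slope.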
To avoid these problems we decided to use a texture-based approach. Given that we have the civilization image as an input, we can easily define a mask that differentiates road from non-road pixels. This mask is used in a shader to interpolate between different kinds of ground cover (sand, asphalt, etc.) that are represented by tiled samples. To avoid the typical tiled-floor look that many ground textures suffer from, we dynamically resize and rotate the road and earth tiles. The final components are the house shadows. To avoid the overhead and quality limitations of dynamic shadows in large scenes, we precalculate a shadow mask, based on the houses' footprints on the ground, that is used to darken the result of the previously described ground shader. Figure 6 shows the final scene: a) a multiview of the scene editor (note the urban areas corresponding to the colored areas in the initial sketch, the terrain height, and the roads); b) a close-up of the environment.

Scenario Editor and Execution

The management of the DSTS system is done from our scenario authoring tool, the Scenario Editor. It is a 2D GUI front end to create, place, and manipulate 3D objects, as well as to attach behaviors to those objects. Users can automatically generate base scenarios using the terrain-generation method explained above, or they can manually build the scenario in the editor. Both automated and manual scenarios can be refined by manually adding more details in the editor. We provide a library of 3D models, including virtual humans, to populate the scenarios and create the dynamic features of specific tasks assigned to the dismounted Soldier. Furthermore, the Scenario Editor can also be used as a live connection to the immersive simulator to launch, monitor, and manipulate the execution of the scenarios. Additional details about the Scenario Editor can be found in (Springer 2012).
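The texture-based ground blend described above amounts to a per-pixel mask lookup. The following is a CPU sketch of that logic with toy data; the real system performs it in a GPU shader, and all names and values here are made up for illustration.

```python
"""CPU sketch of the per-pixel ground blend the shader performs.

The road mask (1 = road, 0 = ground) selects between two tiled cover
samples; the precomputed shadow mask then darkens the result under the
houses' footprints. All data below is toy data."""

def sample(tile, x, y):
    """Sample a small tile texture with wrap-around (tiling)."""
    return tile[y % len(tile)][x % len(tile[0])]

def shade(x, y, road_mask, shadow_mask, asphalt, sand):
    """Blend road and ground covers by the mask, then apply shadows."""
    m = road_mask[y][x]                       # 0..1 road coverage
    color = m * sample(asphalt, x, y) + (1 - m) * sample(sand, x, y)
    return color * shadow_mask[y][x]          # darken under houses

# Toy 2x2 scene: left column is road, right column is sand,
# and the bottom-right pixel lies in a house shadow.
road_mask   = [[1.0, 0.0], [1.0, 0.0]]
shadow_mask = [[1.0, 1.0], [1.0, 0.5]]
asphalt = [[0.2]]                             # dark 1x1 grey tile
sand    = [[0.8]]                             # light 1x1 tile

img = [[shade(x, y, road_mask, shadow_mask, asphalt, sand)
        for x in range(2)] for y in range(2)]
```

The dynamic resizing and rotation of tiles mentioned above would correspond to transforming the `(x, y)` coordinates before sampling, which is what breaks up the repetitive tiled-floor look.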

makes it fairly straightforward to asynchronously execute the processing for those subsystems. The three major subsystems in our system are simulation, dynamics, and rendering. Simulation is tasked with evaluating the behaviors of dynamic objects as well as processing all logic objects. This produces a set of updates that are passed to the dynamics system, which performs basic physics simulation and collision detection. The dynamics system therefore either validates a simulation update and passes it on, or replaces it with a corrected update to be passed on instead. In either case the final updates are received by the rendering subsystem, which applies them to the visual scene representation.

For display on a cluster, only the visual representation needs to be distributed. By basing our implementation on OpenSG, a scene-graph system that has built-in support for cluster distribution, we achieve this consistency without additional implementation effort. When establishing a connection to an immersive cluster system, the Scenario Editor distributes the graphical representation of the scenario to the rendering nodes, so that a user in the immersive VE sees the exact same state of the scenario as the user operating the editor application.

Besides the editor application, we also provide a scenario viewer application that uses the exact same underlying software components, but with reduced interaction options that only allow running a scenario, not manipulating it. The intention is that this application can be used to run actual experiments and record the results once construction of a scenario is completed.

Currently, terrain generation is based on a height map and a civilization map. Our method can easily be extended to allow for adding more layers (e.g., vegetation, waterways), which can further enhance the fast and automatic generation of terrains. The software architecture currently provides an object hierarchy that is tailored for scenarios targeted at the military sector.
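The three-stage update flow (simulation proposes, dynamics validates or corrects, rendering applies) can be sketched as below. The update structures and the collision rule are invented for illustration; in the actual system the stages run asynchronously rather than in a single call chain.

```python
"""Minimal sketch of the simulation -> dynamics -> rendering pipeline.

Simulation proposes position updates for all dynamic objects, dynamics
validates each one against a trivial collision test (nothing may sink
below the ground plane, z >= 0) and corrects offenders, and rendering
applies whatever final updates it receives. All details are hypothetical.
"""

def simulate(objects):
    """Propose an update for every dynamic object (fall by 1 unit)."""
    return {name: (x, y, z - 1.0) for name, (x, y, z) in objects.items()}

def dynamics(updates):
    """Validate each update; replace invalid ones with corrected ones."""
    final = {}
    for name, (x, y, z) in updates.items():
        final[name] = (x, y, max(z, 0.0))     # clamp to ground plane
    return final

def render(scene, updates):
    """Apply the final updates to the visual scene representation."""
    scene.update(updates)

scene = {"soldier": (0.0, 0.0, 0.5), "crate": (2.0, 1.0, 3.0)}
render(scene, dynamics(simulate(scene)))
# soldier reaches the ground (z clamped to 0); crate keeps falling
```

Because each stage consumes only the previous stage's output, the stages can be decoupled with message queues and run on separate threads or nodes, which is what makes the asynchronous execution mentioned above straightforward.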
However, by using an ontology that provides a more general world view, we could support practically any use case. This change, while requiring some effort in changing the software infrastructure, would also open the framework to user-specified extensions of the object system.

Figure 8. Complex DSTS built in the Scene Editor

DISCUSSION AND CONCLUSIONS

The DSTS software framework and weapon system is in the process of being deployed at HRED, but we have been using it in our university facility for almost a year now. The design of the immersive system has proven to be very robust and stable. We have frequent visitors coming for demonstrations, and we have always been able to be ready for them. Calibration and alignment are simple and can be done quickly, which is another great advantage of our design. The perceived quality is better than we expected, and most users and visitors do not realize we are using a slightly lower resolution than most immersive projection systems.

We have presented our work on a next-generation dismounted Soldier training system. The system consists of a low-cost, high-quality immersive projection system combined with an omni-directional treadmill, flexible instrumentation of hardware for training weapons, and an open communication format between the instrumented weapon and a computer control system. Finally, we also presented a software architecture and prototype framework that provides a pipeline for constructing scenarios for experiments related to dismounted Soldier performance.

We have been using the Scenario Editor to build and run our own scenarios for several months now. Since our design is so flexible, we have been able to use it in a variety of projects beyond the dismounted Soldier project. Our psychology faculty collaborators have been able to build several scenarios for their cognitive experiments without problems. They are delighted to have the ability to manipulate the scene themselves and tweak the details as needed.
Figure 8 shows a more complex scenario we have built using the DSTS system.

The DSTS system provides a novel approach to scenario construction, manipulation, and execution for dismounted Soldier training. It simplifies and streamlines the task of scenario creation by providing a semi-automated framework as well as a live connection to the immersive simulation. During the actual training exercises, the DSTS system allows for monitoring and manipulation of the simulation, enabling researchers to introduce random events and keep track of the Soldier's performance. The use of off-the-shelf hardware components as well as the ability of
researchers to create and manipulate scenarios based on simple model building blocks makes for a flexible and cost-effective alternative to predefined static scenarios from third-party vendors.

FUTURE WORK

We plan to continue working on the DSTS system and enhancing its capabilities. We need to extend our scenario-generation algorithm to allow for road creation with western-style markings. We also want to extend our virtual humans' capabilities to incorporate autonomous behavior, in contrast with the scripted behavior supported now. Finally, we have to investigate ways to guarantee interactive performance even for very large and geometrically complex scenes, scenes with many thousands of objects (static as well as dynamic), and under rendering conditions that support a more realistic visual appearance of the scene (such as lighting and environmental conditions).

ACKNOWLEDGEMENTS

Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-07-2-0025. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. Special thanks to Donald Gremillion and Marsha Miller for their assistance and support of the project.

REFERENCES

Cruz-Neira, C., Reiners, D.P., and Springer, J.P. An Affordable Surround-Screen Virtual Reality Display. Journal of the Society for Information Display, 18(10):836-843, October 2010.

Deployable Virtual Training Environment (DVTE), Lockheed Martin. Retrieved June 14, 2011 from http://www.lockheedmartin.com/products/deployablevirtualtrainingenvironment/index.html

DepthQ projector, LightSpeed Design. Retrieved May 29, 2011 from http://www.depthq.com/

Fletcher, J.D. Education and Training Technology in the Military. Science, Vol. 323, No. 5910, 2009.

Fully Immersive Virtual Simulation Training System, Intelligent Decisions. Retrieved June 14, 2011 from http://www.intelligent.net

Future Immersive Training Environment (FITE), U.S. Army PEO STRI. Retrieved June 14, 2011 from http://www.peostri.army.mil/products/fite-jctd

Knerr, B.W., Lampton, D.R., Thomas, M., et al. Virtual Environments for Dismounted Soldier Simulation, Training, and Mission Rehearsal: Results of the FY 2002 Culminating Event. Technical Report 1138. United States Army Research Institute for the Behavioral and Social Sciences, September 2003.

Knerr, B.W. Current Issues in the Use of Virtual Simulation for Dismounted Soldier Training. In Virtual Media for Military Applications (pp. 21-1 to 21-12). Meeting Proceedings RTO-MP-HFM-136, Paper 21. Neuilly-sur-Seine, France: RTO.

Knerr, B.W. Immersive Simulation Training for the Dismounted Soldier. Report 2007-01. U.S. Army Research Institute for the Behavioral and Social Sciences, 2007.

Pair, J., Neumann, U., Piepol, D., and Swartout, B. FlatWorld: Combining Hollywood Set-Design Techniques with VR. IEEE Computer Graphics and Applications, January/February 2003.

Soldier Visualization Station (SVS), Advanced Interactive Systems. Retrieved June 14, 2011 from http://www.aissim.com/training_products/training_products_svs.htm

Springer, J.P., Neumann, C., Reiners, D.P., and Cruz-Neira, C. An Integrated Pipeline to Create and Experience Compelling Scenarios in Virtual Reality. In IS&T/SPIE Electronic Imaging 2011. SPIE, 2011.

U.S. Army Strategic Planning Guide 2005. Retrieved June 2, 2011 from http://www.armystudyguide.com/content/army_board_study_guide_topics/the_army_plan/army-strategic-planningg.shtml

Virtra 300ML, Virtra Systems. Retrieved June 14, 2011 from http://www.virtrasystems.com/military-training

Yuma Proving Grounds. Retrieved June 14, 2011 from http://www.yuma.army.mil