AUGGMED (Automated Serious Game Scenario Generator for Mixed Reality Training) D3.2 Report on the final version of the simulation environment


D3.2 Report on the final version of the simulation environment

Version: 4
Deliverable No.: D3.2
Workpackage No.: WP3
Workpackage Title: VR and MR Environments
Task No.: T3.1, T3.2
Activity Title: VR and MR Environments
Authors (per company, if more than one company provide it together): Jochen Meis (GEO), Lead Author
Status (F: final; D: draft; RD: revised draft): F
File Name: AUGGMED_D3_2 Report on Simulation environment_final.docx
Project start date, duration: 01 June 2015, 36 Months

Executive Summary

This deliverable constitutes the final report on the simulation environments used in the AUGGMED system. It provides detailed technical explanations of the various models used to simulate:

- Realistic venues as a backdrop for the serious game;
- Credible renderings of agents and non-playable characters;
- Crowd behaviour that approximates reality as closely as possible;
- Interactions between agents and members of the crowd;
- Explosions and their effects on agents, members of the crowd and the structure of the venue;
- Shooting and the injuries caused by it;
- Fire;
- Telecommunication capabilities;
- Haptic and thermal feedback to the human players using a tailor-made piece of hardware (referred to as a "haptic vest").

For each of the above points, we explain the underlying assumptions, the approach taken in AUGGMED and the final state of the implementation.

Contents

1 Overview
2 Simulations of 3D environments and infrastructures
  2.1 Provisioning and integration of CAD drawings for EXODUS
  2.2 Models of network infrastructures
3 Simulations of agents
  3.1 Simulation of agents in EXODUS
    Pilot 1: Simulation of agents in EXODUS
    Pilot 2: Simulation of agents in EXODUS
    Pilot 3: Simulation of agents in EXODUS
    Simulation of simple gestures impact
  Behavioural crowd modelling
    Modelling impact of explosive devices on simulated agents
    Modelling impact of explosive devices on structure
    Integrating explosion data with UNITY
    Modelling shooting injuries on simulated agents
  Representation methods of 3D Graphics for crowd and scene rendering
    Curves and Surfaces ([Prautzsch, 2002], [Shene, 1997])
    Real-time crowd rendering
4 Simulations of threats
    Fire Simulation
    Simulations of the telecommunication capabilities of security units
    Constraints and disruption caused by human intervention
5 Interfaces and Devices for Multimodal Interaction
    General possibilities of devices
    Integration of VR-Glasses
    Development of Haptic Vest
      Vest requirements
      Stimuli generator hardware in Vest
      Actuators distribution
    Haptic Vest Integration in Unity
6 Conclusions and final statement
References

List of tables

Table 1 FBD values and corresponding injuries
Table 2 Characteristics of vibrotactile actuators
Table 3 Characteristics of Peltier cell used during experiments

List of figures

Figure 1 The original basic EXODUS airport geometry (M=Main Entrance/Exit, E=Emergency Exit, G1-G6=Boarding Gates)
Figure 2 The updated EXODUS airport geometry complete with additional artefacts
Figure 3 Layout of Muntaner metro station used for Pilot 2
Figure 4 View of the Muntaner metro station in EXODUS
Figure 5 Layout of the PPA terminal
Figure 6 PPA structure layout as it appears within the EXODUS tool
Figure 7 Cruise terminal
Figure 8 Layout of the building
Figure 9 Measurement results
Figure 10 Specification of the hardware for measurement
Figure 11 The itineraries randomly assigned to agents entering via the two main entrances
Figure 12 The itineraries randomly assigned to agents upon arriving at the check-in desks (CIDs)
Figure 13 The itineraries randomly assigned to agents entering via the gates G1-G4
Figure 14 Pilot 2 geometry layout
Figure 15 Pilot 3 geometry layout highlighting the various parts of the terminal
Figure 16 Initial locations of agents shown in EXODUS population density mode
Figure 17 Popularity of paths taken by the agents during the circulation phase and prior to the run of the Pilot 3 scenarios
Figure 18 The influence areas when an agent tries to communicate verbally or visually with other agents
Figure 19 The probability of an agent following a command is a function of distance to the person issuing the command
Figure 20 Screen shot from EXODUS showing the agents that follow a command (coloured red) as opposed to those that do not (coloured grey). The agent issuing the command is situated at the centre of the yellow circle (coloured cyan)
Figure 21 The Threat and Awareness Radii Defining a Given Threat (i.e. Fire)
Figure 22 An Agent Exposed to a) Single, and b) Multiple Threats (i.e. Fires)
Figure 23 Two methods of calculating the threat area
Figure 24 Graphical representation of the FBD function, note that the FBD is capped when mobility reaches
Figure 25 Explosion injury radii
Figure 26 Damage radii caused by an explosion within a structure
Figure 27 Agent injury and structural damage rendering within UNITY (left: Trainer's view, right: Trainee's view)
Figure 28 Hypothetical and possible model for shooting injury effects on individuals
Figure 29 Comparison of Representation Methods [Beacco, 2015]
Figure 30 Hardware specification of FortiGate 140D-PoE
Figure 31 Technical characteristics of FortiToken
Figure 32 Technical characteristics of TVT TD2716-AE
Figure 33 Plan view of the airport terminal scenario used within pilot 1 as shown in SMARTFIRE (Possible fire locations shown in red)
Figure 34 Smoke spread at various stages within UNITY
Figure 35 The different hazard data output options for each zone for both EXODUS and UNITY
Figure 36 The location of the land side and air side fires
Figure 37 Fuel generation rate produced by SMARTFIRE
Figure 38 Smoke concentration and temperature levels as reported by SMARTFIRE at 300 seconds into the fire simulation
Figure 39 Spread of fire hazards as indicated by heat and smoke contours in EXODUS
Figure 40 Available input and output as an example for the categories
Figure 41 VR-glasses: Oculus Rift (left), HTC Vive (middle) and HTC Vive controller (right)
Figure 42 HTC Vive used for mixed reality (experimental version)
Figure 43 Vest Prototype
Figure 44 Motors placed on the vest
Figure 45 Peltier cells placed on the vest
Figure 46 Impact actuator
Figure 47 Distribution of vibration motors
Figure 48 Unity Interface for Tests
Figure 49 Vest integration in Unity

1 Overview

This document contains detailed information about the simulation environment for AUGGMED. The simulation environment was developed in close collaboration with the development of the serious game, with all components integrated continuously throughout the project. The simulation environment needs a model of the place where all activities take place; in addition, a physics model is needed to make the simulation more realistic. Smart devices such as the haptic vest deliver direct feedback from the virtual environment to the trainees. The geometry of the training areas (airport, station or harbour terminal) is defined within AUGGMED and exchanged between Unity and EXODUS. Sharing the same model is crucial for exchanging the positions of people, explosions and damage radii. Different training aspects can be exercised within each scenario, optionally supported by additional force-feedback devices, giving the trainee flexibility in the choice of devices and simulations. This deliverable provides the following content:

- Section 2: a description of the 3D environments and infrastructures provided by the simulation environment
- Section 3: a description of the simulation of the agents and fires with their specific behaviours, and of the explosion models
- Section 4: a list of threats forming part of the training sessions
- Section 5: an overview of devices for multimodal interaction and the MR/VR view
- Section 6: a conclusion and final statement

2 Simulations of 3D environments and infrastructures

The AUGGMED project aims to provide a serious game platform for the training of first responders reacting to terrorist and organised crime threats. The use of virtual and mixed reality technology allows users (i.e. players) to immerse themselves in the game, giving them an experience that is as close to reality as is currently technically feasible. Essential to this approach is the underlying 3D model of the venues in which the scenarios are played out (in particular airports and train stations). These venues constitute the environment in which the players move and with which they interact. In AUGGMED, it is crucial to provide both the game engine Unity and the crowd simulation software EXODUS with exactly the same geometric set-up; inconsistencies can lead to unrealistic behaviour of agents (such as walking through obstacles which are in the wrong place), thus detracting from the players' experience. The geometries of the environments are interchanged using CAD DXF files. In the following, we describe in some detail the geometries used for the pilots, the interchange of the geometry between the relevant modules and the set-up of the network infrastructure.

2.1 Provisioning and integration of CAD drawings for EXODUS

For Pilot 1 (i.e. the second version) an airport geometry was required of suitable complexity and size. The requirements were that it needed to be large enough to simultaneously hold at least several hundred people, with a structure clearly resembling that of an airport, namely with a clear separation of land side and air side, with movement between the two areas achieved via security control, with land side check-in desks, passport checks and baggage collection, and with waiting areas and shops located on both land and air side.
Since no partners were able to obtain CAD DXF files of suitable real airports due to security concerns, the decision was made in conjunction with AUGGMED partners to instead utilise a fictitious/hypothetical airport (measuring approximately 150m by 70m) that UoG had already modelled within EXODUS (see Figure 1). As a result of using this existing geometry, CAD DXF files were not directly used in the modelling of the basic geometry within EXODUS. Instead, the existing outline of the fictional EXODUS airport (defining the limits of the circulation space, i.e. walls, seats, desks, check-in queue separators etc.) was exported, using the export utility provided by EXODUS, to a single CAD DXF file, thereby enabling the basic structure to be circulated to AUGGMED partners, primarily to UoB who were responsible for reproducing the structure within UNITY. Information not available within the CAD DXF file was provided separately, detailing both the assumed height of the overall structure and the heights of all the lines within the DXF file. The DXF lines represent the various objects and components present within the structure. In this manner, the different heights of the walls, desks etc. could be accurately reproduced within UNITY, thereby ensuring consistency between the EXODUS and UNITY geometries.
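The hand-over described above can be sketched as follows: EXODUS exports the circulation-space outline as 2D DXF line segments, and the structure and per-line heights are supplied separately so the game engine can extrude matching 3D objects. The dataclass, layer names and height values below are illustrative assumptions, not the project's actual data format.

```python
# Sketch of turning a 2D outline plus a separate height table into 3D boxes.
from dataclasses import dataclass

@dataclass
class Segment2D:
    x1: float
    y1: float
    x2: float
    y2: float
    layer: str  # DXF layer, e.g. "WALLS" or "DESKS" (assumed naming)

# Heights (metres) supplied alongside the DXF file, keyed by layer (assumed).
LAYER_HEIGHTS = {"WALLS": 4.0, "DESKS": 1.1, "SEATS": 0.45}

def extrude(seg, heights=LAYER_HEIGHTS):
    """Turn a 2D outline segment into the 3D box a game engine would build."""
    h = heights.get(seg.layer, 1.0)  # fallback height for unknown layers
    return {"base": ((seg.x1, seg.y1, 0.0), (seg.x2, seg.y2, 0.0)),
            "top":  ((seg.x1, seg.y1, h),   (seg.x2, seg.y2, h))}

outline = [Segment2D(0.0, 0.0, 150.0, 0.0, "WALLS"),   # a 150 m outer wall
           Segment2D(10.0, 5.0, 14.0, 5.0, "DESKS")]   # a check-in desk
boxes = [extrude(s) for s in outline]
```

Keeping the height table in one shared place is what guarantees that a wall blocking agents in EXODUS is the same wall the players see in UNITY.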

Figure 1 The original basic EXODUS airport geometry (M=Main Entrance/Exit, E=Emergency Exit, G1-G6=Boarding Gates)

The CAD DXF file provided to AUGGMED partners consisted only of the basic overall outline of the structure (i.e. the circulation space), see Figure 1. As a result, the provisional geometry effectively represented an empty structure free from the typical artefacts that would be found within a real airport (i.e. seating benches, baggage/suitcases, luggage trolleys, the internal structure of the shops complete with shelves and stock etc.). To ensure that these types of realistic artefacts were present within the overall airport structure used within Pilot 1, the decision was made to enable UoB to add them where necessary within the UNITY geometry. The size and location of these added artefacts were then marked on the original CAD DXF file provided to AUGGMED partners, which was then sent back to UoG. The updated CAD DXF file was then loaded back into the original EXODUS version of the airport, in order to incorporate the airport-type artefacts into the EXODUS model (see Figure 2). In this manner, the updated airport geometry complete with furniture and fittings was accurately represented within EXODUS, once again ensuring consistency between the EXODUS and UNITY versions of the airport geometry, and hence ensuring that agents within EXODUS took into account the presence of the newly introduced artefacts.

Figure 2 The updated EXODUS airport geometry complete with additional artefacts.

For Pilot 2 the requirement was to select an appropriate underground station, part of the FGC network in Barcelona, that would provide adequate complexity by offering multiple escape routes and a central platform separating two metro lines, with at least two levels (i.e. platform level and ticket hall level). UOG examined several plans corresponding to the stations of Can Roca, Intercanviador, Vallparadis, Universitat Autonoma and Muntaner. The station selected was Muntaner (see Figure 3) as it met the criteria specified previously. The approximate dimensions of the station are 115m x 25m. Both ends of the platform level lead up to two completely independent ticket hall levels, each of which in turn links to street level. The platform level is linked to the ticket hall levels via staircases, escalators and lifts (these were not utilised during the Pilot 2 run). The ticket hall levels are linked to street level via staircases.

Figure 3 Layout of Muntaner metro station used for Pilot 2

Upon receiving the CAD diagram from FGC, UOG started building the Pilot 2 geometry in EXODUS. This required preparation of the CAD file by cleaning, a process which involved removing any artefacts that were deemed superfluous (text, door lines, directional or dimensional lines, etc.). The aim is to provide a clean layout of the structure that can then be meshed in EXODUS, so that the entire navigable space that the agents can use is represented. Preliminary EXODUS model tests, with basic and simplified scenarios, were performed to confirm consistency with the station plans. The updated DXF files of the metro station were then provided to UOB for integration within UNITY. Upon completion of this task a comparison was made to confirm that both the EXODUS and UNITY representations were accurate. Any differences were highlighted and rectified.
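The cleaning step described above can be sketched as a simple filter over the drawing's entities: keep only the entity types that define navigable space and drop annotation layers. The entity records, type set and layer names below are illustrative assumptions, not FGC's actual drawing conventions.

```python
# Sketch of "cleaning" a CAD plan: keep meshable geometry, drop annotation.
KEEP_TYPES = {"LINE", "ARC", "LWPOLYLINE"}             # geometry that gets meshed
DROP_LAYERS = {"TEXT", "DOORS", "DIMENSIONS", "FLOW"}  # assumed annotation layers

def clean(entities):
    """Filter a flat list of {'type', 'layer'} DXF entity records."""
    return [e for e in entities
            if e["type"] in KEEP_TYPES and e["layer"] not in DROP_LAYERS]

plan = [{"type": "LINE", "layer": "WALLS"},
        {"type": "TEXT", "layer": "TEXT"},        # label, removed
        {"type": "LINE", "layer": "DOORS"},       # door swing line, removed
        {"type": "ARC",  "layer": "WALLS"}]
cleaned = clean(plan)
```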
Given the two-level nature of the Pilot 2 geometry, it was necessary for UOG to enhance and calibrate the agents' movement in 3D space, such as when the agents use stairs and escalators. UOG incorporated updates highlighted by UOB. Views of the final representation within EXODUS are shown in Figure 4.

Figure 4 View of the Muntaner metro station in EXODUS

For Pilot 3 the PPA sea-port terminal was used to run the relevant scenarios. Due to the nature of the structure and its use, the circulation behaviour of the agents was in several ways very similar to that of Pilot 1. As Pilot 3 was the final exercise to be conducted, it incorporated all previously developed behaviours and features that were present in Pilots 1 and 2. While the passenger terminal is part of a larger multi-level structure, the focus for the Pilot 3 scenarios was only on the actual terminal, which consists of a relatively small part of the entire building. The layout of the sea-port terminal is depicted in Figure 5. The size of the passenger terminal is approximately 108m x 65m and it is split into three main areas. The west side corresponds to the main land-side entry area and incorporates a shopping area. The rest of the structure is then split into two main sections, the Arrivals and Departures sections. Both are designed for public use, the first being used by disembarked ship passengers while the second is used by those wanting to embark on a ship. Both areas incorporate a large number of offices and security facilities.

Figure 5 Layout of the PPA terminal

Upon receiving the CAD DXF files from PPA, the UOG team stripped and tidied the files by removing and modifying superfluous information such as door lines, flow direction information, shop fittings and other artefacts that were unrelated to the outline of the PPA structure. UOG built and meshed the EXODUS geometry and added the necessary features of the structure such as exits, check-in desks, security checks, etc. Some preliminary and basic scenarios were simulated in EXODUS to test the designed geometry for compliance with the DXF layout. The updated DXF file was then sent to UOB for building the UNITY model. UOB further modified the DXF file to better match photographic evidence collected during a technical visit to the PPA premises. UOB then sent the modified DXF file back to UOG for updating the evacuation model and for incorporating all fixtures and fittings that were not represented in the original DXF diagrams, such as shop shelves and products, queue barriers, minor modifications to the geometry, etc. This required the modification of the available space for passengers, location of queues, check-in desks, fittings within the shops, etc. Figure 6 shows a depiction of the final PPA structure within EXODUS.

Figure 6 PPA structure layout as it appears within the EXODUS tool

2.2 Models of network infrastructures

The communication infrastructure provided for the AUGGMED software platforms to exchange data is Wi-Fi. Modelling the radio network infrastructure and its wireless transmissions in complex indoor environments requires high-fidelity 3D models and advanced electromagnetic calculations. The indoor prediction models customised by INTEGRATION POWER can predict with a significantly high level of precision the received signal strengths and attenuation parameters of the actual Wi-Fi radio coverage footprint in the most demanding and challenging multi-room environments, which may contain walls made of different materials (e.g. concrete, wood, steel, composite) and walls of different thickness ranging from a few millimetres up to tens of centimetres.

Standard propagation and simulation models take into account parameters that are globally known as best practice and provide relatively accurate results. In the vast majority of cases, simulation of Wi-Fi radio propagation in open and outdoor spaces is accurate, with results typically within 87-95% of actual measurements. The challenge begins, however, when the simulation tries to predict, with typical parameter values, a non-typical multi-room indoor environment comparable to the sea ports and airports used in the AUGGMED exercises. The ambiguity, uncertainty and artefacts created by the multi-room environment, the materials, the wall structure, and the internal geometry and orientation of objects in the room are the most typical factors which lower the precision and accuracy of predictions to 40-60% of the actual measurements. In order to generate accurate Wi-Fi radio maps it is crucial to test the simulation models with digital floor plans of the points of interest. In our case the testbed for the radio communication infrastructure is the cruise terminal MIAOULIS of PPA, where the building is composed of a ground and a first floor with at least the following areas: passenger terminal, security checkpoints, stairs, escalators, offices, conference and exhibition rooms, reception desks, toilets, etc. The layout of the overall building where the cruise terminal is hosted is provided in Figure 7, while the layout of the building for the two different floors is provided in Figure 8.
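A minimal sketch in the spirit of the multi-wall indoor prediction discussed above: free-space path loss plus one attenuation term per traversed wall. The per-wall losses and the 2.4 GHz default are illustrative textbook values, not INTEGRATION POWER's calibrated model.

```python
# Free-space path loss plus assumed per-wall attenuation (dB).
import math

WALL_LOSS_DB = {"concrete": 12.0, "wood": 4.0, "steel": 20.0, "composite": 8.0}

def path_loss_db(distance_m, walls, freq_mhz=2400.0):
    """Free-space loss (d in metres, f in MHz) plus losses of traversed walls."""
    fspl = 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55
    return fspl + sum(WALL_LOSS_DB[w] for w in walls)

def rssi_dbm(tx_power_dbm, distance_m, walls):
    """Predicted received signal strength for one transmitter-receiver pair."""
    return tx_power_dbm - path_loss_db(distance_m, walls)
```

Comparing such predictions against the measured values collected during the PPA exercise is exactly the kind of check that would quantify the 40-60% indoor deviation mentioned above.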

Figure 7 Cruise terminal

Figure 8 Layout of the building

The simulation models will be executed in the iBwave graphical user interface, custom configured for the AUGGMED project with professional Wi-Fi radio network infrastructure equipment. The equipment selected and its hardware characteristics are (see Figure 9 and Figure 10):

- Wireless LAN access point FortiAP 222C, supporting 802.11ac technology with an association rate of up to 1.3 Gbps, enabling wireless mesh configuration, bridging or point-to-multipoint coverage. The system also supports Automatic Radio Resource Provisioning (ARRP) for throughput optimisation and Layer 7 application control in order to prioritise user traffic.
- X-POL antennas operating on the E plane and H plane.

Figure 9 Measurement results

Figure 10 Specification of the hardware for measurement

The proposed system design shall provide AUGGMED users with access to a security overlay featuring comprehensive security services and application control. A further element of the system design is the Wi-Fi channel management architecture known as Virtual Cell, which enables a number of compelling quality-of-service and quality-of-experience advantages compared to existing network infrastructures: voice and video mobility management thanks to zero-handoff roaming delays, as well as more reliable user connections due to real-time load balancing based on the actual traffic of every application. This implementation enables mission-critical traffic to be isolated on dedicated spectrum, which can be considered a physical form of internal segmentation. This approach provides the highest level of performance and significant protection against cyberattacks. The proposed communication infrastructure has extra memory and twice the processing power of typical thin access points, and is therefore capable of executing real-time security processing at the access point level. The custom configuration of the access point firmware provided by INTEGRATION POWER makes it possible to identify individual AUGGMED users, their devices and the applications they are using. Access to the communication infrastructure will be provided to AUGGMED users through captive portals. Furthermore, network traffic analysis will allow AUGGMED information exchange to be discriminated into secure and non-secure traffic by separating access into different VLANs, which will eliminate the possibility of cyber-attacks from the LAN or WAN environment. During the PPA exercise the AUGGMED project will conduct a comparison of prediction results against real measurements for several areas inside rooms and other indoor locations. The results obtained from the real measurements will let AUGGMED partners know exactly the deviation with regard to performance and signal coverage footprint.

3 Simulations of agents

3.1 Simulation of agents in EXODUS

Within the AUGGMED platform the movement of the crowd (i.e. non-user-controlled avatars) throughout the structure during the scenario is controlled by EXODUS. EXODUS is an evacuation and circulation modelling tool, part of the family of micro-simulation tools in which simulated agents are modelled individually, each having distinct attributes, characteristics and abilities. The modelled geometry, i.e. the domain in which the simulated agents can navigate, is discretised into a fine nodal grid [GALEA2016]. Furthermore, EXODUS is rule-based, as it uses a set of rules or heuristics to simulate human behaviour. Some of these rules are stochastic (e.g. the outcome of conflict resolution between agents) while others are deterministic and are based on data collected from live trials (e.g. usage of gates, response times, etc.) or from the research literature (e.g. travel speeds on stairs). EXODUS incorporates various types of adaptive behaviour such as smoke avoidance, exit selection, congestion avoidance, itineraries, signage interaction, communication between agents, use of lifts/escalators/travelators etc. The data that governs the movement of agents within the EXODUS model is typically derived from the literature, from experiments or from studies of real events or incidents. It was assumed that any terrorist attack on the airport used within Pilot 1 or the sea port building used within Pilot 3 would occur while people were routinely moving throughout the structure. Consequently, it was necessary to represent the general circulation movement of agents within EXODUS as accurately as possible. Being able to realistically model this type of behaviour is one of the main benefits of the training platform.
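Two of the mechanisms named above, movement on a fine nodal grid and stochastic conflict resolution between agents, can be sketched as follows. The greedy movement rule and the random tie-break are simplifications for illustration, not the actual EXODUS heuristics.

```python
# Sketch: agents on a nodal grid, contested nodes resolved stochastically.
import random

def step_towards(pos, target):
    """Greedy move to the 4-neighbour grid node that shrinks the distance."""
    x, y = pos
    tx, ty = target
    if x != tx:
        return (x + (1 if tx > x else -1), y)
    if y != ty:
        return (x, y + (1 if ty > y else -1))
    return pos  # already at target

def advance(agents, targets, rng=random):
    """One time step; a contested node is awarded to one random claimant."""
    claims = {}
    for i, pos in enumerate(agents):
        claims.setdefault(step_towards(pos, targets[i]), []).append(i)
    new_positions = list(agents)
    for node, claimants in claims.items():
        winner = rng.choice(claimants)  # stochastic conflict resolution
        new_positions[winner] = node    # losing claimants stay put this step
    return new_positions
```

Because the tie-break is random, two runs from identical initial conditions can diverge, which is the stochastic element the text attributes to conflict resolution.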
Circulation behaviour includes agents entering the structure, waiting, visiting shops, queuing at check-in desks, queuing to pass through security and passport control, collecting baggage and finally exiting the structure, either via the main entrance or alternatively by queuing in order to board a plane in the case of the airport in Pilot 1 or a cruise ship in the case of the sea port in Pilot 3. It is worth noting that behaviours developed for the purposes of Pilot 2, which involved an underground station, are also present in the final version for Pilot 3, so those developments will be discussed alongside the behaviours developed specifically for Pilot 3.

Pilot 1: Simulation of agents in EXODUS

The circulating population was generated at run time, representing the normal use of an airport where individuals continuously enter or leave the structure. The scenario was set up in such a way that the simulated agents were generated at the required entry points at user-defined rates, and for user-defined periods of time. In all cases, the agents entering the structure were assumed to be drawn from the default EXODUS population consisting of several sub-populations, differentiated according to age and gender, with each sub-population also representing a different percentage of the overall population. Each agent, upon entering the structure, was assigned a predefined itinerary, instructing them to go to a given location (e.g. check-in desk, shop, etc.) and perform a given action (i.e. wait, queue etc.). Once agents had arrived at these locations they could then in turn pick up additional itineraries, instructing them to go to a different location and perform another action, and so on. In this manner, the movement of agents throughout the structure effectively followed a probabilistic (i.e. Bayesian) decision tree, thereby ensuring that the movement of agents throughout the structure was not the same every time the simulation was run, and hence not repetitive or predictable by any users playing as either a blue or red team member.

Agents entering via the two main entrances were assigned either to go to one of the five waiting areas in front of the check-in desks (W1-5), or alternatively to go to the land side bar or shops (see Figure 11). Agents who were assigned to either the bar or one of the two shops went to their assigned location and waited (thereby simulating the agents shopping/drinking etc.), before then in turn moving to one of the five land side waiting areas (W1-5). Agents who were assigned to go to one of the five waiting areas (either directly or after visiting the bar/shops) moved to their designated location and waited there, before in turn being assigned to go to one of the sixteen available check-in desks (CIDs).

Figure 11 The itineraries randomly assigned to agents entering via the two main entrances.

Agents assigned to one of the available check-in desks will then queue there until they reach the front of the queue and are served. Upon reaching the front of the queue agents will once again wait (thereby simulating the agent checking in etc.), before in turn moving to, and queuing if necessary at, the passport/security desks. Once at the front of the security queue agents will then move to one of the five security desks where they will again wait (thereby simulating their luggage being scanned etc.). Agents will then move over into air side where they will in turn be assigned either to one of the nine air side waiting areas (W1-9), or to one of the four available air side shops (see Figure 12). Agents who have been assigned to one of the nine air side waiting areas (W1-9) will once again wait there, before being assigned to one of the three available departure gates (i.e. gates 5, 6 or 7), or to one of the four air side shops (see Figure 12). Agents assigned to the air side shops will go there and wait (thereby simulating the agents shopping), before being assigned to one of the nine air side waiting areas, or directly to one of the three available departure gates (i.e. gates 5, 6 and 7). Once agents are assigned to a given departure gate, they will queue up at it, before then exiting through it once it becomes available (i.e. for a 10 minute period every half an hour).
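The itinerary chaining described above can be sketched as a weighted decision tree: at each location the agent draws its next destination from a probability table. The location names and weights below are illustrative assumptions, not the calibrated EXODUS values.

```python
# Sketch of probabilistic itinerary assignment for the land-side flow.
import random

# current location -> [(next location, weight), ...]  (weights are assumed)
ITINERARIES = {
    "entrance":  [("wait_area", 0.6), ("shop", 0.25), ("bar", 0.15)],
    "shop":      [("wait_area", 1.0)],
    "bar":       [("wait_area", 1.0)],
    "wait_area": [("check_in", 1.0)],
    "check_in":  [("security", 1.0)],
    "security":  [("air_wait", 0.7), ("air_shop", 0.3)],
    "air_wait":  [("gate", 0.8), ("air_shop", 0.2)],
    "air_shop":  [("gate", 0.5), ("air_wait", 0.5)],
}

def walk(rng=random):
    """One agent's randomly drawn path from land side entry to a gate."""
    path, loc = ["entrance"], "entrance"
    while loc != "gate":
        options = ITINERARIES[loc]
        loc = rng.choices([nxt for nxt, _ in options],
                          weights=[w for _, w in options])[0]
        path.append(loc)
    return path
```

Because each draw is independent, repeated runs produce different crowd movement, which is exactly the non-repeatability property the text attributes to the itinerary mechanism.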

Figure 12 The itineraries randomly assigned to agents upon arriving at the check-in desks (CIDs).

In addition to agents entering the airport at land side, agents are also assumed to arrive at air side, as the result of passengers disembarking from arriving planes. These agents are assumed to arrive air side via four available arrival gates (i.e. gates 1-4, see Figure 13). Upon arriving via one of the four arrival gates, agents are assigned either to one of the nine air side waiting areas (in order to represent agents merely transferring between planes), or to the passport control area. Agents assigned to one of the nine air side waiting areas then behave in the same manner outlined previously, namely they will once again wait there, before either being assigned to one of the three available departure gates (i.e. gates 5, 6 or 7, see Figure 12), or alternatively to one of the four air side shops. Agents assigned to go to passport control will move to the assigned room before queuing up to go through passport control via one of the five available queues. Agents will queue until they reach the front of the queue, whereupon they will again wait (thereby simulating the time taken by the passport officer to check their credentials etc.). Once through passport control, agents will then move to the baggage collection area, where they will once again wait (thereby simulating the agent collecting their baggage from the luggage carousel). Once these agents have collected their baggage they will be assigned to exit the airport via one of the two main entrances on land side.

Figure 13 The itineraries randomly assigned to agents entering via the gates G1-G4.

Pilot 2: Simulation of agents in EXODUS

The Pilot 2 geometry represented the Muntaner underground station in Barcelona. The layout of the structure is shown in Figure 14. It consists of a platform level with two train lines passing through and a central platform separating the two train lines. At both ends of the platform level, stairs and escalators link to the upper ticket hall levels. These are two independent constructions, both leading further up to street level.

Figure 14 Pilot 2 geometry layout

Unlike the other two Pilots (i.e. Pilots 1 and 3), a circulating population was not defined within Pilot 2. Consequently no flow rates into the structure were defined for this scenario; instead, all agents were simply placed on the three platforms prior to the simulation commencing. The three population levels are listed below:

- Not Busy: 115 agents are present within the structure (i.e. on the platforms): 60 on the central platform, 20 on Platform 1 and 35 on Platform 2.
- Busy: 172 agents are present within the structure: 90 on the central platform, 30 on Platform 1 and 52 on Platform 2.
- Very Busy: 230 agents are present within the structure: 120 on the central platform, 40 on Platform 1 and 70 on Platform 2.

In each case the agents initially located on Platforms 1 and 2 were given long response times to ensure that they remained stationary once the simulation started, thereby replicating the behaviour of individuals waiting for trains. In contrast, the agents initially located on the central platform were given response times of 0 seconds, ensuring that they would start heading for the nearest exits as soon as the simulation started. The behaviour of these agents was intended to represent agents who had just disembarked from a train and were therefore attempting to exit the station. Two scenarios were run during Pilot 2: (1) a hot-bag scenario, and (2) a post-explosion scenario. For scenario 1 the trainer loaded the underground station geometry and placed the hot bag(s) at various locations within the train station. The hot bags represent suspect packages that require staff members to investigate. Once a staff member arrived at a suspect package, they then had the capability to open it within UNITY and determine whether the contents were suspicious.
In cases where the contents were deemed to be suspicious, staff members could then decide to initiate and manage the evacuation of the station by instructing the agents to move away from the suspect package and towards the exits. Scenario 2 represents a post-explosion scenario. Initially the scenario is run for 30 seconds, allowing agents to start moving towards the exits. This time period is sufficient to allow a small crowd to develop around the base of the escalators. Detonating a bomb at the base of an escalator at this point represents a possible worst case scenario and provides the initial conditions for scenario 2. Accordingly, at 30 seconds a 2kg bomb is detonated at the base of the escalator (see Figure 14). The explosion injures and kills a number of agents and causes damage to the station. The simulation then continues to run for a further 8 minutes, representing the arrival time of the security and emergency bodies and thus producing the initial conditions for scenario 2. This time period was specified by FGC and is meant to represent the expected arrival time at the station of the security and emergency bodies such as the firefighters, police and medical personnel, as well as the start of the process for the emergency bodies to locate and attend to injured agents. During this time most of the agents who are mobile will have started the evacuation process. The agents attempt to evacuate by moving away from the epicentre of the explosion; however, some mobile agents locate those agents who are immobile and unable to move and try to provide

assistance. The trainees are then added to the simulation with the task of managing the evacuation, managing the injured but mobile agents, and locating the immobile agents, performing triage and categorising them based on their injury level.

Pilot 3: Simulation of agents in EXODUS

The Pilot 3 scenarios are conducted within a sea port structure (see Figure 15). Due to the nature of the structure, it uses a circulation behaviour pattern similar to the one utilised in Pilot 1. As Pilot 3 is the final exercise to be conducted, it incorporates all previously developed EXODUS behaviours and features that were present in both Pilot 1 and Pilot 2.

Figure 15 Pilot 3 geometry layout highlighting the various parts of the terminal

Three scenarios were examined for Pilot 3: (1) terrorists attacking the departure area, injuring or fatally wounding people with guns, knives and hand grenades; (2) terrorists attacking the departure and arrivals areas, injuring or fatally wounding people with guns, knives, hand grenades and a 5kg bomb; (3) terrorists attacking the port building with a 100kg bomb and then proceeding to attack the departure area with guns, knives, hand grenades and a 5kg explosive hidden within a rucksack. As with Pilot 1, the agents within the model are generated during a pre-scenario simulation run. The simulation is run to reach the appropriate initial conditions in terms of population number and distribution within the structure, which for the very busy population scenario takes approximately 1 hour 25 minutes of simulation time (see Figure 16). The base circulation scenario represents agents entering, moving through and leaving the structure (see Figure 17). Two main flows are modelled:

Departures: The agents enter the building from the land side through the two available entrances.
They can then proceed first to the shops located within the shopping area or directly to the departures area. Within the departures area they queue in front of the check-in desks to complete the check-in process. They then move towards passport control, where they may again queue for the check to take place. From there they continue through the security check, after which they can either visit the duty free shop or exit the building on the sea side to board a ship.

Arrivals: The agents enter the building from the sea side and proceed towards arrivals. They first encounter passport control, where they queue for the check to take place. From there they continue into the Arrivals hall, where they may move towards luggage collection or exit the Arrivals hall and

enter the shopping area. Once they complete their shopping they will exit the port via the main exits. The circulation pattern utilised while the state of the population reaches the initial conditions for the training to start is shown in Figure 17. At each point where the agents have to queue they incur delays representing the average amount of time needed for the relevant tasks to be performed, e.g. the check-in process, passport check, security check, shop browsing or shopping, etc.

Figure 16 Initial locations of agents shown in EXODUS population density mode

Figure 17 Popularity of paths taken by the agents during the circulation phase and prior to the run of the Pilot 3 scenarios

Three levels of population density are available for Pilot 3: (a) Very Busy with 500 passengers, (b) Busy with 350 passengers and (c) Not Busy with 200 passengers.
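The queue-delay mechanism described above can be sketched as follows; the task names and mean delay values are illustrative assumptions, not the calibrated figures used in the Pilot 3 EXODUS model:

```python
import random

# Hypothetical mean task delays in seconds; illustrative values only,
# not the calibrated figures used in the EXODUS Pilot 3 model.
TASK_DELAYS = {
    "check_in": 90.0,
    "passport_control": 30.0,
    "security_check": 45.0,
    "shop_browsing": 120.0,
}

def itinerary_duration(tasks, jitter=0.2):
    """Total time an agent spends on its queued tasks, with a
    +/- `jitter` fraction of random variation around each mean delay."""
    total = 0.0
    for task in tasks:
        mean = TASK_DELAYS[task]
        total += random.uniform(mean * (1 - jitter), mean * (1 + jitter))
    return total

# A departures itinerary: check-in, passport control, security check.
departure = ["check_in", "passport_control", "security_check"]
print(round(itinerary_duration(departure), 1))
```

In the real model each agent's itinerary and delays are assigned during the pre-scenario circulation run; this sketch only illustrates how per-task delays accumulate into an agent's transit time.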

3.2 Simulation of simple gestures impact

The EXODUS software was updated to allow simulated agents controlled by a user to issue commands to the general population. For the purposes of the Pilot scenarios this meant that blue or red team members could issue commands to the circulating public. The commands issued by user-controlled agents can be either vocal or hand gestures, and their purpose is to modify the behaviour of the circulating civilian population. The commands that were implemented include:

1) GET DOWN!
2) GET UP!
3) STOP!
4) GO!
5) EVACUATE!
6) GET OUT OF THE WAY!
7) MOVE OVER THERE!
8) EVACUATE IN THAT DIRECTION!

These could be issued either verbally (i.e. a voice command), visually (i.e. a hand gesture) or both. The zone of influence for a voice command is assumed to be a circle centred on the agent issuing the command (see Figure 18), so even agents located behind the issuing agent are able to hear the command. The effective radius of the voice command is user modifiable, with a default value of 8m. The zone of influence for a hand gesture is assumed to be a semicircle centred on the agent issuing the command (see Figure 18); only agents located in front of the issuing agent are able to see the command. The effective radius of the hand gesture command is also user modifiable, with a default of 15m. These radii represent the maximum distances at which agents can clearly hear and comprehend the commands. It is important to note that the calculation of the influence and control areas of a command takes into account the obstacles/walls present within the structure. Consequently, agents within the corresponding hand gesture and vocal command radii may not be able to comprehend the command if obstacles/walls exist between themselves and the individual issuing the command.
It is also worth noting that no actual data currently exist to describe this type of phenomenon; however, the model is flexible enough to accommodate appropriate data once they become available.
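The circular and semicircular influence zones described above, together with the smoke-based cap on their radii discussed later in this section (Jin's V = B/K relation), can be sketched as follows; the function names are hypothetical and obstacle/wall blocking is omitted:

```python
import math

def max_visibility(extinction_coeff, brightness=2.0):
    """Jin's relation V = B/K: maximum distance (m) at which an object
    of brightness B is visible in smoke of extinction coefficient K (K > 0)."""
    return brightness / extinction_coeff

def in_voice_zone(issuer, agent, radius=8.0, smoke_k=None):
    """Voice commands: full circle around the issuer (default 8 m),
    capped at the smoke visibility distance when smoke is present."""
    if smoke_k:
        radius = min(radius, max_visibility(smoke_k))
    return math.dist(issuer, agent) <= radius

def in_gesture_zone(issuer, facing, agent, radius=15.0, smoke_k=None):
    """Hand gestures: semicircle in front of the issuer (default 15 m).
    `facing` is a unit vector; the agent is 'in front' when the dot
    product with the issuer-to-agent vector is positive.
    Obstacle/wall blocking is omitted from this sketch."""
    if smoke_k:
        radius = min(radius, max_visibility(smoke_k))
    dx, dy = agent[0] - issuer[0], agent[1] - issuer[1]
    return math.hypot(dx, dy) <= radius and facing[0] * dx + facing[1] * dy > 0.0

# An agent 5 m behind the issuer hears the voice command but cannot
# see the gesture; dense smoke (K = 0.5 -> V = 4 m) silences both.
print(in_voice_zone((0, 0), (-5, 0)))               # True
print(in_gesture_zone((0, 0), (1, 0), (-5, 0)))     # False
print(in_voice_zone((0, 0), (-5, 0), smoke_k=0.5))  # False
```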

Figure 18 The influence areas when an agent tries to communicate verbally or visually with other agents

In instances where fire hazards are present, the maximum radii of both the voice and hand gesture influence areas are capped at the maximum visibility distance afforded by the smoke levels. While it is unlikely that smoke would significantly reduce the distance at which vocal commands can be heard, it is assumed that agents are unlikely to obey a vocal command if they cannot clearly see who is issuing it. Within EXODUS the maximum visibility distance is calculated according to the work of Jin [JIN1979, JIN1989] using the following formula:

V = B/K

Where: V is the visibility of an object in metres, K is the extinction coefficient of the smoke in the environment, and B is a constant representing the brightness of the object in question. For walls, doors, floors and furniture, B is typically set to a value of 2. Within EXODUS the smoke extinction coefficient K is assumed to correspond to the current smoke density at head height at the location where the agent is issuing the command. As the smoke density at that location increases, the range over which the command can be comprehended, and hence acted upon, is reduced. To determine the effectiveness of the commands (i.e. the likelihood that agents will obey them) EXODUS uses a compliance probability value. Within EXODUS the likelihood that an agent within the influence area will obey a command depends upon two factors:

1. Their distance from the agent issuing the command
2. The number of other agents currently obeying the command

To achieve this functionality within EXODUS, a theoretical model was developed to represent all these behaviours.
It allows the probability of an agent following a command to be calculated in a time-dependent manner (probability/sec), for both the hand gesture and the vocal command, for each agent based on their distance from the agent issuing the command (see Figure 19). Within EXODUS the user can define the probability/sec that an agent located 0m away from the issuing agent will obey the instruction. Similarly, the user can define the probability/sec that an agent located at the extreme edge of the influence area (i.e. at the maximum radius from the issuing agent) will obey the command. The probability that a given agent within the influence area will obey the command is then determined by linear interpolation based on their distance from the issuing agent (see Figure 19). Hence, the closer an agent is to the agent issuing the command, the greater the likelihood that they will follow it. If an agent issues a hand gesture and a vocal command simultaneously, the probability of an agent obeying is assumed to correspond to the sum of the hand gesture and vocal command probabilities.
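The linear interpolation of the per-second compliance probability, and the summing of voice and gesture probabilities, can be sketched as below; the default probability values are illustrative assumptions, not the EXODUS defaults:

```python
def prob_per_sec(distance, max_radius, p_at_0m, p_at_max):
    """Linear interpolation of the per-second compliance probability
    between its user-defined values at 0 m and at the edge of the
    influence area; zero beyond the edge."""
    if distance > max_radius:
        return 0.0
    t = distance / max_radius
    return p_at_0m + t * (p_at_max - p_at_0m)

def combined_prob_per_sec(distance,
                          voice=(8.0, 0.5, 0.1),
                          gesture=(15.0, 0.4, 0.05)):
    """Voice and gesture issued together: the two probabilities are
    summed (capped at certainty). Tuples are (max_radius, p_at_0m,
    p_at_max); the numeric values are illustrative assumptions."""
    p = prob_per_sec(distance, *voice) + prob_per_sec(distance, *gesture)
    return min(p, 1.0)

print(prob_per_sec(4.0, 8.0, 0.5, 0.1))  # halfway out: ~0.3
print(combined_prob_per_sec(0.0))        # 0.5 + 0.4 = ~0.9
```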

Figure 19 The probability of an agent following a command is a function of distance to the person issuing the command

The default values used to determine the probability of following the commands (i.e. the probabilities/sec at 0m and at the maximum radius) were selected based upon assumptions and estimates that generated behaviours that appeared to be the most realistic. It is important to note that these values are not based on real data, but the system is flexible enough to accommodate real data as these become available. User-defined data can also be specified to make the agent issuing the commands more or less assertive. As mentioned earlier, the likelihood of an agent obeying a command is also dependent upon the number of other agents currently obeying it. The influence of other agents on an agent's decision to obey a given command is represented by a Compliance Factor, which effectively determines how much more likely an agent is to obey the command. The more agents obey the command, as determined by the distance-based probabilities, the more the Compliance Factor increases, until it reaches a maximum value when all agents within the influence zone obey the command. When no agents within the influence areas are obeying the command, the Compliance Factor is set to 1, so the agents in this case are no more likely to obey it. However, when all agents in the influence areas are obeying the command, the Compliance Factor is set to a user-defined maximum value, which by default is equal to 3, thereby assuming that the agents are three times more likely to obey. Once again, it is important to note that these values are not based on real data; however, the system is flexible enough to accommodate real data as these become available.
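The Compliance Factor scaling can be sketched as follows; the report only fixes the end points (1 when nobody obeys, 3 by default when everyone obeys), so the linear increase in between is an assumption of this sketch:

```python
def compliance_factor(fraction_obeying, max_factor=3.0):
    """Scales from 1.0 (no agents in the influence area obeying) up
    to `max_factor` (all obeying). A linear increase is assumed here;
    only the two end points are given in the text."""
    return 1.0 + fraction_obeying * (max_factor - 1.0)

def effective_prob_per_sec(base_prob, fraction_obeying, max_factor=3.0):
    """Distance-based probability boosted by the Compliance Factor,
    capped at certainty."""
    return min(base_prob * compliance_factor(fraction_obeying, max_factor), 1.0)

print(compliance_factor(0.0))            # 1.0 - nobody else obeying
print(compliance_factor(1.0))            # 3.0 - everyone obeying
print(effective_prob_per_sec(0.2, 0.5))  # 0.4 - half the crowd obeying
```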

Figure 20 Screenshot from EXODUS showing the agents that follow a command (coloured red) as opposed to those that do not (coloured grey). The agent issuing the command (coloured cyan) is situated at the centre of the yellow circle

Both red and blue team members can initiate a combined hand gesture and vocal command at any given time within the simulation by pressing the number key on their keyboard corresponding to the given command (i.e. 1 = GET DOWN, 2 = GET UP, 3 = STOP, etc.). Hand gestures and vocal commands are assumed to be issued at the same time, since it was felt that providing separate buttons to initiate individual types of commands (i.e. hand gesture or vocal command) would overcomplicate the user controls. Once a given command is issued by a red or blue team member, a message is sent from UNITY to EXODUS that includes information regarding which agent issued the command (as defined by their unique agent ID), their location (i.e. x, y and z coordinates), the direction in which the agent is facing, the time duration (in seconds) for which the command is issued and the type of command being issued (i.e. get down, get up, stop, go, etc.). The message also includes information defining how the command is to be issued, such as the maximum radii for both the hand gesture and the vocal command, the probabilities/sec that an agent will obey the command at both 0m and at the maximum radii, and the Compliance Factor value applied when all agents within the influence areas are obeying the command. Once EXODUS has this information it calculates the corresponding influence areas for both the hand gesture and the vocal command, before attempting to issue the command to the corresponding agents within the influence areas (see Figure 20).
Once again, it is important to note that the probability of an agent obeying the command is based upon their distance from the agent issuing the command and the percentage of other agents within the influence areas that are already obeying it. Hence, agents are not guaranteed to obey any command issued, especially if they are located a large distance from the issuing agent and no other agents have yet obeyed it. Throughout the duration that the command is being issued, UNITY also sends updates to EXODUS regarding the direction in which the agent is facing. In this manner, as the agent changes the direction they are facing within UNITY, this is conveyed to EXODUS and the corresponding influence areas are updated accordingly. The agent will continue to issue the command within EXODUS for the time duration specified. Once this has expired the agent will not issue any further commands until instructed to do so by UNITY.
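The contents of the UNITY-to-EXODUS command message described above can be sketched as a simple data structure; the field names, default values and JSON encoding are illustrative, not the actual AUGGMED wire format:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CommandMessage:
    """Fields mirror the message contents described in the text;
    names and defaults are illustrative assumptions."""
    agent_id: int            # unique ID of the issuing agent
    position: tuple          # (x, y, z) coordinates
    facing_deg: float        # direction the agent is facing
    duration_s: float        # how long the command is issued for
    command: str             # e.g. "GET_DOWN", "STOP", "EVACUATE"
    gesture_radius_m: float = 15.0
    voice_radius_m: float = 8.0
    prob_at_0m: float = 0.5
    prob_at_max_radius: float = 0.1
    max_compliance_factor: float = 3.0

msg = CommandMessage(agent_id=42, position=(12.0, 0.0, 3.5),
                     facing_deg=90.0, duration_s=10.0, command="EVACUATE")
print(json.dumps(asdict(msg)))
```

Subsequent facing-direction updates from UNITY would only need to resend the `agent_id` and `facing_deg` fields.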

3.3 Behavioural crowd modelling

In addition to agents responding to hand gestures and vocal commands issued by both red and blue team members, the model (i.e. EXODUS) was also extended to enable them to respond to various threats within their environment. This included agents being able to respond to emergency alarms (effectively telling them to commence their evacuation of the structure), in addition to agents responding to static threats within their immediate vicinity, namely fires. Previously within EXODUS agents had no concept of direct threats (i.e. fires). As a result, when confronted by a threat, agents would not react to it or change their behaviour in any way (i.e. they would not redirect away from the immediate threat). This could have resulted in unrealistic agent behaviour within the Pilots, whereby agents could have failed to move away from a terrorist (dynamic threat) or a fire started by a terrorist (static threat), or even worse, actually walked straight towards a threatening terrorist or fire. To alleviate this problem, both the ability to represent threats and the resulting agent threat avoidance behaviour were added to EXODUS. Within EXODUS static threats are defined by providing both the corresponding location of the threat (i.e. its x, y and z location) and the type of threat being represented at that location. A dynamic threat is defined similarly; however, it can move through the modelled structure, representing for example a moving, armed terrorist. Each threat is assumed to comprise two radii, namely an inner Threat Radius and an outer Awareness Radius (see Figure 21), which collectively define the size/severity of the threat, and hence the range over which it will affect the population. The larger outer Awareness Radius defines the area within which agents are aware of the threat but do not feel in immediate danger.
Agents within this area will abandon their current tasks and instead seek to evacuate the structure via the nearest safe exit, following a path that does not pass through a corresponding threat area (i.e. a Threat Radius). The inner Threat Radius defines the area within which agents feel in immediate danger. Consequently, any agents within this area will immediately drop their current tasks and attempt to distance themselves as much as possible from the threat. If, as they distance themselves from the threat, they enter the outer Awareness Radius, they start evacuating the structure via the nearest safe exit, following a path that does not pass through a corresponding threat area.

Figure 21 The Threat and Awareness Radii Defining a Given Threat (i.e. Fire)

Within EXODUS the areas represented by a given threat (i.e. the threat and awareness areas) are calculated in the same manner as the control areas for hand gestures and vocal commands. When an agent is in a threat zone (i.e. the orange circle, see Figure 22), the direction in which they will need to head in order to avoid the threat is determined by the summation of:

1) The vector from the centre of the threat to the agent (i.e. the blue arrow in Figure 22a), and
2) The vector defining the agent's current direction of travel (i.e. the green arrow in Figure 22a).

In each case only the unit vector is considered (i.e. the magnitude of each vector is ignored). The summation of these two vectors (i.e. the red arrow in Figure 22a) indicates the new direction of travel for the agent. In instances where a given agent is in the threat zone of more than one threat, the vector from each threat is considered in the agent's updated direction of travel (see Figure 22b).

Figure 22 An Agent Exposed to a) Single, and b) Multiple Threats (i.e. Fires)

In the case of arson, whenever the user (i.e. a red team member) starts a fire, a message is automatically sent from UNITY to EXODUS telling EXODUS to start the specific fire that the trainer had previously selected for the current scenario. Once EXODUS receives this message it automatically adds an appropriate static threat at the fire's corresponding location. Since no data relating to the likely Threat and Awareness Radii resulting from fires are available, these values were assumed. Consequently, a 4MW fire was assumed to have Threat and Awareness Radii of 6m and 10m respectively, while an 8MW fire was assumed to have Threat and Awareness Radii of 9m and 20m respectively. Using this approach, whenever a fire was started within UNITY an appropriate threat was added at its corresponding location, thereby resulting in agents taking evasive action from the threat.
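The unit-vector summation used to derive an agent's avoidance direction can be sketched as follows (hypothetical function names; 2D positions assumed):

```python
import math

def unit(v):
    """Unit vector of a 2D vector (zero-safe)."""
    m = math.hypot(v[0], v[1])
    return (v[0] / m, v[1] / m) if m else (0.0, 0.0)

def avoidance_direction(agent, travel_dir, threats):
    """New heading: the unit vector of the agent's current direction
    of travel plus one unit vector per threat pointing from the threat
    centre towards the agent (magnitudes are ignored, as described above)."""
    dx, dy = unit(travel_dir)
    for cx, cy in threats:
        tx, ty = unit((agent[0] - cx, agent[1] - cy))
        dx, dy = dx + tx, dy + ty
    return unit((dx, dy))

# Agent walking east with a fire directly to the south: the summed
# vectors steer it to the north-east, away from the threat.
heading = avoidance_direction((0.0, 0.0), (1.0, 0.0), [(0.0, -5.0)])
print(tuple(round(c, 3) for c in heading))  # (0.707, 0.707)
```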
The exact area affected by a given threat can be calculated based either on line of sight or on distance from the threat epicentre (see Figure 23). The effect of the threat and the size of the threat awareness radius can change depending upon the type of threat and the running scenario. For example, when a red team member draws their weapon, the threat that they initially represent is assumed to be based merely on line of sight, i.e. only agents who can see the terrorist become aware of the threat. However, when a red team member fires their weapon, the threat is temporarily changed to being distance based in order to reflect the fact that the gunshot sound travels further, thereby making agents with no direct line of sight aware of the threat. Furthermore, the

radius of the threat area increases to represent the fact that agents can identify the threat at greater distances.

Figure 23 Two methods of calculating the threat area

In the current implementation of the model, the distance-based awareness area is applied for 1 second after a shot is fired, which is considered long enough to make the agents aware of the threat. Once this time elapses, the threat reverts to being based on line of sight. If continuous shots are fired, the distance-based awareness area is maintained. In addition to the awareness area being calculated using different methods, the radii of the areas corresponding to different levels of threat also change. The initial awareness radius, applied when the red team member draws their weapon, is based on the line-of-sight awareness area and is assumed to correspond to the distance at which agents can see the threat. When the red team member discharges their weapon, the awareness radius increases to represent the distance at which other agents are able to hear the threat, and the distance-based awareness radius is used. It is worth noting that blue team members also pose a threat when firing their weapons; as described above, this threat is applied for 1 second. While the simulated agents do not flee from a police officer holding their weapon, they will start fleeing if the police start firing in response to a threat from the red team. In the current implementation, the default radii are defined as follows:

Threat radius = 10 metres
Visible (line-of-sight based) Awareness radius = 25 metres
Audible (distance based) Awareness radius = 35 metres

Within the UNITY host the user (i.e. the trainer) can initialise the alarm indicating that the simulated agents should commence the evacuation process. Upon starting the alarm, a message is sent from the UNITY host telling EXODUS to start the alarm, and hence make the agents respond accordingly.
By default, when any alarm is issued the agents are assumed to respond according to a response time distribution derived from an actual theatre evacuation that was purposely conducted and analysed. Based on this analysis it was determined that the agents would respond between 0 and 180 seconds according to the log-normal distribution derived from that study (i.e. mean = 3.99 and standard deviation = 0.54). It is important to note that once the alarm was started, each agent was effectively randomly assigned a response time from the aforementioned distribution. Each agent was assumed to continue undertaking their assigned itineraries (i.e. waiting, queuing or moving through the structure, etc.) until such time as their individual response time had elapsed, whereupon

they would immediately abandon their remaining tasks and commence immediate evacuation of the structure via the nearest available exit point. It is important to note that even though each agent is randomly assigned a response time from the log-normal distribution when the alarm is started, agents may still commence evacuation beforehand. Agents who are exposed to excessive levels of convective heat, radiative heat or smoke will deem their environment a direct threat to their safety, and hence will commence immediate evacuation even though their individual response time may not have elapsed. This is also true for agents in close proximity to a fire or armed terrorist, who are hence exposed to its corresponding static/dynamic threat (see Figure 21 and Figure 22). A new feature that was developed for Pilot 2 was the ability to add suspect and non-suspect packages to the geometry. The user of the UNITY platform (i.e. the trainer) has the ability to drag and drop bags into the structure. The locations where the bags are dropped are sent to EXODUS and are considered insurmountable obstacles by the crowd agents. However, user-controlled agents such as members of the blue team are able to interact with those bags to determine their nature and thereafter control the crowd as needed. For Pilot 3 the threat model was enhanced to include additional agent behaviours: running, communicating, enhanced fleeing behaviour, trapped behaviour and fleeing behaviour. To represent running, a new urgency attribute was introduced for each simulated agent. Once an agent becomes aware of a threat, their urgency value is assumed to increase. Within EXODUS, urgency is applied multiplicatively to an agent's current travel speed. By default, the urgency value increases to 2.0 (user configurable), indicating that they will immediately start travelling at double their current travel speed.
In this manner, when agents react to a given threat they increase their travel speed and effectively start to run. When a threat becomes known to an individual it is expected that this information will be communicated to others around them. To provide this level of realism and adaptation, agents within the model were given the ability to communicate the presence of a threat to those nearby. Therefore, any agent who is aware of a threat (i.e. has at some point entered an awareness/threat area of a particular threat) can pass on knowledge of the threat to other agents who are currently unaware of it. Within EXODUS the user can define an inform distance within which this communication can take place. An agent who is aware of a threat and whose distance from an unaware agent is within the inform distance will pass on knowledge of the threat. Once an agent is informed of a threat, their urgency increases, they start running and commence evacuation, attempting to evacuate in the same direction and use the same exit as the agent that informed them. In this manner, information about a threat can propagate from one agent to another, spreading through the structure's population. By default, the inform radius is set to 2 metres (user configurable). As agents attempt to flee from a threat they may inadvertently find themselves forced against walls or barriers, or into dead ends from which they are unable to retreat any further. In this way the agents are effectively trapped. When this happens the agents will crouch in an attempt to become less visible, pose a smaller target and appear less threatening to the red team members, and to allow the blue team members to better target the terrorists (red team members). Within EXODUS these agents remain trapped until a threat gets very close, at which point they will attempt to flee to safety. The distance to which the red team member has to approach the trapped agent is

defined as the flee radius, which is assumed to be 5m. It is worth noting that no actual data for this distance are currently available; however, the model is designed so that suitable data can be added by the user once they become available. In the current implementation, the probability of a trapped agent fleeing in an attempt to reach safety is, for lack of suitable data, set to 100%.

3.4 Modelling impact of explosive devices on simulated agents

Explosive devices of various sizes can be detonated at specific or random locations. EXODUS is able to determine the effects of the explosions on the agents (various levels of injury) as well as on the structure (various levels of damage). To determine the effect that an explosion has on an individual agent, UOG developed the concept of the Fractional Blast Dose (FBD) value. This value is calculated for each injured agent in the simulation. The FBD is defined as a value between 0.0 and 1.0 indicating the level of injury an agent has sustained due to the effects of an explosion. Initially all agents have an FBD value of 0.0, indicating that they are not injured. An agent that is fatally wounded by an explosion will have an FBD value of 1.0. Minor injuries are reflected by an FBD value between 0.0 and 0.3; the effect of this type of injury on an agent is a reduction in travel speed. Agents with an FBD value greater than 0.15 will additionally exhibit staggering behaviour when walking. The condition of agents during the simulation is not necessarily static; for example, seriously or very seriously wounded agents can deteriorate over time if not treated. Several sizes of explosive device can be utilised within the model, including 0.227kg, 1kg, 2kg, 5kg, 10kg, 20kg and 100kg equivalents of TNT. The effect of the explosives on the population is directly dependent on their distance from the epicentre of the explosion. This data was provided by ISRATEAM.
Furthermore, a probability of each agent being affected by the explosion is defined. As mentioned earlier, initially all agents are assumed to have no injury. However, once a device is detonated, the condition of each agent is determined by their distance from the centre of the explosion combined with their probability of being affected by it. The injury levels incurred include the following categories:

Minor injury
Moderate injury
Serious injury
Very serious injury
Fatal injury

These five categories were identified based on data collected by ISRATEAM. Further information on how these correspond to the various FBD values, and what those mean, can be seen in Table 1. For each injured agent an appropriate FBD value is randomly selected between the minimum and maximum values of their corresponding injury category.
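The random selection of an FBD value within an injury category band can be sketched as below; the injury radii and the exact FBD band edges are illustrative assumptions (the text fixes only 0.0-0.3 for minor injuries and 1.0 for a fatality), and the separate per-agent probability of being affected is omitted:

```python
import random

# Hypothetical injury radii (metres) for a single charge size; the
# real radii come from the ISRATEAM data and vary with charge mass.
INJURY_BANDS = [
    (5.0,  (0.8, 1.0)),   # very serious / fatal (assumed band edges)
    (10.0, (0.6, 0.8)),   # serious (assumed band edges)
    (15.0, (0.3, 0.6)),   # moderate
    (25.0, (0.0, 0.3)),   # minor
]

def assign_fbd(distance_m, bands=INJURY_BANDS):
    """Draw a random FBD value uniformly from the band matching the
    agent's distance from the epicentre; agents beyond the outermost
    radius remain uninjured (FBD = 0.0)."""
    for radius, (lo, hi) in bands:
        if distance_m <= radius:
            return random.uniform(lo, hi)
    return 0.0

print(assign_fbd(30.0))  # 0.0 - outside all injury radii
```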

Table 1: FBD values and corresponding injuries

FBD | Injury | Description
0.0 - 0.3 | Minor Injury | Basically walking wounded. Minor visible cuts and abrasions, possibly limping, visibly affected, possibly hunched over, travelling at a reduced travel speed (i.e. Walk Speed). Will require inspection by a professional medic, but not hospitalisation.
0.3 - 0.6 | Medium Injuries | Immobilised and sitting down waiting for assistance/medical help. Conscious and capable of understanding and reacting to those around them, but in some pain. Stable, but with visible cuts, abrasions and blood loss slightly worse than those with minor injuries. Will require treatment on site and evacuation to hospital.
0.6 - 1.0 (lower band) | Serious Injuries | Conscious, immobilised, lying down, moaning, in severe pain, continuous and visible loss of blood. Condition can deteriorate and become very serious if not treated.
0.6 - 1.0 (upper band) | Very Serious Injuries | Immobilised and not conscious, lying down, no visible movement; serious injuries/abrasions/blood loss are visibly evident. Condition can deteriorate and become fatal if they do not receive medical assistance.
1.0 | Fatality | Visually probably largely the same as for those with very serious injuries (see above).

For agents that have incurred a minor injury (i.e. an FBD between 0.0 and 0.3), which still allows the agent to walk, a Mobility Degradation Factor (MDF) is calculated. The MDF value is a multiplicative factor that determines the travel speed of an individual agent. The function that determines the MDF value for each individual is of the form MDF = e^(-c*FBD), where c is a positive model constant; a graphical representation is depicted in Figure 24.

Figure 24 Graphical representation of the FBD function; note that the FBD effect is capped when mobility reaches 0.1

As the FBD value increases, the agent's MDF is reduced, and this in turn reduces their travel speed. Within EXODUS the effect of an ever increasing FBD value is capped when the mobility of the agent reaches 0.1. Between FBD values of 0.0 and 0.15 an agent simply slows down.
However, between 0.15 and 0.3 an agent will not only use their minimum travel speed but they will also exhibit 36

staggering behaviour, indicating that their injury hinders their navigation abilities. For FBD values above 0.3 the agent is immobile.

A time-dependent deterioration of the injury level of an agent is also modelled for the categories of serious and very serious injury, to represent the deteriorating effects of the injury. Therefore, for FBD values between 0.6 and 1.0 (i.e. serious and very serious injuries) an agent's condition deteriorates over time. The rules that govern this deterioration are as follows:

a) Seriously injured agents will become very seriously injured within a maximum of 30 minutes (based on their initial FBD value)
b) Very seriously injured agents will become fatally injured within a maximum of 30 minutes (based on their initial FBD value)

As with the other explosion effects data, these rules were provided by ISRATEAM. For each explosion that takes place within the model four injury radii are considered:

1) Minor injuries/anxiety/distress: usually people that can walk; require inspection by a professional medic team and first aid, but not necessarily hospitalisation
2) Medium injury: require on-site treatment and evacuation to hospital
3) Serious injury: would turn very serious if not treated within 30 minutes, i.e. move to the next injury state
4) Very serious/fatal injury: might die within 30 minutes

Figure 25 Explosion injury radii (very serious/fatal, serious, medium and minor injuries)

It is worth noting that the injury radii do not necessarily correspond to the radii defining damage effects. As a result of detonating a bomb in EXODUS, agents within sufficient proximity will either die (and hence be added to the mortuary list) or will merely become completely immobilised. Agents suffering minor injuries are assumed to still be mobile and hence will react to the presence of the threat and evacuate the structure by the nearest available safe route and exit.

3.5 Modelling impact of explosive devices on structure

Within EXODUS an explosion can also affect the structure of the modelled geometry. The level of structural damage resulting from an explosion, which may pose a possible obstruction to the evacuating population, is modelled in a similar fashion to the explosion effects on the agents. The damage level depends solely on the distance from the centre of the explosion. The damage levels caused by explosions range from minor damage to severe damage/partial demolition. Within the model, damaged areas will pose an obstruction to the moving agents and will hinder their ability to move efficiently within the structure. Areas subjected to severe damage/partial demolition, i.e. near the epicentre of the explosion, will be deemed impassable to the crowd agents, thereby making certain evacuation routes unavailable. Each explosion results in four damage radii (see Figure 26):

a) Severe damage/partial demolition: major structural damage, region impassable to all agents
b) Moderate damage: up to major structural damage, debris, region traversable only by younger/more athletic agents
c) Medium damage: up to moderate structural damage, only the movement of elderly/mobility-impaired agents likely to be restricted
d) Minor damage: minor structural damage, broken glass, region remains traversable

Figure 26 Damage radii caused by an explosion within a structure (severe, moderate, medium and minor damage)

3.6 Integrating explosion data with UNITY

Both the resulting injuries or fatalities to the agents and the damage to the structure due to an explosion are sent from the EXODUS player to the UNITY server. This allows UNITY to correctly render both the injuries to the agents and the damage to the structure (see Figure 27).
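The four damage bands of Section 3.5 and their traversability rules can be sketched as follows. The radii and the agent profile labels are placeholder assumptions for illustration, not the values used in EXODUS.

```python
# Hypothetical damage radii in metres (innermost first) -- placeholder values.
DAMAGE_BANDS = [
    (3.0, "severe/partial demolition"),
    (6.0, "moderate"),
    (10.0, "medium"),
    (15.0, "minor"),
]

def damage_level(distance_m):
    """Map distance from the epicentre to one of the four damage levels."""
    for radius, level in DAMAGE_BANDS:
        if distance_m <= radius:
            return level
    return "none"

def passable(level, agent_profile):
    """Whether an agent can traverse a damaged region, per rules a)-d) above.
    agent_profile is one of 'athletic', 'average', 'impaired' (assumed labels)."""
    if level == "severe/partial demolition":
        return False                          # a) impassable to all agents
    if level == "moderate":
        return agent_profile == "athletic"    # b) younger/more athletic only
    if level == "medium":
        return agent_profile != "impaired"    # c) elderly/impaired restricted
    return True                               # d) minor damage, or no damage
```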

Figure 27 Agent injury and structural damage rendering within UNITY (left: trainer's view; right: trainee's view)

3.7 Modelling shooting injuries on simulated agents

Within EXODUS an agent that is deemed to have been shot is assumed to be fatally wounded. This represents the most conservative case, due to the current unavailability of actual data that would indicate the probability of being injured depending on the location of the injury. A hypothetical, but not implemented, model is shown in Figure 28. If suitable data becomes available then this type of model could be used to determine the shooting injury levels for each agent shot within the model, but at present such data is not available.
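The hypothetical model of Figure 28 could be sampled as follows. The probability table restates the figure's values; the function itself is an illustrative sketch, not part of EXODUS.

```python
import random

# Hypothetical per-location injury probabilities from Figure 28 (percentages).
SHOT_OUTCOMES = {
    "head":        {"fatal": 90, "very severe": 10},
    "upper torso": {"fatal": 30, "very severe": 30, "severe": 40},
    "lower torso": {"fatal": 25, "very severe": 20, "severe": 35, "medium": 20},
    "arms":        {"fatal": 5, "very severe": 10, "severe": 25,
                    "medium": 35, "minor": 25},
    "legs":        {"fatal": 10, "very severe": 10, "severe": 25,
                    "medium": 45, "minor": 10},
}

def sample_shot_injury(location, rng=random):
    """Sample an injury level for a shot at the given body location,
    weighted by the hypothetical probabilities above."""
    outcomes = SHOT_OUTCOMES[location]
    levels = list(outcomes)
    weights = [outcomes[k] for k in levels]
    return rng.choices(levels, weights=weights, k=1)[0]
```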

The hypothetical per-location injury probabilities are as follows:

- Head shot: Fatal 90%, Very severe 10%
- Upper torso: Fatal 30%, Very severe 30%, Severe 40%
- Lower torso: Fatal 25%, Very severe 20%, Severe 35%, Medium 20%
- Arms: Fatal 5%, Very severe 10%, Severe 25%, Medium 35%, Minor 25%
- Legs: Fatal 10%, Very severe 10%, Severe 25%, Medium 45%, Minor 10%

Figure 28 Hypothetical and possible model for shooting injury effects on individuals

3.8 Representation methods of 3D Graphics for crowd and scene rendering

One important component of any serious game is the visual representation of crowds of people and of the scene in general. For this reason, we have analysed the currently available methods for representing humanoids, both for crowd rendering and for the players of the game, and also for scene objects. There are various ways to mathematically express geometric objects, which are considered below. Geometric objects in space are specified in either the implicit or the parametric form. The implicit representation has the form F(x) = 0, where x is a point on the surface implicitly described by the function F (for example, a sphere is described by x^2 + y^2 + z^2 - r^2 = 0). Using this representation we can easily examine whether a given point x is on the surface. However, there is no direct way to systematically generate consecutive points on the surface; for this reason the implicit form is not very suitable for real-time graphics. In contrast, the parametric form lets us directly generate points on the surface. The parametric representation has the form x = F(u, v), where u and v are surface parameters that usually take values from 0 to 1, and x is a point on the surface. Using the parametric

representation, curves and surfaces can be efficiently described and used in the representation of humanoids, especially in the real-time generation of graphics, and also of objects placed in the scenes in general.

3.8.1 Curves and Surfaces ([Prautzsch, 2002], [Shene, 1997])

Considering that in the AUGGMED project the serious game uses real-time graphics, the parametric form is the one that suits our purposes best, and therefore we have mainly focused on this domain. For our analysis we have first considered the case of 2D geometries (curves) and then moved on to 3D (surfaces), as the 3D methods are just an extension of the 2D ones, taking into account another dimension. Firstly, it should be noted that the mathematical methods mentioned below all make use of affine space. The concept of an affine space is an extension of a vector space, with the addition that it supports vector-point addition as well as point-point addition, while any point in the space can serve as the origin. The following parametric polynomial representations/curves have been studied for this analysis.

Bezier representation:

    C(u) = sum_{i=0..n} B_{n,i}(u) P_i

where:
- the P_i are the control points; in total n + 1 points that roughly indicate the shape of the desired curve
- the coefficients B_{n,i}(u), 0 <= i <= n, are Bernstein polynomials
- n is the degree of the polynomial/curve

B-Spline:

    C(u) = sum_{i=0..n} N_{i,p}(u) P_i

where:
- the P_i are the control points; in total n + 1 points
- U = {u_0, u_1, ..., u_m} is the knot vector, with m = n + p + 1 (i.e. the knots are u_0, u_1, ..., u_{n+p+1})
- the N_{i,p}(u) are B-spline basis (polynomial) functions of degree p with joining points at the knots in [u_i, u_{i+p+1}], and have non-negative values 0 <= N_{i,p}(u) <= 1
- p is the degree of the spline

NURBS (Non-uniform rational B-Spline):

    C(u) = ( sum_{i=0..n} w_i N_{i,p}(u) P_i ) / ( sum_{i=0..n} w_i N_{i,p}(u) )

where:
- the P_i are the control points; in total n + 1 points
- U = {u_0, u_1, ..., u_m} is the knot vector, with m = n + p + 1 (i.e. the knots are u_0, u_1, ..., u_{n+p+1})
- w_0, w_1, ..., w_n are the weights; generally these are positive numbers, but negative values or zero can also be used
- the N_{i,p}(u) are B-spline basis (polynomial) functions (see B-Splines above) of degree p with joining points at the knots in [u_i, u_{i+p+1}], and have non-negative values 0 <= N_{i,p}(u) <= 1
- p is the degree of the spline

The above representations are some of the main and most commonly used curves in 2D graphics. NURBS are a generalisation of B-Splines, and B-Splines are a generalisation of Bezier curves; i.e. under some specific conditions a NURBS curve can represent a B-Spline or a Bezier curve, and a B-Spline can represent a Bezier curve. Going up this generalisation hierarchy gives us more flexibility, which allows us to represent more accurate shapes, and at the same time to do so in a less computationally expensive way. For example, while B-Splines are flexible they are not able to represent circles. This is solved by using homogeneous coordinates on the generalisation of B-Splines, i.e. rational curves: NURBS. In the AUGGMED project we are using 3D graphics, and as previously mentioned the above representations can be extended to 3D representations or (as they are called) parametric surfaces. Generally, many parametric surface patches are joined together side by side to form a more complicated shape. The following definition of B-Spline surfaces is given as an example.
Given:
- a set of (m + 1) x (n + 1) control points p_{i,j}, where 0 <= i <= m and 0 <= j <= n
- a knot vector of h + 1 knots in the u-direction, U = {u_0, u_1, ..., u_h}
- a knot vector of k + 1 knots in the v-direction, V = {v_0, v_1, ..., v_k}
- the degree p in the u-direction
- the degree q in the v-direction
- N_{i,p}(u) and N_{j,q}(v), the B-Spline basis functions of degree p and q respectively

the B-Spline surface is defined by

    S(u, v) = sum_{i=0..m} sum_{j=0..n} N_{i,p}(u) N_{j,q}(v) p_{i,j}

The surface definition above is an example of a Tensor Product Surface, which constructs surfaces by multiplying two curves (note that Bezier and NURBS surfaces also belong to the same category). In other words, the basis functions of the first curve are multiplied with the basis functions of the second, and the result is used as the basis function for a set of 2D control points. The result of this is a surface.
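These parametric forms are straightforward to evaluate in code. As an illustration, the following sketch evaluates a Bezier curve using de Casteljau's algorithm, which is numerically equivalent to summing the Bernstein form given earlier:

```python
def de_casteljau(points, u):
    """Evaluate a Bezier curve C(u) at parameter u in [0, 1] by repeated
    linear interpolation of the control points (de Casteljau's algorithm)."""
    pts = list(points)
    while len(pts) > 1:
        # Replace the n+1 points with n interpolated points, until one remains.
        pts = [tuple((1 - u) * a + u * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier through three control points: at u = 0.5 the curve passes
# through the midpoint of the two chord midpoints.
p = de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)
# p == (1.0, 1.0)
```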

To conclude, these surface parametrisations can be used in AUGGMED for designing scene objects and body parts, helping to make the game look more realistic while at the same time allowing the designers to make changes more efficiently when needed. In particular, this approach makes graphics more realistic by providing a geometrically higher level of detail (LOD). In addition, if for example a designer wanted to draw a vase, it becomes fairly simple to alter the shape by just modifying the control points rather than having to redraw the whole object manually. Finally, we mention that a parametric surface can be triangulated into triangles or polygons, generating a mesh (a collection of polygons). These meshes can be rendered very efficiently with existing graphics programming libraries and are nowadays used broadly.

3.8.2 Real-time crowd rendering

During this part of the work we have also looked more specifically into the rendering of crowds, especially since this is an important part of the scenarios used in AUGGMED. The task of rendering big crowds has been a huge challenge in the graphics community. In 2015 the Computer Graphics Forum published a survey [Beacco, 2015] of real-time crowd rendering. We have used this survey as a reference to find out how crowd rendering can be improved in AUGGMED. The aim of the survey is to provide a complete, up-to-date overview of the state of the art in crowd rendering and an in-depth comparison of techniques. This is important since there is a trade-off between the number of animated characters simulated and the factors affecting their visual quality. In addition to the rendering process, the simulation process (taken care of by the EXODUS software in AUGGMED) as well as the animation computations should be considered, due to the fact that rendering is strongly tied to them. Starting with the character animation, the following methods have been considered:

1. Skeletal animation: the character is represented by a polygonal mesh (or skin) and an underlying skeleton. The bones of the skeleton (one or more) are each associated with the vertices of a portion of the mesh. Once the bones are moved, the vertices associated with them are deformed. The deformation of the skin geometry is called geometric skinning; a common skinning technique is Linear Blend Skinning.
2. Cage-based animation: makes use of one or more cages enclosing the model to facilitate the animation while preserving the smoothness of the deformed meshes.
3. Per-vertex animation: also called shape interpolation; refers to interpolating between some pre-stored key frames (each one defining a different pose for an instant of time) in order to obtain new frame deformations.

Each of the methods above has its advantages and disadvantages; however, we have limited the analysis to these three because they are the ones that can be applied in real time to large groups of people (thousands of people).
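Linear Blend Skinning, mentioned under method 1, computes each deformed vertex as a weighted sum of the vertex transformed by every influencing bone: v' = sum_i w_i (M_i v). A minimal NumPy sketch follows; the function name and array layout are our own illustrative choices.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, weights):
    """Linear Blend Skinning: v' = sum_i w_i * (M_i @ v) for each vertex.
    rest_vertices: (V, 3) rest-pose positions
    bone_transforms: (B, 4, 4) homogeneous bone matrices for the current pose
    weights: (V, B) skinning weights, each row summing to 1."""
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])          # (V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)  # (B, V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]
```

A vertex weighted half to a static bone and half to a bone translated along x ends up halfway between the two transformed positions, which is the characteristic (and sometimes artefact-prone) averaging behaviour of this technique.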

In Figure 29 below we can see a comparison of some of the representation methods that are used in computer graphics. These techniques are categorised into:

- Level-of-Detail (LoD) representations for characters: polygon-based techniques, point-based techniques, image-based techniques and some hybrid techniques
- Acceleration techniques: culling techniques, instancing and pseudo-instancing techniques, and dynamic caching

Each of the techniques mentioned above has its trade-offs; their advantages and disadvantages need to be considered carefully according to the aims of the project. In the case of AUGGMED, the LoD approach has been followed, since it meets the requirements with just a couple of LoDs for the humanoids, selected depending on the distance from the camera. This provides good realism for the purposes of the project and does not constitute a bottleneck for the system, while at the same time allowing a straightforward implementation. In addition, lighting, shading and shadowing can improve the way we perceive the characters. Moreover, some other factors are important for making a crowd feel more realistic. These are animation individuality (i.e. the more different animations we use, the more realistic the crowd will look); similarly, clothing and crowd variability can provide the sense of a more realistic environment in a game. However, both of these are expensive to have in large amounts and are considered a bottleneck for the CPU and GPU. To conclude, although at this time the AUGGMED project does not make use of crowds of many thousands of people, in future releases of the game, if an environment with this number of people has to be simulated, the methods explained above are considered highly valuable for designing and implementing a state-of-the-art real-time crowd.
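The distance-based LoD selection described above can be sketched as follows; the two thresholds are illustrative values, not the ones used in AUGGMED.

```python
def select_lod(distance, thresholds=(10.0, 30.0)):
    """Pick a level-of-detail index for a character from its camera distance.
    Thresholds (in metres) are illustrative: index 0 = full mesh,
    1 = reduced mesh, 2 = lowest detail."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # beyond the last threshold: coarsest LoD
```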

Figure 29 Comparison of Representation Methods [Beacco, 2015]

4 Simulations of threats

The most challenging threats, and the most difficult to alleviate, are cyber-physical attacks targeting the communications infrastructure and subsequently attacking systems and appliances related to PSIM. The objective of cyber-physical attacks is to gain access to, control, or make unavailable first the network and second the PSIM infrastructure, such as CCTV networks, access controls, building management, automation and control systems. In the AUGGMED project the cyber-attack will be executed in two stages: in the first stage access will be gained to the network infrastructure, and in the second stage executable code will start making systems unavailable and unstable. The two phases of the cyber-attack will be executed one after the other in a serial manner, separated by a few hours. A cyber-attack that instead takes place gradually over a period of time is the most dangerous and the most difficult to identify, detect and mitigate. During the 3rd AUGGMED exercise we will provide switch administration and access port security from a central node of the system. Regardless of how users and devices are connected to the network (wired, wireless or VPN), the AUGGMED network administrator will have complete visibility and control over the network. Different VLAN configurations will appear like standard hardware interfaces and will be managed accordingly, by applying different network and application security policies to specific virtual ports. Furthermore, due to the diversification and sophistication of cyber-attacks, hackers today breach the perimeter defence of network infrastructures as networks become more and more flat. In AUGGMED we will provide multiple layers of defence before granting access to the infrastructure, by enabling explicit internal segmentation with firewall policies between users and resources, which limits traffic and helps break the infection chain.
The solution will be based on the FortiGate UTM model 140D, which has a powerful ASIC processor allowing real-time Internal Segmentation Firewall policies. The equipment and its technical characteristics are referenced below. During the exercise, protection against known cyber-attacks will also be provided through real-time intrusion inspection that goes beyond port and protocol filtering, by inspecting the actual content of the traffic exchanged on the AUGGMED wireless network set up to support the exercise. If needed, VPS and SSL inspection will also be provided.

Figure 30 Hardware specification of FortiGate 140D-PoE

Further to the above, a cipher key random generator will generate purely random 6-digit codes that everybody entering the AUGGMED wireless network will need to acquire in order to be granted access to the network. In order to achieve the highest possible security, these codes will be valid only for a short period of time, not exceeding 60 seconds. AUGGMED partners will have to update the codes every time they enter the wireless network. The cipher and the random generator will be unique for every individual partner of the project. Figure 31 presents the token that will be provided to the AUGGMED partners.

Figure 31 Technical characteristics of FortiToken 200

Also, in order to simulate a cyber threat in the physical security information management domain of the cruise terminal, and in order not to interfere with the live systems of the port, a PSIM equivalent of the system will be provided (TVT model TD2716-AE) to simulate the entire network. The technical characteristics of the system are provided in Figure 32.
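The behaviour of such a time-limited token can be illustrated with a standard TOTP-style computation (RFC 6238): the token and the wireless controller derive the same short-lived code from a shared secret and the current time, so the code never has to be transmitted in advance. This is only a sketch of the general mechanism; the actual FortiToken algorithm and key handling are not reproduced here.

```python
import hashlib
import hmac
import struct
import time

def token_code(secret: bytes, interval=60, digits=6, now=None):
    """Generate a time-based numeric code valid for one interval (here 60 s),
    in the spirit of RFC 6238 TOTP / RFC 4226 HOTP."""
    counter = int((now if now is not None else time.time()) // interval)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both sides compute the same code for the same time window, so the controller can validate a user's entry by recomputing it; once the interval elapses, the code changes and the old one is rejected.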

Figure 32 Technical characteristics of TVT TD2716-AE

It has been decided that, in order to avoid interfering with the live IT infrastructure of the PPA, the exercise will not use any of the existing operational networks, databases or telecommunications infrastructure. In the previous two exercises, in West Yorkshire and Barcelona, only wired communication infrastructures were provided to the VR and MR peripherals and their devices. In the PPA exercise there will be an autonomous and dedicated secure wireless communication network, based on WiFi, for the VR and MR peripherals, devices and sensors to communicate with each other and with their software application databases. The WiFi network is designed to allow a maximum of 20 users to communicate simultaneously with each other and with the application databases. Special security features have been designed in order to provide secure wireless access to the aforementioned users. A two-factor authentication system is designed in order to provide integrity, authorisation and authentication. A series of encryption keys is generated with thermal noise and a PRBS generator in order to enable different AUGGMED users to access the wireless network with different keys. A hardware token will be provided to the AUGGMED partners onto which all the aforementioned encryption keys will be uploaded; at the same time the encryption keys will be uploaded to the wireless controller in order to be able to continuously authenticate every individual user. The system is designed to support two network profiles that reflect the three user roles, i.e. AUGGMED power user, AUGGMED device access user and AUGGMED threat user. The threat user will

be responsible for compromising the overall system's cyber security and, furthermore, for creating system problems related to unavailability, denial of service and other known network problems. At the same time, during the exercise there will be actions of a technical nature that compromise the performance and reliability of the voice communication technologies used by PPDR units when they dispatch voice commands to their staff. Some of the parameters taken into consideration for the design and implementation of the WiFi network are the following:

- Network access requirements for the devices and the systems, modules and components, including user profiles, restrictions per profile, and level of access per profile during the pilot exercise.
- Network bandwidth and throughput requirements per system/module/component during the pilot exercise.
- Security protocols and procedures that need to be applied in the AUGGMED project in order to ensure restricted access for authorised users, data protection and secure data transfer for different systems/modules/components, user devices, user roles and profiles.
- Definition of security threats already experienced for different systems/modules/components, user devices, user roles and profiles.
- Execution of a small-scale cyber security vulnerability exploitation during the exercise that will demonstrate the impact on the WiFi IP data exchange communications and on the voice communication systems. IP will demonstrate the dynamic effects of a cyber-attack, such as denial of service and network flooding, on the aforementioned communications infrastructures, in order to help AUGGMED partners record the results of these problematic behaviours/phenomena and their impact during the execution of operational measures by the first responders and security forces.

4.1 Fire Simulation

Within the AUGGMED platform all predictions of the dispersal of heat, smoke and toxic gases are calculated using the SMARTFIRE CFD model produced by UoG.
Prior to the incorporation of any fires into the AUGGMED platform, the corresponding geometry first needs to be built within SMARTFIRE (see Figure 33). This is achieved within SMARTFIRE via the Scenario Designer and the Case Specification Environment. The Scenario Designer tool typically enables CAD DXF files of the structure to be imported and then manipulated in order to accurately reflect the 3D structure being modelled (i.e. locating the rooms, defining the heights of doors, the locations of windows/vents etc.). The structure can also be automatically partitioned into hazard sub-volumes, thereby enabling the subsequent predictions of the hazardous fire effluents (i.e. heat, thermal radiation, smoke and toxic gas concentrations) to be output in suitable formats readable by either EXODUS or UNITY. Once the overall geometry has been constructed, the SMARTFIRE Case Specification Environment then enables the user to define the configuration of the fire(s) within the structure (i.e. their location and evolution profile etc.), the boundary conditions, the time step to be used and the duration of the simulation.

Figure 33 Plan view of the airport terminal scenario used within Pilot 1, as shown in SMARTFIRE (possible fire locations shown in red)

The time taken to run the fire simulation (and hence calculate any predictions of heat, smoke and toxic gases) varies depending upon the cell budget used for the mesh, the time step size and the required duration of the simulation. Due to these computational constraints, the process of simulating each scenario can typically take many days, and hence this must be completed before any scenario is run in UNITY (i.e. modelling the spread of fire effluents from the fires cannot be performed during the UNITY run time). Once a fire scenario simulation has been successfully completed it is then possible to generate the corresponding hazard export data files required by both EXODUS (to determine population exposure effects) and UNITY (to visualise the evolution of the perceivable fire hazard conditions). The precise hazard information (i.e. from the available heat, thermal radiation, smoke and toxic gas concentrations) and the spatial organisation of the hazard data required by EXODUS and UNITY differ; hence each requires a specific file format in order to import the corresponding hazard data. EXODUS typically only requires hazard data within each zone corresponding to the regions agents are exposed to when either standing or crawling (see Figure 35). These standing and crawling regions (i.e. horizontal cross-sections at about 0.5 m and 1.8 m in height, depending on the population characteristics) are typically defined within SMARTFIRE prior to running the fire simulation. The data within these regions includes predictions of all the available fire agents likely to affect (i.e. kill or reduce the mobility of) the population. This includes heat (both convective and radiative), narcotic gases (i.e. CO, CO2, HCN, O2 depletion etc.), irritant gases (i.e. HCl, HBr, HF, SO2, NO2 etc.) and smoke, at each time step.
This data is typically output in EXODUS's existing ASCII hazard file format. In contrast to EXODUS, in order to accurately represent the spread of the fire throughout the structure over time, UNITY requires only the relative smoke concentrations within each zone (see Figure 34). However, UNITY requires the smoke concentrations within each zone from all vertical levels (see Figure 35) and from the entire geometry (unlike the data required for EXODUS); this provides the fidelity necessary to represent the smoke in the VR environment.

Figure 34 Smoke spread at various stages within UNITY

Using this approach, the region between the floor and the ceiling within each zone is effectively evenly divided into a given number of levels. For Pilot 1 each zone was separated into ten vertical levels. Since the ceiling of the airport geometry was assumed to be at 3.5 m, each level had a height of 35 cm. To output this data a new hazard file format was specifically developed, enabling layered hazard data to be exported to UNITY in order to visualise the perceivable hazards (i.e. smoke) more accurately (see Figure 35).

Figure 35 The different hazard data output options for each zone for both EXODUS and UNITY
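The layered division described above (Pilot 1: a 3.5 m ceiling split into ten 35 cm levels) can be sketched as a simple mapping from a height within the zone to a layer index; the function name is our own, illustrative choice.

```python
def smoke_level_index(height_m, ceiling_m=3.5, n_levels=10):
    """Map a height within a zone to one of the vertical smoke layers used
    for the layered UNITY export (Pilot 1: 3.5 m / 10 levels = 0.35 m each)."""
    layer_height = ceiling_m / n_levels          # 0.35 m per level for Pilot 1
    idx = int(height_m // layer_height)
    return min(max(idx, 0), n_levels - 1)        # clamp into [0, n_levels - 1]
```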

In total, thirteen different fires were examined or considered. However, for the purposes of Pilot 1, four fire scenarios were selected and simulated using SMARTFIRE. These were located at two locations: one on the land side and one on the air side. For each of the two fire locations two fire variations were modelled, one at 4 MW and the other at 8 MW. The land-side fire was located in the vicinity of the main entrance to the airport terminal. The air-side fire was located just after the security check, where people enter the air-side lobby (see Figure 36).

Figure 36 The locations of the land-side and air-side fires

The fires are assumed to be of the gaseous fuel release type (see Figure 37), producing heat, smoke, HCl and toxic species (see Figure 38). The modelling included small features of the geometry such as queue tapes, pillars etc., but these had a negligible impact on the fire spread and the fire modelling results. When the fire results are imported into EXODUS the scenarios use 2 m x 2 m hazard sub-volumes, requiring 2129 sub-volumes in total to represent the fire products. All four fire scenarios were run for 600 seconds of simulated time. With the current implementation each simulation took more than two full days to complete.

Figure 37 Fuel generation rate produced by SMARTFIRE

Figure 38 Smoke concentration and temperature levels as reported by SMARTFIRE at 300 seconds into the fire simulation

Given the computational demands of running these simulations, the fire scenarios were completed well before the Pilot 1 exercise. Given the amount of data produced by SMARTFIRE and the corresponding time taken to read it into EXODUS, the fire data has to be loaded into EXODUS prior to the commencement of the pilot exercise. Consequently, the user (i.e. trainer) must define which of the four available fires they wish to use via the trainer tools (i.e. prior to the commencement of the simulation). The specific fire that the user has selected is then communicated to EXODUS, whereupon it is loaded ready to be started. Once the pilot exercise is started, red team members can then initiate the fire at its given location. Once this is done, a command is sent via the network from UNITY to EXODUS, whereupon the fire is started within EXODUS. EXODUS is then responsible for reproducing the spread of fire hazards within the structure (see Figure 39) and for determining the effect that the fire products have on the population within the airport terminal (blue and red team members plus the public), and assesses their

condition and determines their injury level. Within UNITY the corresponding information is displayed and visualised for the further activities of the trainees (e.g. giving commands, tagging people).

Figure 39 Spread of fire hazards as indicated by heat and smoke contours in EXODUS

The damage design described above can be reused within different scenarios, depending on the device used. The same marks remain visible within the same training session.

4.2 Simulations of the telecommunication capabilities of security units

At the cruise terminal of the PPA, the telecommunication capabilities offered to security units and public safety and disaster relief (PPDR) agencies are the following:

- Terrestrial Trunked Radio (TETRA)
- Analogue radio
- 3G/UMTS

These three radio infrastructures are seen globally because they are designed to serve PPDR agencies and their operational users. PPDR-specific capabilities are incorporated in the TETRA, 3G/UMTS and analogue radio communications. Further to these radio communication technologies, which offer an advanced level of mobility management and prioritisation, there are other technologies based on the Internet Protocol (IP) that are gaining ground over conventional PPDR communications. Currently there are significant research and development activities in the area of Wi-Fi systems. Researchers are trying to incorporate PPDR and security-specific functionalities into Wi-Fi systems that do not natively offer them. Some important PPDR features developed for Wi-Fi systems are the following:

- Radio call communication over IP
- Group and individual calls over IP
- Call management
- Call priority assignment
- Call busy queuing

- Call late entry
- Call interconnection
- Dispatcher communication emulation
- Emergency and distress

Because Wi-Fi natively operates on a best-effort basis, without any structured quality-of-service mechanisms, all of the call-related PPDR functionalities are very expensive to accomplish. Furthermore, the radio transmission characteristics of Wi-Fi create additional technical and operational challenges due to the limited transmitted power, the relatively high frequency of operation, and the propagation/penetration and attenuation of signals in indoor environments. These limiting factors lead the AUGGMED project to utilise Wi-Fi only for data exchange between software applications, i.e. UNITY and EXODUS. The modelling and telecommunication simulations that will be executed for Wi-Fi will take all of these limitations into account. The most important aspects that will be simulated for their limiting effects are the following:

- Multi-room environment
- Number of rooms
- Number of doors per room
- Wall and door material structure and composition
- Wall, door and glass thickness
- Obstacles and large objects in the room
- Stairs
- Transmission power of the radio equipment
- Number of antennas
- Antenna radiation patterns
- Antenna TX, RX and gain
- Number of users
- Throughput performance
- QoS parameters

4.3 Constraints and disruption caused by human intervention

In the simulation of telecommunication infrastructures there are a number of limiting factors that create a negative impact on, and disruption of, communications. These limiting factors are created either by physics, by electronics or, in some cases, by the human factor.

In AUGGMED we will simulate the constraints caused by all the aforementioned limiting factors. More specifically, the communications simulation will be executed taking into account the following:

i. Network loss due to signal attenuation from various materials
ii. Network loss due to radio interference from other wireless communication systems
iii. Network loss due to radio signal reflections and signal cancellation
iv. Network loss due to obstacles encountered along the way
v. Network loss due to open or closed doors

The areas selected are the main hall located on the ground floor, the check points in the main hall, the offices of PPA staff on either side of the main hall, the staircase which connects the main hall with the second floor where the conference hall of PPA is located, the conference hall itself, and the offices of PPA staff surrounding the conference hall. High-definition snapshots have been taken of all the aforementioned areas; each area is digitized and fed into the radio prediction software. For each individual area/room a radio propagation model is studied in order to select the propagation characteristics that best emulate the real conditions. The same principle has been applied to the objects in the rooms, which in our case act as obstacles to the propagation of radio waves, i.e. they deteriorate the signal-to-noise ratios and the quality-of-service parameters. The wireless systems for which 3D radio coverage modelling is provided are Wi-Fi, VHF communications and 3G/UMTS. These are the communications infrastructures available at the PPA cruise terminal and in all the previously mentioned areas/rooms to which security units and first responders have access. The same telecommunication infrastructure will also be available to the terrorist groups. The simulated models create radio coverage prediction maps that represent every indoor environment of the areas selected to take part in the storyboard.
These maps will also demonstrate the disruption caused by human intervention and by layout changes of the internal physical geometry, e.g. a bomb blast, wreckage, or obstacles that appear on an ad-hoc basis.
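One common way to approximate the indoor network losses listed above is a log-distance path-loss model with per-wall penetration terms. The sketch below is illustrative only: the attenuation values and the path-loss exponent are typical textbook defaults, not calibrated measurements from the PPA cruise terminal:

```python
import math

# Per-wall penetration losses in dB (illustrative defaults).
WALL_LOSS_DB = {"concrete": 12.0, "brick": 8.0, "glass": 3.0, "wood": 4.0,
                "door_closed": 4.0, "door_open": 0.0}

def path_loss_db(distance_m, freq_mhz=2400.0, exponent=3.0, walls=()):
    """Log-distance path loss plus wall penetration losses.

    Free-space loss at the 1 m reference distance:
    FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44, with d_km = 0.001.
    """
    fspl_1m = 20 * math.log10(freq_mhz) + 32.44 - 60.0
    loss = fspl_1m + 10 * exponent * math.log10(max(distance_m, 1.0))
    loss += sum(WALL_LOSS_DB.get(w, 0.0) for w in walls)
    return loss

def rx_power_dbm(tx_dbm, gain_tx_dbi, gain_rx_dbi, **kw):
    """Received power = transmit power + antenna gains - path loss."""
    return tx_dbm + gain_tx_dbi + gain_rx_dbi - path_loss_db(**kw)
```

For example, `rx_power_dbm(20.0, 3.0, 0.0, distance_m=10.0, walls=("concrete", "door_closed"))` estimates the signal at a receiver 10 m away behind a concrete wall and a closed door; comparing such estimates against receiver sensitivity gives a first-order coverage prediction per room.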

5 Interfaces and Devices for Multimodal Interaction

People naturally interact with the world multimodally, through both parallel and sequential use of multiple perceptual modalities. Multimodal human-computer interaction has sought for decades to endow computers with similar capabilities, in order to provide more natural, powerful, and compelling interactive experiences. With the rapid advance of non-desktop computing driven by powerful mobile devices and affordable sensors in recent years, multimodal interaction that leverages speech, touch, vision, and gesture is on the rise. Multimodal interfaces are interactive systems that seek to leverage natural human capabilities to communicate via speech, gesture, touch, facial expression, and other modalities, bringing more sophisticated pattern recognition and classification methods to human-computer interaction. While they are unlikely to fully displace traditional desktop and GUI-based interfaces, multimodal interfaces are growing in importance due to advances in hardware and software, the benefits they can provide to users, and their natural fit with the increasingly ubiquitous mobile computing environment (Cutugno et al., 2012). Multimodal interaction systems aim to support the recognition of naturally occurring forms of human language and behaviour through the use of recognition-based technologies. Multimodal interfaces are generally intended to deliver natural and efficient interaction, and multimodality offers several specific advantages: although the literature on formal assessment of multimodal systems is still sparse, various studies have shown that multimodal interfaces may be preferred by users over unimodal alternatives, can offer better flexibility and reliability, and can offer interaction alternatives that better meet the needs of diverse users with a range of usage patterns and preferences.
Humans may also process information faster and better when it is presented in multiple modalities (van Wassenhove et al., 2005). Other potential advantages of multimodal interfaces include the following (Oviatt et al., 2000):

- They permit the flexible use of input modes, including alternation and integrated use.
- They support improved efficiency, especially when manipulating graphical information.
- They can support shorter and simpler speech utterances than a speech-only interface, which results in fewer disfluencies and more robust speech recognition.
- They can support greater precision of spatial information than a speech-only interface, since pen input can be quite precise.
- They give users alternatives in their interaction techniques.
- They lead to enhanced error avoidance and ease of error resolution.
- They accommodate a wider range of users, tasks, and environmental situations.
- They are adaptable during continuously changing environmental conditions.
- They accommodate individual differences, such as permanent or temporary handicaps.
- They can help prevent overuse of any individual mode during extended computer usage.

While every combination of interface, task, user, and environment is different, making it difficult to draw general conclusions for the whole category, the trend of existing studies points to a wide range of reasons why the pursuit of multimodal interfaces will be advantageous to users. Not every interaction pattern of a training session can be addressed by ordinary devices; this is why, for example, an interactive vest was developed as part of the training session and as a communication channel between the team members in AUGGMED.

5.1 General possibilities of devices

Humans primarily interact with the world through their five major senses of sight, hearing, touch, smell, and taste. In perception, a mode or modality refers to receiving stimuli from a particular sense. A communication channel is a particular pathway through which information is transmitted. In typical HCI usage, a channel describes an interaction technique that utilizes a particular combination of user ability and device capability (such as a keyboard for inputting text, a mouse for pointing or selecting, or a 3D sensor for gesture recognition). In this view, the following are all channels: text (which may use multiple modalities when typing in or reading text on a monitor), sound, speech recognition, images/video, and mouse pointing and clicking. Multimodal interaction, then, may refer to systems that use either multiple modalities or multiple channels. Multimodal systems and architectures vary along several key dimensions or characteristics, including the number and type of input modalities; the number and type of communication channels; the ability to use modes in parallel, serially, or both; the size and type of recognition vocabularies; the methods of sensor and channel integration; and the kinds of applications supported. For AUGGMED, both input and output technologies are needed.
Input technologies such as speech, gesture recognition and haptic input are as important as output technologies such as multimedia and visualization. The overall goal of multimodal interaction is to fully support both directions of communication between human and machine, as well as to empower computers to support human-human multimodal interaction in AUGGMED. For the selection of appropriate devices to support activities within the training session, various categories are important. Various input and output devices specially designed for gaming are already available on the market. The control and communication devices can be grouped into the following categories:

Gameplay devices
Gameplay devices are used to control the avatar in a typical gaming situation. A number of shortcuts can be defined and used on a keyboard, while a gamepad supports fast and direct control of the avatar, its activities and interaction.

Communication devices
Communication within the team is very important to successfully finish the training session.

For AUGGMED, communication is threefold: the team members (red and blue) must communicate among each other; the trainer must communicate with the team; and the trainer must be able to address a dedicated person in the training session. This can be realized by wearing headphones or by using the two-way radio of the first responders.

Displays
The AUGGMED game can be visualized to the player on different output devices. The visualization can be done on monitors of different sizes (from smartphones to desktop computers), VR glasses and AR glasses, depending on the training session and the participation of the trainee.

Weapons
Weapons are also available for gaming, in the form of light guns for shooting simulators. Such a gun is a pointing device used as a control device for computers and video games; it is fully functional only in special set-ups.

Body protection
Body protection simulates hits and tactical exchange within the team, with protection available for all parts of the body.

Walking simulations
Full simulation of all movements within the simulated environment requires a flexible walking simulation device covering all parts of body movement.

Figure 40 Available input and output devices as examples for the categories

The images in the figure above show various input and output devices. Some of them are common commercial products, but there are also some special, uncommon devices that are very interesting for AUGGMED in order to realistically simulate all parts of the training: devices like weapons, body protection and movement simulators. Some of them can be used right out of the box after a short configuration, while special devices (like the vest) were developed within the project.

5.2 Integration of VR-Glasses

Despite significant progress on multimodal interaction systems, much work remains on integrating sophisticated multimodal interaction into the system. Each unimodal technology (speech and sound recognition, haptics, touch-based gesture, user modelling, context modelling, etc.) is a specific area for integration. Multimodal integration methods and architectures need to explore a

wider range of methods and modality combinations. Most current systems integrate only two modalities, such as speech along with touch or visual gesture. As part of the AUGGMED system we will continuously integrate more and more modalities, starting with the common modalities of control and visualization to support the gameplay and the projection of the serious game. Later on, more complex and realistic devices like weapons and body protection will be included. The simulation environment of the latest version is designed for a standard desktop PC set-up. The second version already contained a first beta version running on VR glasses and on a smartphone. As part of the desktop PC set-up, common devices like keyboard, mouse and monitors are used; all control activities are covered by the control shortcuts of such devices. The control shortcuts can be configured individually to support an optimized training session. The second version was also extended with a beta version of a touch control element running on a smartphone, allowing trainees to participate. Furthermore, support for VR glasses was introduced with the second version, starting with the Oculus Rift, since it was the first to be available as a developer version, albeit with high hardware requirements. The Oculus is supported by the Unity engine and allows a 360° view of the surroundings; its control element navigates the avatar within the training environment. The hardware requirements for the Rift are high, so that only special gaming computers fulfil all requirements for a seamless gaming experience. To reduce the hardware requirements, another VR headset was selected and tested: the HTC Vive. The HTC Vive has lower hardware requirements and is also supported by the Unity engine.

Figure 41 VR glasses: Oculus Rift (left), HTC Vive (middle) and HTC Vive controller (right)

The features of the AUGGMED system can be used fully with the HTC Vive, and all scenarios can be played with it.
The experience of the VR view of the scenario is impressive and raised the first responders' common understanding of its usage. The control elements are fully covered by the HTC Vive controller; for example, to tag a person, the controller needs to be placed close to that person. Interactions such as bending down, throwing or grabbing become more realistic. The HTC Vive cannot only be used for VR, but also for a Mixed Reality (MR) view on-site at a training place. Since the camera on the front of the HTC Vive does not offer an adequate resolution, an HD camera is attached to the glasses; its stream is displayed inside the glasses so that the wearer can see the surroundings. Additional training models can be placed on top of the stream to obtain an augmented

reality view. Figure 42 shows an experimental view of this effect, using the same models as in the AUGGMED training sessions.

Figure 42 HTC Vive used for mixed reality (experimental version)

5.3 Development of Haptic Vest

This section focuses on the development of a haptic vest that provides tactile and thermal stimuli from virtual reality to the skin of the user's torso. The vest can display three kinds of stimuli: tactile, thermal and impact. The use and combination of these stimuli increase realism and the sense of presence inside the virtual environment. Tactile stimulation is reproduced with vibrotactile actuators (vibration motors). These motors work at different frequencies and amplitudes, causing a wide range of sensations in users, for instance when a member of a training team gives a tactile signal to another. Thermal effects allow users to perceive cold or heat, reproducing, for example, the heat generated by a fire. Impact effects are displayed with newly created actuators to convey realistic sensations like the impact of a bullet, shrapnel or other objects. The distribution of actuators over the vest was carried out after performing several tests. The aim of these tests was to determine the vibration and thermal resolution in the body areas where actuators are going to be placed. Since impact actuators are larger than vibration and thermal actuators, they have to be placed at specific locations. The impact actuators are therefore distributed at specific points on the vest, at considerable distances from each other; in this way there will be no two overlapping impact sensations, and it is not necessary to determine the impact resolution of the placement areas. Tactile and thermal stimulation, however, require skin perception tests in order to determine the number of actuators precisely.
If the distribution is adequate, it is possible to create correct haptic patterns, achieving sensations that users can interpret as similar to real ones.
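As an illustration, the three stimulus classes introduced above (tactile, thermal, impact) could be represented as commands sent from the game engine to the vest. This is a hypothetical sketch; the actual AUGGMED message format between Unity and the vest firmware is not specified here, so all names and the frame layout are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class StimulusType(Enum):
    TACTILE = 1   # vibrotactile motors
    THERMAL = 2   # Peltier cells (hot or cold)
    IMPACT = 3    # spring-loaded impact actuator

@dataclass
class StimulusCommand:
    kind: StimulusType
    zone: int          # torso area / slave module addressed
    intensity: float   # 0.0-1.0, mapped to PWM duty or target temperature
    duration_ms: int

def encode(cmd: StimulusCommand) -> bytes:
    """Pack a command into a compact frame for the wireless link."""
    level = max(0, min(255, int(cmd.intensity * 255)))
    return bytes([cmd.kind.value, cmd.zone, level]) + cmd.duration_ms.to_bytes(2, "big")

# A 2-second thermal stimulus at half intensity in zone 3 becomes a 5-byte frame.
frame = encode(StimulusCommand(StimulusType.THERMAL, zone=3, intensity=0.5, duration_ms=2000))
```

A compact fixed-size frame like this keeps the wireless traffic small, which matters when many stimuli are triggered per second during a session.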

The game engine generates hazards and interactions in real time with a virtual avatar, and the vest simulates the effects of those interactions over the whole trunk skin, allowing users to feel as if they were inside the game. In building the vest, and considering that the vest is a complete hardware system (like a head-mounted display), some parameters have to be considered: manoeuvrability, comfort, cost, power consumption, etc. Throughout the different design and construction stages, all of those parameters have been evaluated, and the research group has taken decisions in order to configure the vest as a tool for increasing the realism of virtual environments and the sense of presence therein. The first research conducted for the development of the haptic vest was the selection of tactile and thermal actuators. Subsequently, a first prototype was assembled using only these two kinds of actuators; an electronic system was created to control the complete device, communicating over cables and managed by commercial microcontroller boards (Arduino), and the vest was powered from wall outlets. This version was used in Pilot 2. Afterwards, having analyzed the issues and possible improvements of the first prototype, a second version was created which will be used in Pilot 3. This new vest includes the impact actuator, achieving the original objective of three kinds of feedback: tactile, thermal and impact effects. Moreover, the distribution of actuators and the electronic system have been changed to avoid the use of commercial microcontroller boards, substituting them with directly programmed microcontrollers. The new design of the electronic system also includes wireless communications, so that the vest does not need any external cable and can be used much more easily.
Moreover, the latest version uses batteries included inside the vest, removing all external cables. As a result, the weight and volume of the vest have been minimized, while its mobility (thanks to the wireless communications) and energy efficiency have been maximized, achieving a successful integration of the vest into the AUGGMED virtual environment.

5.3.1 Vest requirements

Haptic technology has received a great boost in recent years, and several research groups have studied it as a way to obtain enhanced results in human-machine interaction [Jarillo Silva, 2009], [Kuschel, 2010]. Virtual reality, meanwhile, is an emergent technology with a wide range of applications. One of these applications is serious games: games designed for non-entertainment purposes. Combining both technologies, it is possible to build systems where haptic technology produces a significant improvement in the immersion and realism of the virtual environment [Galiana Bujanda, 2013]. A vest for mixed-reality environments must improve immersion and realism in the virtual world. The main objective is to develop a vest that generates several haptic effects for the serious-game users: the haptic vest creates different types of feedback on the skin of the trainee's torso, increasing the immersion and realism of the virtual interaction. Some haptic vests have already been developed, like the TactaVest [Lindeman, 2004], a tactile vest for astronauts [Van Erp, 2003], or a vest that uses different actuation methods distributed over several trunk areas [Jones, 2004]. At the design stage, actuators are distributed over the entire torso, so that each actuator generates a different stimulus in a specific area. However, there are no studies about the two-point discrimination

distance with vibration or thermal actuators; there is only a study about vibration distances on the back, which is not applicable in this case [Eskildsen, 1969]. Integration of the vest into a virtual reality environment is another goal of this development. Oriented towards the training of security forces in a serious game, users can wear the vest and feel the touch of the virtual reality, receiving all the stimuli that their avatar is experiencing, which improves immersion and realism during training. For the serious-game environment, the videogame tool Unity 3D is used: it allows the creation of interactions with external devices like the vest, and the hardware must be controlled from Unity in real time. The haptic interface for the trunk integrates a number of actuators for tactile, impact and thermal stimuli to reproduce skin sensations in typical environments. The vest must satisfy design specifications to be wearable, having low weight and low volume to avoid user discomfort. Moreover, the wearer must be able to perceive a series of tactile sensations with good resolution, so the distribution of actuators over the entire surface is very important in the design of the haptic vest:

- Tactile feedback: intended to create touch-based feedback to achieve communication between team members training in virtual reality. In this case, vibrotactile actuators can be used.
- Thermal feedback: the user must feel sensations of cold and heat, which means using thermal actuators.
- Impact feedback: in the AUGGMED serious game, the trainee should feel the effects of impacts. The different possibilities have to be analyzed in order to propose the most appropriate one.
Other aspects must also be considered at this design stage: accessibility, wearability, comfort, sizing, cost, security, safety, real-time control, portability, power consumption, durability, skin perception characteristics, etc. The latest haptic vest prototype is shown in Figure 43.

Figure 43 Vest prototype

5.3.2 Stimuli generator hardware in the vest

In this section, all haptic actuators and their driver electronics are detailed.

Electronics

The final version of the electronic system is based on microcontrollers that control the vest and all its actuators with a high level of reliability. To this end, several printed circuit boards (PCBs) have been designed using surface-mount technology (SMT) components in order to minimize size and achieve a simple and efficient system. The architecture is a master-slave system: the master receives information from the virtual environment (VE), acting as a VE slave. This information is received from the VE through a Bluetooth module and is then sent to the rest of the slaves using a MiWi module, based on the MiWi wireless communication protocol. The master therefore performs the information management, sending the corresponding commands to the different slaves. Finally, every slave controls the actuators in a specific area of the vest, yielding a modular system and avoiding likely operational issues. Every slave contains a MiWi module that receives the information provided by Unity; the slave then controls the actuators depending on the command received from the VE. Every module can control up to 9 vibration motors, a Peltier cell and an impact actuator, and it can be expanded to control a higher number of actuators. All the slaves are independent of each other, so the failure of an individual slave only disables a single area of the vest and allows the remaining areas to continue working. The main secondary circuits in the slave modules are:

- Power H-bridge: this circuit drives the Peltier cell and switches its polarity, allowing the cell to generate cold and heat on the same side.
- Current limiter: this circuit controls the amount of current driven through the Peltier cell using a PWM output of the microcontroller, allowing control of the temperature conveyed to the user.

- Temperature control: this circuit controls the specific temperature of the Peltier cell side, using a temperature sensor (Pt100) and a regulator.
- Vibration controller: this circuit controls the motor operation using a PWM output of the microcontroller.
- EEPROM memory: this integrated circuit and its corresponding circuitry store all the needed information, avoiding the exchange of large amounts of information between master and slaves, which could cause delays and incorrect operation.

Tactile actuators

Two different actuation methods were considered to generate tactile stimuli: electrical muscle stimulation (EMS) and vibration motors. After analyzing both methods, it was decided to use vibration motors because the induced sensations are more reliable and comfortable for users. After several tests, two motors were chosen out of 10 different motors. Linear resonant actuators were also tested, but were ruled out because of their low vibration intensity, which is not easily perceivable in the stimulated areas. The two chosen motor models are both from Precision Microdrives Ltd.

Figure 44 Motors placed on the vest

The choice is due to the high frequencies that these motors can reach, allowing the easy creation of haptic sensations. Both models and their characteristics are shown in Figure 44 and Table 2, respectively.

Table 2 Characteristics of vibrotactile actuators

Finally, the decision was made to use the smaller model, since the other model is too big to be included in the vest and too noisy to achieve comfortable sensations when vibrating.

As can be seen in Table 2, the maximum frequency of the discarded motor is larger than that of the chosen model, which could be useful for generating a larger range of sensations on the skin, but the negative aspects mentioned in the previous paragraph led to the decision to exclude that motor from the first prototype of the vest. Finally, these actuators are controlled by a Microchip microcontroller with several pulse-width modulation (PWM) outputs that are used to vary the motors' working frequency. The control of the vibration actuators allows the creation and development of haptic patterns that can be displayed to users, inducing realistic sensations that can be understood as signals, contacts or any form of touch-based communication between different system users and virtual avatars. The haptic patterns are vibration sequences that, correctly configured, reproduce realistic sensations. To this end, depending on the signal to be displayed, operation times and frequencies are defined for every motor on the vest. These parameters are programmed into the microcontroller, making the creation of haptic patterns straightforward.

Thermal actuators and materials

Thermal conductivity studies are very relevant for creating a more immersive environment, especially when there is a temperature gradient with respect to the trainee's body. These studies include the actuators as well as the materials and fabrics that transmit the temperature to the skin. Peltier cells were chosen to generate thermal feedback because of their capability to create hot and cold sensations with a single device; other methods, such as resistors, can only generate hot sensations on the skin. In addition, Peltier cells are easily controllable using a microcontroller and a simple power circuit. After several tests (maximum and minimum reachable temperature, cold sensation time) performed with 10 different Peltier cell models, one was chosen for the first iterations of the vest development; it is shown in Figure 45.
Its characteristics can be seen in Table 3.

Figure 45 Peltier cells placed on the vest

Table 3 Characteristics of the Peltier cell used during the experiments

Characteristic                        Peltier cell
Maximum cooling power (W)             21.4
Maximum voltage (V)                   4
Maximum current (A)                   9
Maximum temperature difference (K)    77

The Peltier cell handles an average power that allows reaching low temperatures on one face against the heat on the other. The maximum temperature difference can produce temperatures sufficiently high to be noticed by the human skin, and the cell's size allows a correct integration into a wearable device. Some research was carried out on fabrics with silver threads, which have a number of features and advantages: apart from reducing odours and controlling bacteria, they offer better heat conduction. Two models of thermally conductive fabrics from HITEK Electronic Materials Ltd were tested, one with copper wire and one with silver wire, but with unsatisfactory results, since no improvement was noticed in operation or in the heat transfer from the Peltier cell. Further research was performed on flexible, wearable materials with the best possible thermal conductivity for applying the temperature to broader areas; however, the results were not satisfactory, and no materials meeting these requirements were found. After this research, the final version of the vest has the Peltier side placed directly on the user's body, with only the clothes the user is wearing at that moment in between. In this way, maximum heat transmission is ensured from the cell to the user's body. The side in contact with the user contains a temperature sensor that continuously controls the temperature applied to the user, avoiding overheating and exceeding the thermal pain threshold of human perception. This sensor has a minimal size, so it does not pose any issue when the cell transmits heat towards the human body. The Peltier cells are cooled using a metallic heat sink attached to the side that is not in contact with the user; the other side of the sink is uncovered to achieve better heat dissipation.
In this way, the exposure time when the Peltier cell displays cold is increased without an uncontrolled temperature rise on the side generating the cold; when the Peltier cell displays heat, the sink does not affect performance considerably. Just as with the haptic vibration patterns, the Peltier cells work depending on the interactions or stimulations provided by the VE: the VE sends the information about a thermal stimulus, and the corresponding Peltier cells reproduce the commanded temperature, keeping it stable for the necessary time. This control is performed using the temperature sensor and the regulator previously described.

Impact actuators

Impact feedback is based on actuators that generate sensations as similar as possible to the impact of a bullet or of shrapnel produced by a nearby explosion. This kind of interaction could be considered tactile feedback because it stimulates the mechanoreceptors, just as a vibration does; however, the feedback

processes are different, because those receptors are not stimulated in the same way: a vibration cannot generate the same force as an impact, being considerably weaker. It is important to point out that painful sensations must not be displayed, so it is necessary to determine the maximum displayable force with accuracy; impact forces below this maximum do not create unpleasant sensations for users. Considering all of the above, a new impact actuator has been created that displays sensations similar to real impacts on the user. Several methods were analyzed in order to select the best option for the impact actuator, such as magnets or pneumatic systems. However, some of these methods were discarded: magnets generate impact forces that are too low, whereas pneumatic or hydraulic systems require additional material like ducts, valves, compressors, pumps, etc., making the system neither wearable nor comfortable. A mechanical system was therefore identified as the most feasible solution for an impact actuator that creates impact sensations within the human pain thresholds for force. The actuator is based on a mechanical system made up of a motor, an eccentric cam, a spring and a firing pin. The motor rotates for a predetermined time and the cam rotates together with it; the cam compresses the spring, via the firing pin, up to its maximum compression, at which point the motor stops. When the electronic system sends the order to carry out an impact, the motor continues rotating and the cam releases the firing pin, producing the impact.
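The load-and-fire cycle described above can be sketched as a small state machine. This is illustrative logic only, not the project's firmware; `run_motor` is a placeholder for driving the real gearmotor through one phase of the cam rotation:

```python
from enum import Enum

class State(Enum):
    UNLOADED = 0  # spring relaxed, pin released
    LOADED = 1    # spring at maximum compression, motor stopped

class ImpactActuator:
    def __init__(self, run_motor):
        self.run_motor = run_motor  # callback: rotate the cam for one phase
        self.state = State.UNLOADED
        self.load()                 # the actuator is kept loaded between impacts

    def load(self):
        """Rotate the cam until the spring is fully compressed, then stop."""
        if self.state is State.UNLOADED:
            self.run_motor("compress")  # cam pushes the firing pin back
            self.state = State.LOADED

    def fire(self):
        """On a command from the VE, release the pin and immediately re-arm."""
        if self.state is State.LOADED:
            self.run_motor("release")   # cam passes its apex, pin strikes
            self.state = State.UNLOADED
            self.load()                 # reload for the next impact
            return True
        return False

log = []
actuator = ImpactActuator(log.append)
actuator.fire()
print(log)  # ['compress', 'release', 'compress']
```

Keeping the actuator loaded after every impact, as the text notes, is what allows the strike to happen with minimal latency when the command arrives from the virtual system.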
The motor has to integrate a reduction gearbox in order to achieve enough torque to compress the spring completely; moreover, the motor and the spring must be carefully selected to ensure that the impact displayed on the user is easily perceivable while the force stays below the established thresholds. The complete control process is as follows: the VE sends a signal that is transmitted through the Bluetooth and MiWi protocols to the corresponding module; this module sends the signal to the impact actuator, and the mechanical system releases the firing pin to generate the impact. The actuator is always kept loaded, from the beginning of operation or from the previous impact, in order to achieve the best possible synchronization with the virtual system. Therefore, when the actuator completes an impact, the spring is reloaded as previously described. The components used for assembling the actuator have either been purchased or prototyped with a 3D printer: the motor and the spring are commercial components, whereas the cam, the firing pin and the encapsulating structure are printed. The design of the impact actuator is reduced to the minimum size in order to integrate it into the haptic vest; as long as the actuator is small, its integration does not increase weight or discomfort. Finally, the system can be controlled through two methods: on the one hand, the slave PCB can control the actuator directly, sending the corresponding signal so that the motor compresses the spring and prepares the actuator for the next use; on the other hand, it is possible to integrate a small electronic circuit inside the actuator itself, improving the reliability of compressions and impacts, although this increases the actuator's size. The method used thus depends on the vest application and the number of impact actuators in the haptic vest. The complete system is shown in Figure 46.

Figure 46 Impact actuator

Actuators distribution

The final version of the vest has a distribution of actuators that covers the whole torso of the user, so that haptic patterns can be reproduced over any area and the haptic feedback is more realistic. Each slave PCB controls a fixed set of actuators (9 vibration motors, one Peltier cell and one impact actuator), so the distribution was designed around this control scheme: the 11 actuators need to be close to their PCB. Although every PCB controls eleven actuators, the distribution is organized in such a way that all areas of the vest are completely covered according to the resolution parameters previously addressed. These parameters were obtained through experiments that determined the two-point discrimination distance for vibration and thermal actuators. The experiments were performed in different areas of the human torso, producing a resolution map that was used to distribute the actuators properly: the discrimination distances obtained determine the resolution distance in each area and, therefore, the actuator distribution required for haptic patterns to be perceived adequately. The values obtained for vibration motors are around 60 millimeters; two motors working at a smaller distance are not individually perceived by users, so haptic patterns would not be perceived correctly and the user would not feel the intended sensation. Following these results, the motors were placed 75 millimeters apart in a triangular configuration. In this way, it is possible to display both complex haptic patterns and generalized vibration sensations.
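The 75 mm triangular layout can be illustrated by generating candidate motor positions for a flat panel and checking that no pair falls below the 60 mm discrimination distance. This is a geometric sketch only; the panel dimensions are hypothetical:

```python
import math

SPACING_MM = 75.0          # chosen motor spacing (from the experiments)
DISCRIMINATION_MM = 60.0   # two-point discrimination distance for vibration

def triangular_grid(width_mm, height_mm, spacing=SPACING_MM):
    """Generate motor positions on a triangular (offset-row) lattice."""
    row_height = spacing * math.sqrt(3) / 2   # vertical distance between rows
    points = []
    y, row = 0.0, 0
    while y <= height_mm:
        x = (spacing / 2) if row % 2 else 0.0  # offset every other row
        while x <= width_mm:
            points.append((x, y))
            x += spacing
        y += row_height
        row += 1
    return points

panel = triangular_grid(300, 450)  # hypothetical 30 cm x 45 cm torso panel
# Every pair of motors must sit beyond the discrimination distance,
# so each motor remains individually perceivable.
assert all(
    math.dist(p, q) > DISCRIMINATION_MM
    for i, p in enumerate(panel) for q in panel[i + 1:]
)
```

In a triangular lattice with 75 mm spacing, the closest pair of motors is exactly 75 mm apart (both within a row and across adjacent offset rows), which is why the 60 mm threshold is satisfied everywhere.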
Moreover, provided that the patterns are properly configured, the sensation will be realistic, since the individual vibrations will be differentiable by users. The experiments for vibration motors were performed with two different kinds of motors in order to find out whether discrimination distances vary depending on the actuator: the two motors have different rated frequencies, and human perception might interpret the vibration in

different ways. However, it was verified that different frequencies do not cause variations in the perception of the vibration. Figure 47 shows the distribution of the motors over the vest.

Figure 47 Distribution of vibration motors

Regarding the thermoelectric actuators, the resolution experiments show that a high percentage of users can differentiate temperature sensations if the actuators are placed at distances equal to or greater than 15 centimeters. The thermal actuators are therefore placed at that distance (15 cm), also attending to efficiency and space issues; this allows the creation of both generalized heat or cold sensations and complex thermal haptic patterns when such patterns are needed to reproduce an interaction from the VE. Figure 32 shows the distribution of the Peltier cells over the vest. The ensemble of a PCB and the actuators it controls is called a module, and the modules are distributed over all areas of the vest. Each module can reproduce vibration and thermal patterns, and both kinds of stimulation can be combined to create combined haptic patterns, offering more interaction possibilities. The definitive version of the vest includes 8 modules (4 in the front of the vest and 4 in the rear), so 72 vibration motors and 8 Peltier cells are distributed all over the vest. Every actuator is fitted in a 3D-printed support, and the supports are attached to the vest with different methods such as sewing or Velcro; that is, every motor and every Peltier cell is mounted in its own support. The electronic system is likewise fitted in a support and is placed around the Peltier cell and its heat sink. The wiring between the actuators and the electronic system is fitted to the fabric of the vest and guided using printed structures, avoiding likely issues with the cables (breaks, stripped wires, etc.).
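The totals quoted above follow directly from the per-module composition, as a simple consistency check shows:

```python
MODULES = 8                 # 4 front + 4 rear
MOTORS_PER_MODULE = 9       # vibration motors per slave PCB
PELTIER_PER_MODULE = 1
IMPACT_PER_MODULE = 1

vibration_motors = MODULES * MOTORS_PER_MODULE   # 8 * 9 = 72
peltier_cells = MODULES * PELTIER_PER_MODULE     # 8
impact_actuators = MODULES * IMPACT_PER_MODULE   # 8
actuators_per_pcb = MOTORS_PER_MODULE + PELTIER_PER_MODULE + IMPACT_PER_MODULE

assert vibration_motors == 72
assert peltier_cells == 8
assert actuators_per_pcb == 11   # matches the eleven actuators per PCB
```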

Figure 32 Distribution of Peltier Cells

The modules are powered by batteries, carefully selected to keep the vest operating over prolonged periods, taking into account the consumption of all the actuators: Peltier cells, vibration motors, impact actuators, the electronic system, etc. The batteries are placed in the lateral area of the vest, removing all external cables (the vest has to be wireless) and achieving good wearability and usability, since all the systems are included in the vest itself. Finally, no study was performed on resolution distances for impact actuators, since these stimuli are not intended to be created in areas very close to each other. Just as with the Peltier cells, every module contains one impact actuator; in this way all areas of the vest have their own actuators, and several impacts can be reproduced at the same time if the VE requires it, achieving sensations as realistic as possible. Moreover, the impact actuators are larger than the other actuators, so including more than one per module would increase size, weight and discomfort.

Haptic Vest Integration in Unity

The communication between the vest electronics and the VE software has been implemented in two ways: a USB cable from the computer or another user interface, and Bluetooth communication. The USB cable was the communication method for Pilot 2, whereas Bluetooth is the method selected for Pilot 3 and the definitive version of the vest, achieving better wearability and user experience. Two kinds of communication with Unity have been implemented. On one hand, an interface has been created that allows the actuators to work without being connected to any VE; this interface is used for demonstrations and tests. On the other hand, the vest can be connected to the AUGGMED VE, allowing the user to perceive all the interactions and stimulations provided by the virtual world.
These interactions and stimulations are predefined and stored in the vest electronics, which allows them to be reproduced whatever the VE is. The number of stored sensations can be increased using the corresponding programming interface, and these sensations can also be reproduced through the interface without any connection to the VE.
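The predefined-pattern mechanism can be summarized as a lookup from VE events to pattern identifiers that are then transmitted to the vest electronics. This is a hedged sketch: the event names, pattern IDs and transport stub are all hypothetical, and the real system runs inside Unity rather than Python:

```python
# Hypothetical identifiers for patterns stored in the vest electronics.
PATTERNS = {
    "touch": 1,          # touch-based communication -> vibration pattern
    "bullet_impact": 2,  # bullet hit -> impact actuator bump
    "heat": 3,           # temperature rise -> Peltier cells heat up
    "cold": 4,           # temperature drop -> Peltier cells cool down
}

def event_to_pattern(event: str) -> int:
    """Map a VE event to the pattern variable sent to the vest."""
    if event not in PATTERNS:
        raise ValueError(f"no haptic pattern defined for event {event!r}")
    return PATTERNS[event]

def send_to_vest(pattern_id: int, transport):
    # `transport` stands in for the Bluetooth (or USB) link to the slave PCBs.
    transport.append(pattern_id)

link = []  # stub transport: records what would be transmitted
send_to_vest(event_to_pattern("bullet_impact"), link)
assert link == [2]
```

Because the patterns themselves live in the vest electronics, only a small identifier crosses the link, which keeps the communication fast enough for the sensation to stay synchronized with the virtual event.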

Figure 48 Unity Interface for Tests

During the integration of the haptic vest with the VE, different areas of the virtual avatar controlled by the user, named colliders, have been defined. When an event happens in the VE and affects the user, an interaction between virtual colliders is produced, which generates an event that is transmitted to the vest. For instance, when an event related to touch-based communication occurs, the avatar collider collides with a collider defining an object or another avatar; the system interprets this collision and sends a variable to the vest electronics, and the electronic system then reproduces the selected haptic pattern. In the same way, when the avatar receives a bullet impact, the user feels a bump from the impact actuator. Finally, regarding thermal sensations, the VE defines an environmental temperature that varies depending on the environmental conditions; when a significant variation in temperature occurs, the system sends a signal to the vest electronics, which tells the Peltier cells the specific temperature they have to reach. If the thermal stimulus is isolated, the system follows the same process as for impacts. Figure 49 shows the result of the integration of the vest in Unity.

Figure 49 Vest integration in Unity 5

There is therefore a correlation between virtual events and previously created haptic sensations: Unity sends the vest a variable indicating the pattern or haptic sensation to be reproduced, and the vest's electronic system reproduces it through the actuators. The communication is fast, so the sensations are synchronized with the VE and the events that occur in it. This VE is seen by the user


More information

CEPT WGSE PT SE21. SEAMCAT Technical Group

CEPT WGSE PT SE21. SEAMCAT Technical Group Lucent Technologies Bell Labs Innovations ECC Electronic Communications Committee CEPT CEPT WGSE PT SE21 SEAMCAT Technical Group STG(03)12 29/10/2003 Subject: CDMA Downlink Power Control Methodology for

More information

Performance review of Pico base station in Indoor Environments

Performance review of Pico base station in Indoor Environments Aalto University School of Electrical Engineering Performance review of Pico base station in Indoor Environments Inam Ullah, Edward Mutafungwa, Professor Jyri Hämäläinen Outline Motivation Simulator Development

More information

Index. Linear Booth, Corner Booth and Perimeter Booth 2. End-cap Booth and Peninsula Booth 3. Split Island Booth and Island Booth 4

Index. Linear Booth, Corner Booth and Perimeter Booth 2. End-cap Booth and Peninsula Booth 3. Split Island Booth and Island Booth 4 Index Linear Booth, Corner Booth and Perimeter Booth 2 End-cap Booth and Peninsula Booth 3 Split Island Booth and Island Booth 4 Other Important Considerations 5 Issues Common To All Booth Types 6-7 The

More information

Active Shooter Situations. Ernest Valverde, CrossRoad United Methodist Church Jacksonville, FL

Active Shooter Situations. Ernest Valverde, CrossRoad United Methodist Church Jacksonville, FL + Active Shooter Situations Ernest Valverde, CrossRoad United Methodist Church Jacksonville, FL + Objectives The objective of this training is to give the staff and employees of CrossRoad United Methodist

More information

Pedestrian Simulation in Transit Stations Using Agent-Based Analysis

Pedestrian Simulation in Transit Stations Using Agent-Based Analysis Urban Rail Transit (2017) 3(1):54 60 DOI 10.1007/s40864-017-0053-5 http://www.urt.cn/ ORIGINAL RESEARCH PAPERS Pedestrian Simulation in Transit Stations Using Agent-Based Analysis Ming Tang 1 Yingdong

More information

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL

More information

TRBOnet Enterprise/PLUS

TRBOnet Enterprise/PLUS TRBOnet Enterprise/PLUS Guard Tour User Guide Version 5.2 World HQ Neocom Software 8th Line 29, Vasilyevsky Island St. Petersburg, 199004, Russia US Office Neocom Software 15200 Jog Road, Suite 202 Delray

More information

TRBOnet Guard Tour Configuration and Operation Guide

TRBOnet Guard Tour Configuration and Operation Guide TRBOnet Guard Tour and Operation Guide Version 5.0 World HQ Neocom Software 8th Line 29, Vasilyevsky Island St. Petersburg, 199004, Russia US Office Neocom Software 15200 Jog Road, Suite 202 Delray Beach,

More information

Haptic presentation of 3D objects in virtual reality for the visually disabled

Haptic presentation of 3D objects in virtual reality for the visually disabled Haptic presentation of 3D objects in virtual reality for the visually disabled M Moranski, A Materka Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND marcin.moranski@p.lodz.pl,

More information

Improving Performance through Superior Innovative Antenna Technologies

Improving Performance through Superior Innovative Antenna Technologies Improving Performance through Superior Innovative Antenna Technologies INTRODUCTION: Cell phones have evolved into smart devices and it is these smart devices that have become such a dangerous weapon of

More information

Emergency Preparedness and Planning: GIS and Critical 360. Samantha Luckhardt EGIS Supervisor City of Baltimore, Maryland

Emergency Preparedness and Planning: GIS and Critical 360. Samantha Luckhardt EGIS Supervisor City of Baltimore, Maryland Emergency Preparedness and Planning: GIS and Critical 360 Samantha Luckhardt EGIS Supervisor City of Baltimore, Maryland www.sitacgroup.com Types of Threats Have Changed Change in Response Tactics Response

More information

IoT Wi-Fi- based Indoor Positioning System Using Smartphones

IoT Wi-Fi- based Indoor Positioning System Using Smartphones IoT Wi-Fi- based Indoor Positioning System Using Smartphones Author: Suyash Gupta Abstract The demand for Indoor Location Based Services (LBS) is increasing over the past years as smartphone market expands.

More information

Trunking Information Control Console

Trunking Information Control Console Trunking Information Control Console One Touch Communication and Control In a TICC we can: Initiate a call in one touch Send a status in one touch Call a group of users in one touch See what type of call

More information

Network Standard NS

Network Standard NS Network Standard NS 21-2006 Artwork on Western Power Assets Technical Requirements for application to South West Interconnected System (SWIS) DMS #1049174 NS 21-2006 Artwork on Western Power Assets REVISION

More information

Visual & Virtual Configure-Price-Quote (CPQ) Report. June 2017, Version Novus CPQ Consulting, Inc. All Rights Reserved

Visual & Virtual Configure-Price-Quote (CPQ) Report. June 2017, Version Novus CPQ Consulting, Inc. All Rights Reserved Visual & Virtual Configure-Price-Quote (CPQ) Report June 2017, Version 2 2017 Novus CPQ Consulting, Inc. All Rights Reserved Visual & Virtual CPQ Report As of April 2017 About this Report The use of Configure-Price-Quote

More information

The Field of Systems Management, Graduate School of Engineering, Nagoya Institute of Technology, Nagoya, Aichi , Japan

The Field of Systems Management, Graduate School of Engineering, Nagoya Institute of Technology, Nagoya, Aichi , Japan Computer Technology and Application 7 (2016) 227-235 doi: 10.17265/1934-7332/2016.05.001 D DAVID PUBLISHING valuation of Behavior of vacuees on a Floor in a Disaster Situation Using Multi-agent Simulation

More information

Evaluation of Guidance Systems in Public Infrastructures Using Eye Tracking in an Immersive Virtual Environment

Evaluation of Guidance Systems in Public Infrastructures Using Eye Tracking in an Immersive Virtual Environment Evaluation of Guidance Systems in Public Infrastructures Using Eye Tracking in an Immersive Virtual Environment Helmut Schrom-Feiertag 1, Christoph Schinko 2, Volker Settgast 3, and Stefan Seer 1 1 Austrian

More information

Case Study - Safeguarding. Case Study Safeguarding

Case Study - Safeguarding. Case Study Safeguarding Case Study - Safeguarding Paul Santi Director - Engineering FANUC America Corp. October 14 th 16 th, 2013 ~ Indianapolis, Indiana USA Case Study Safeguarding Professional Background: Mechanical Engineering

More information

Battery-Free Wireless Pushbutton Useful Tips for Reliable Range Planning

Battery-Free Wireless Pushbutton Useful Tips for Reliable Range Planning Battery-Free Wireless Pushbutton Useful Tips for Reliable Range Planning,, 2010-11-12,, leipzig@schlegel.biz, www.schlegel.biz 1. INTRODUCTION Compared to wireline systems, wireless solutions enable convenient

More information

STATE OF THE ART 3D DESKTOP SIMULATIONS FOR TRAINING, FAMILIARISATION AND VISUALISATION.

STATE OF THE ART 3D DESKTOP SIMULATIONS FOR TRAINING, FAMILIARISATION AND VISUALISATION. STATE OF THE ART 3D DESKTOP SIMULATIONS FOR TRAINING, FAMILIARISATION AND VISUALISATION. Gordon Watson 3D Visual Simulations Ltd ABSTRACT Continued advancements in the power of desktop PCs and laptops,

More information

Virtual Reality Calendar Tour Guide

Virtual Reality Calendar Tour Guide Technical Disclosure Commons Defensive Publications Series October 02, 2017 Virtual Reality Calendar Tour Guide Walter Ianneo Follow this and additional works at: http://www.tdcommons.org/dpubs_series

More information

ADVANCED WHACK A MOLE VR

ADVANCED WHACK A MOLE VR ADVANCED WHACK A MOLE VR Tal Pilo, Or Gitli and Mirit Alush TABLE OF CONTENTS Introduction 2 Development Environment 3 Application overview 4-8 Development Process - 9 1 Introduction We developed a VR

More information

800 System Procedures

800 System Procedures Emergency Button Activation: 800 System Procedures All ACFR radios are equipped with emergency button functionality. When this button is activated by the end-user, an audible alarm and a flashing visual

More information

Using sound levels for location tracking

Using sound levels for location tracking Using sound levels for location tracking Sasha Ames sasha@cs.ucsc.edu CMPE250 Multimedia Systems University of California, Santa Cruz Abstract We present an experiemnt to attempt to track the location

More information

Showcase your venue and add value to your Accessibility Guide

Showcase your venue and add value to your Accessibility Guide Photography Guide Showcase your venue and add value to your Accessibility Guide High quality photographs are a great way to showcase your venue and help set visitor expectations. Your photographs can reassure

More information

Performance Analysis of Ultrasonic Mapping Device and Radar

Performance Analysis of Ultrasonic Mapping Device and Radar Volume 118 No. 17 2018, 987-997 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Performance Analysis of Ultrasonic Mapping Device and Radar Abhishek

More information

Specifying, predicting and testing:

Specifying, predicting and testing: Specifying, predicting and testing: Three steps to coverage confidence on your digital radio network EXECUTIVE SUMMARY One of the most important properties of a radio network is coverage. Yet because radio

More information