CARE INO III 3D IN 2D PLANAR DISPLAY PROJECT D1.2: INNOVATIVE CONCEPTS AND THEIR DESIGN RATIONALE


CARE INO III 3D IN 2D PLANAR DISPLAY PROJECT
D1.2: INNOVATIVE CONCEPTS AND THEIR DESIGN RATIONALE

Effective Date: 24/08/07

Authors (Organisation):
- Simone Rozzi (Middlesex University)
- Alessandro Boccalatte (Space Application Services)
- Paola Amaldi (Middlesex University)
- Bob Fields (Middlesex University)
- Martin Loomes (Middlesex University)
- William Wong (Middlesex University)

D1.2_Innovative Concepts and Their Design Rationale_v0.9.doc

DOCUMENT CONTROL

Copyright notice
© 2007 European Organisation for the Safety of Air Navigation (EUROCONTROL). All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of EUROCONTROL.

Edition history
- July 07 - Simone Rozzi
- Aug 07 - Alessandro Boccalatte, Simone Rozzi: integrated description of implementation of innovative concepts provided by Space Application
- Aug 07 - Alessandro Boccalatte, Simone Rozzi: integrated description of large display suitable for implementation of ATC_ExoVis Display
- Aug 07 - Simone Rozzi: integrated comments from project manager and EUROCONTROL reviewer, minor editing

Acknowledgements
- Peter Martin, EUROCONTROL, Bretigny
- Steve Ellis, NASA Ames, Moffett Field CA
- Antonio Monteleone, NEXT Ingegneria dei Sistemi S.p.A., Rome, Italy
- Steven Bancroft, EUROCONTROL, Bretigny
- Adrian Gizdavu, EUROCONTROL, Bretigny

TABLE OF CONTENTS

LIST OF FIGURES ... 4
LIST OF TABLES ... 7
EXECUTIVE SUMMARY ... 8
INTRODUCTION ... 12
PART 1 THE HMI INNOVATION METHODOLOGY ... 16
PART 2 INNOVATIVE CONCEPTS AND THEIR DESIGN RATIONALE
  3D in your hand
  Table Top Display (AR_Exovis_Overview plus detail display)
  Skyscraper Display
  Stereoscopic Display
  3D analogical symbols in 2D
  Magic Plane Display
  ATC ExoVis Display
PART 3 CONCLUSIONS
  Degree of innovation of the innovative concepts
  Future work: How to use the innovative concepts
REFERENCES ... 61

LIST OF FIGURES

Figure 1. 3D in your hand consists of a localized sub-volume of the airspace, available on the controller's hand on demand.
Figures 2, 3, 4. The interaction sequence necessary to activate the 3D in your hand: the controller brings his hand close to an interesting portion of the airspace on the radar display, waves it, and as a result a 3D volume appears on his or her hand.
Figures 5, 6. The user selects an area on the screen by bringing his/her hand closer to the screen.
Figures 7, 8. After operating the selection, the 3D volume appears on the controller's hand. If the image is too cluttered, s/he can bring it closer and rotate it if necessary to better inspect the traffic situation.
Figure 9. Finally, the controller can show the 3D picture to other operators in order to quicken the discussion. The availability of the 3D image should require less verbal communication.
Figure 10. AR visualisation example: a virtual character rendered on top of a real object. The user can see the combined real-virtual images through special see-through glasses.
Figure 11. ARToolkit approach: the camera sees the marker and the software computes the marker reference frame (the pose) with respect to the camera reference frame. The virtual objects are displayed in the user's field of view according to the marker pose.
Figure 12. ARToolkit process: the ARToolkit software captures images from the video stream, detects the marker and computes its reference frame. The virtual objects are displayed in the user's field of view according to the marker pose.
Figure 13. 3D-in-your-hand explained: the camera sees the marked object held by the user and the software computes the marker reference frame (the pose) with respect to the camera reference frame. The virtual objects are displayed in the user's field of view according to the marker pose.
Figure 14. 3D-in-your-hand explained: the user holds the marked object on top of which a part of the 3D airspace is displayed.
Figure 15. 3D-in-your-hand interaction: the user selects a region with the mouse in the 2D screen (right) and the corresponding 3D view is shown in the AR view (left).
Figure 16. Commercial see-through HMDs used for AR visualisation: a binocular example (left) and a more compact monocular one (right).
Figure 17. The Table Top Display supports precise distance estimation on the 2D planar view. The 3D data on top of it should promote a qualitative understanding of the situation by supporting 3D pattern identification.
Figure 18. An early prototype showing how the global traffic view would appear to controllers.

Figure 19. An example of an augmentation applied to the final approach fix. An aircraft A is approaching the final approach fix. The controller can see this in the main global 3D view, where he can get a qualitative understanding of the situation. The magnified sub-volume view on the upper left provides an enlarged view for a deeper assessment of the situation. The projection on the 2D walls makes it possible to carry out multiple precise distance estimations between the aircraft and the final approach fix. By moving the selection area it is possible to select and magnify other portions of the airspace, e.g. holding stack, terrain.
Figure 20. Augmentation can provide the controller with a perspective view (upper left). When an aircraft has to face a cumulonimbus, the controller can open the pilot's perspective view to better support the pilot in formulating an alternative course of action.
Figures 21, 22. A static mock-up showing how the augmentation would appear to a viewer. On the left the user is accessing a pilot's viewpoint. On the right the user is holding a side view of an aircraft approaching the final approach fix. Selection of aircraft can be achieved by means of a paddle (not represented in the picture).
Figure 23. Advanced symbology applied to the Table Top Display. With this configuration the information of height (length of the drop line), attitude (arrow direction), and direction (triangle) is split across different graphical elements.
Figure 24. Which aircraft is descending? Which aircraft is headed east? The aircraft on the left is in level flight headed southwest. The aircraft on the right is descending headed east. 3D iconic representation can code physically different aircraft similarly, hampering a precise estimation of individual analogical attributes (Smallman, St. John, Oonk, & Cowen, 2001a).
Figure 25. AR Table Top: the 3D airspace is represented in the AR view on top of a marked planar surface.
Figure 26. The image shows the aircraft elevation data appearing only in the selected area. The remaining part of the display is instead 2D.
Figures 27, 28. The Skyscraper Display in operation. The controller can work with a standard 2D radar. Over this display he/she can select an area where aircraft appear in 3D as seen from a top viewpoint, thus resembling a group of skyscrapers seen from above. This view contains aircraft elevation data, and the transition guarantees maximum consistency since the 3D data is added on top of the 2D.
Figure 29. Aircraft A is now displayed in the selection area with its drop line. The red arrow indicates the translation movement.
Figure 30. After the selection area is moved, aircraft A's drop line changes orientation due to the translation of the camera viewpoint, placed at the centre of the selection area.

Figure 31. Aircraft B is now included in the selection area and thus is displayed in 3D. Meanwhile aircraft A flattens out, as it is outside the selection area and is part of the 2D radar.
Figure 32. Red-cyan glasses for anaglyph stereo visualisation.
Figure 33. Red-cyan glasses filter the left and right eye views, so that each eye can see only one of the two views.
Figure 34. Shutter glasses for active stereo visualisation. The figure also shows an infrared device which is used to synchronize the glasses with the computer screen.
Figures 35, 36. Mono and stereoscopic representations of a simplified airspace.
Figure 37. 3D information of pitch and roll integrated in a 2D display.
Figure 38. The display looks like a standard 2D display before any selection is operated.
Figure 39. The controller now selects aircraft 5. A selection area appears around the selected aircraft.
Figure 40. The selection area flips over, rotating around the CD segment. This results in a 3D image where the plane made by the corners A, B, E, F corresponds to the flight level of aircraft 5, thus revealing the relative altitudes of aircraft 1, 2, 3, and 4. The slow transition of the selected area from the original 2D to 3D aims at preserving spatial orientation for the viewer.
Figure 41. ExoVis Display for ATC. The 2D radar (centre of the display), the local 3D picture (upper left), and the electronic strips (on the left) appear within a single 3D information space. Correlation of data across views by means of graphical elements reduces the effort for the controller to scan and search for information. In the picture, selection of an electronic strip leads to the corresponding aircraft on the radar view. Other views can be added, e.g. a separation display.
Figure 42. An example of a separation display inspired by the original work of Falzon (1982). This is but one example of a view that could be integrated into the ATC ExoVis Display. Vertical and horizontal separations are reported respectively on the vertical and horizontal axes. The resulting graph represents the separation associated with each pair of aircraft in the airspace. A separation moving quickly towards the origin signals an imminent loss of safety separation.
Figures 43, 44. An example of data correlation across separation and main radar views. As the separation between a pair of aircraft enters the red zone in the separation display (Fig. 45), they are linked to the corresponding aircraft on the main 2D radar view (Fig. 46).
Figure 47. The PowerWall from Fakespace Systems.
Figure 48. Example of a curved display.

LIST OF TABLES

Table 1. The Combination Display Framework. The names in bold correspond to the innovative concepts described in this document.
Table 2. Creation methodology adopted for the 3D-in-2D project.
Table 3. The Combination Display Framework. The names in bold correspond to the innovative concepts.

EXECUTIVE SUMMARY

This document reports on the seven innovative concepts developed for the 3D-in-2D Display project, funded by EUROCONTROL under the INO CARE action. The scope of the project is to develop innovative visualizations combining 3D with 2D views for air traffic controllers, to help controllers build an improved mental picture of the traffic situation. Other users not involved in real-time activity, e.g. airspace planners, might benefit from these innovations too.

In the early years of radar, displays used very basic oscilloscopes to show the strengths of the radar signal returns in a given direction to indicate the presence of an aircraft. It was difficult to assess exactly where aircraft were and in what direction they were heading, and therefore to develop a mental picture or understanding of what all the aircraft in a controller's area of responsibility were doing. Subsequent developments of the Plan Position Indicator (PPI) provided a means of converting those radar responses into a plan view of the various aircraft positions in relation to the radar antenna. This was presented to the radar controller as a rotating sweep of light, with the "blips" of light representing aircraft positions. As they moved, they left behind a trail of fading phosphor blips on the radar display. These fading trails of blips provided controllers with an indication of speed and direction. This was a significant improvement in how radar information was provided. However, altitude information was not available, and the controller had to ask pilots for their altitudes or use alternative technologies such as height-finding radars to ascertain that information. Later, developments of the Secondary Surveillance Radar (SSR) from the wartime IFF (Identification Friend or Foe) system integrated information about altitude and aircraft identification code. Digital System Data Displays (SDDs) integrate data provided by different technologies, e.g. GPS and data link, in order to provide different information to the controller, such as call sign, actual flight level, cleared flight level, speed and heading of the aircraft.

These developments have provided the air traffic controller with significant tools to help him or her understand the air traffic picture. However, using the information to create an understanding of the aircraft's relative positions in 3D space is still a challenge and often a key skill of the controller, developed over years of experience, vital for achieving safe separations and for expediting traffic flow. The purpose of this work is to investigate new forms of ATC radar representation formats that could potentially increase the informativeness of the traditional 2D PPI display. We intend to do this by integrating or combining with it 3D information in different perspectives and by view integration. In this report we present our efforts in creating a suite of alternative 3D-in-2D representation design

concepts that can assist the controller in developing a global awareness of the traffic situation, in managing different traffic in rapid succession, in orientating the controller rapidly to the situation, and in providing a sense of continuity between the 3D and 2D sets of information and perspectives.

These concepts represent an exploration of the design space. Each of the concepts described in this report is not a stand-alone complete design solution, but should be considered as a part or a feature of a larger integrated design solution, and they are therefore intended to be used in combination at a future time. How they are combined will need to be carefully studied, as the effects of feature interaction can cancel out positive effects through poor implementation of the individual features or concepts. We will integrate or combine these features in Year 2 of the project, and will then study the effects of their interaction. In this report we focus on describing the 3D-in-2D concepts and features.

It is noticeable that the innovative concepts do not propose new symbology, nor a new type of 3D information content to present to the air traffic controller. Rather, they have to be regarded as novel Information Visualization mechanisms to combine (visually) 3D traffic representations with 2D ones. Findings from previous research on 3D displays provide the rationale for devising combined 3D-within-2D displays. As documented in the Deliverable D1.1 Innovation and Consolidation Report, 3D can provide an enhanced qualitative understanding of traffic; however, strict 3D cannot be accessed easily, mainly due to the difficulty of operating a 3D camera. Furthermore, 3D also has some perceptual drawbacks, such as hampered precise distance estimation. For these reasons, recent innovative visualizations look at ways to combine 2D with 3D. Earlier work has looked at side-by-side configurations, but more effective combinations of the two spatial presentation formats are needed.
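The distance-estimation drawback can be made concrete with a small numerical sketch. The Python fragment below is illustrative only, not part of any project prototype: under a simple pinhole perspective projection, two aircraft pairs with very different true separations can project to exactly the same on-screen distance, which is one reason precise distance judgements suffer in strict 3D views.

```python
# Illustrative pinhole-projection sketch (not from the report): perspective
# foreshortening makes on-screen distance an unreliable cue to true separation.

def project(point, focal=1.0):
    """Project a 3D point (x, y, z) onto the image plane z = focal."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def screen_distance(p, q):
    """Euclidean distance between the projections of two 3D points."""
    (px, py), (qx, qy) = project(p), project(q)
    return ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5

# Pair 1: two aircraft 2 units apart, 10 units from the virtual camera.
d_near = screen_distance((0.0, 0.0, 10.0), (2.0, 0.0, 10.0))
# Pair 2: two aircraft 10 units apart, five times farther from the camera.
d_far = screen_distance((0.0, 0.0, 50.0), (10.0, 0.0, 50.0))

print(d_near, d_far)  # equal on screen despite a 5x difference in separation
```

A 2D plan view has no such ambiguity, which is one motivation for keeping the 2D radar as the reference frame and adding 3D locally.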
The concepts presented in this document aim at filling this gap, to enable controllers to perform their job more effectively. The seven innovative concepts are:

(a) 3D in your hand. This concept aims at simplifying the interaction between the controller and 3D. It consists of a localized 3D volume which the controller can hold in his hand. The concept can be implemented using technology such as ARToolkit;

(b) Table Top Display. It consists of a radar display permanently rotated to the horizontal, upon which 3D information is superimposed by means of AR technology. The system superimposes additional three-dimensional data over the 2D radar to provide the controller with a perspective traffic view, where aircraft and other airspace features - navigation points, weather - can be represented at their real 3D locations;

(c) Stereoscopic Display. A 2D monitor that can be turned into an immersive 3D view seen from above. In this way the controller can perceive a top view of traffic while feeling immersed in it;

(d) Skyscraper Display. This concept presents a top 2D view of the sector. Over this view, the user can select any localised area of interest and see it in 3D on demand. The selected local area shows aircraft complete with their drop lines as seen from above;

(e) 3D analogical symbols in 2D. An alternative approach to integrating 3D in a 2D radar consists of representing the former as analogical symbols in 2D displays;

(f) Magic Plane Display. The Magic Plane Display enables a controller to operate with a standard 2D display and access a local 3D picture displaying traffic information relative to an aircraft of interest. The concept exploits a transition mechanism intended to minimize disorientation when moving from the 2D view to the 3D one and vice versa;

(g) ATC ExoVis Display. The ATC ExoVis Display consists of a three-dimensional space comprising all the information needed for the ATC task. This solution aims at reducing the effort for the controller to search for information across different information resources.

By looking at the innovative concepts in relation to the state-of-the-art review, as represented by the Combination Display Framework (Rozzi et al., 2007), it is possible to appreciate their level of innovation. In particular:

a. Three of the innovative concepts - 3D in your hand, the Table Top Display, and the ATC ExoVis Display - appear in the Exo-Vis column, where no previous ATC work has been found. These displays can be regarded as innovative contributions in ATC;

b. The other concepts developed from previous visualizations:

i. The 3D analogical symbols in 2D differ from the original work of Smallman et al. (2001) in that they apply to ATC;

ii.
Similarly to the Lens Display (2006), the Skyscraper Display implements a filtering technique to show 3D only in correspondence with a selected portion of the airspace. However, the top 3D view removes the distortion problems found in the original work. The same display also represents an improvement over the PiP display, since it does not suffer from continuity problems, i.e. matching aircraft in 2D with those in 3D and vice versa;

iii. The Magic Plane Display represents an improvement compared with the Picture within a Picture display (Rozzi et al., 2006) in that it presents a slow transition from 2D to 3D to avoid the user getting lost.
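The filtering technique shared by the Lens and Skyscraper displays can be sketched in a few lines. This is a minimal illustration under stated assumptions - the names, data structures, and traffic values are hypothetical, not taken from the project software: aircraft inside the selected area are tagged for 3D rendering with drop lines, while all others remain part of the flat 2D radar.

```python
# Minimal sketch of the Skyscraper-style filter (all names/data hypothetical):
# only aircraft inside the selected circular area are rendered in 3D.

from dataclasses import dataclass
from math import hypot

@dataclass
class Aircraft:
    callsign: str
    x: float           # sector coordinates, nautical miles
    y: float
    altitude_ft: float

def partition_for_selection(traffic, centre, radius_nm):
    """Split traffic into (rendered in 3D with drop lines, kept flat in 2D)."""
    cx, cy = centre
    in_3d, in_2d = [], []
    for ac in traffic:
        (in_3d if hypot(ac.x - cx, ac.y - cy) <= radius_nm else in_2d).append(ac)
    return in_3d, in_2d

traffic = [
    Aircraft("BAW123", 10.0, 12.0, 24000.0),
    Aircraft("AFR456", 11.5, 11.0, 26000.0),
    Aircraft("DLH789", 40.0, 35.0, 33000.0),
]
selected_3d, flat_2d = partition_for_selection(traffic, centre=(10.0, 12.0), radius_nm=5.0)
print([ac.callsign for ac in selected_3d])  # ['BAW123', 'AFR456']
```

Moving the selection area simply re-runs the partition, which is why an aircraft "flattens out" the moment it leaves the selected region.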

Table 1. The Combination Display Framework (Rozzi et al., 2007). The names in bold correspond to the innovative concepts described in this document. The framework crosses display formats (Strict 3D; 2D/3D combination displays: Side by Side, Multiview/Exo-Vis, In Place 2D/3D) with display techniques (information availability; focus+context techniques: rapid zooming, distortion, overview plus detail, filtering; multiple coordinated views; uncorrelated views). Entries placed in the framework include: Smallman et al. ('01) 3D in 2D; 3D symbols in 2D; multi-window EC-Lin AR; PiP Display ('06); AD4 ('06); Ellis ('87); Azuma ('96); Brown ('94); Eddy et al. ('99); EC-Lin VR ('05); Magic Plane Display; Distortion Display, AD4 ('06); Alexander & Wickens ('03); Table Top Display; 3D in your hand; Azuma ('00); Ilog Display; Lens Display, AD4 ('06); Skyscraper Display; Stereoscopic Display; ATC ExoVis Display; Furstenau ('06); D3, AD4 ('06); St. John ('01).

INTRODUCTION

This document reports on the innovative concepts developed for the 3D-in-2D Display project, funded by EUROCONTROL under the INO CARE action. The aim of the Programme is to promote innovation in ATC research. In this project, the aim is to investigate ways to integrate 3D into 2D presentation formats in an effective manner, to better support the air traffic controller's job. Other ATC application areas emerging during the project, e.g. airspace planning, may be considered too.

As illustrated by David (2007), in the early years of radar, displays used very basic oscilloscopes to show the strengths of the radar signal returns in a given direction to indicate the presence of an aircraft. It was difficult to assess exactly where aircraft were and in what direction they were heading, and therefore to develop a mental picture or understanding of what all the aircraft in a controller's area of responsibility were doing. Subsequent developments of the Plan Position Indicator (PPI) provided a means of converting those radar responses into a plan view of the various aircraft positions in relation to the radar antenna. This was presented to the radar controller as a rotating sweep of light, with the "blips" of light representing aircraft positions. As they moved, they left behind a trail of fading phosphor blips on the radar display. These fading trails of blips provided controllers with an indication of speed and direction. This was a significant improvement in how radar information was provided. However, altitude information was not available, and the controller had to ask pilots for their altitudes or use alternative technologies such as height-finding radars to ascertain that information. Later, developments of the Secondary Surveillance Radar (SSR) from the wartime IFF (Identification Friend or Foe) system integrated information about altitude and aircraft identification code. Digital System Data Displays (SDDs) integrate data provided by different technologies, e.g. GPS and data link, in order to provide different information to the controller, such as call sign, actual flight level, cleared flight level, speed and heading of the aircraft.

These developments have provided the air traffic controller with significant tools to help him or her understand the air traffic picture. However, using the information to create an understanding of the aircraft's relative positions in 3D space is still a challenge and often a key skill of the controller, developed over years of experience, vital for achieving safe separations and for expediting traffic flow. The purpose of this work is to investigate new forms of ATC radar representation formats that could potentially increase the informativeness of the traditional 2D PPI display. We intend to do this by integrating or combining with it 3D information in different perspectives and by view integration.

In this report we present our efforts in creating a suite of alternative 3D-in-2D representation design concepts that can assist the controller in developing a global awareness of the traffic situation, in managing different traffic in rapid succession, in orientating the controller rapidly to the situation, and in providing a sense of continuity between the 3D and 2D sets of information and perspectives.

1.1 On the nature of the concepts

The innovative concepts described in the present document represent alternative ways of providing ATC information, and more precisely of combining 2D with 3D information. Therefore they cannot be regarded as complete stand-alone systems. Rather, they represent features that are to be combined into a global system in Year 2 of the project, after their potential and limitations have been explored at the end of Year 1 (evaluation criteria will be specified in the D1.4 Evaluation Report). This approach makes it possible to maximize the potential for innovation relevant to supporting air traffic control tasks. Alternatively, looking immediately at designing a complete operational system would have drastically limited the exploration of alternative information designs, thus reducing the potential for innovation.

1.2 Difference among the concepts: Technology vs Visualization

The concepts differ from one another in their appearance to the user - i.e. what the user can see - but not necessarily in the technology. So although the Stereoscopic Display and the Skyscraper Display can be implemented with the same stereoscopic technology, thus looking very similar from an implementation perspective, they implement very different information visualization mechanisms. In the former, the data is available as 3D on demand in the whole sector, with a viewpoint fixed on the centre of the screen; in the latter, only a selected portion of the sector is shown in 3D, with the viewpoint fixed on the centre of the selected area. Since from a viewer's perspective they correspond to two different concepts with two distinct impacts on human visual performance, they have to be kept separated. Similar considerations apply to the other concepts.

1.3 3D-in-2D investigated for Approach Control

The innovative concepts presented in this document are targeted mainly at approach control. The three-dimensional geometry of traffic moving in the terminal area requires operators to preserve a real-time 3D picture composed of several aircraft descending and climbing while turning at the same time, i.e. moving on both the vertical and horizontal planes. Arguably the

cognitive effort to preserve and update this 3D picture can be reduced by using a 3D visualization of traffic, thus making the terminal area a suitable sector in which to investigate the use of 3D. This is especially true during vectoring, when operators have to oversee several aircraft in rapid succession under severe time pressure. Having said that, the consortium does not exclude targeting other ATC phases which could benefit from a 3D-in-2D Display, such as airspace planning.

1.4 Combining 2D with 3D vs Design for a better 3D

Designing for a better 3D display is a different approach from designing to combine 2D and 3D views. The former has been found in many of the concepts reviewed in the D1.1 Innovation and Consolidation Report. Those works focused mainly on what information to include in the 3D scene, e.g. aircraft, future trajectories, and how to represent it, e.g. realistic aircraft icons (Lange, Cooper, Duong, & Bourgois, 2005) vs. 3D aircraft symbols (Smallman, Oonk, St. John, & Cowen, 2001), or linear projected trajectories (Ellis, McGreevy, & Hitchcock, 1987) vs. volumetric projected trajectories (Azuma, Neely, & Daily, 1999). They also looked at how to integrate other safety-related information, e.g. air vortices (Aragon & Long, 2005; Modi, 2002), terrain, conflicts, and weather (Lange, Cooper, Duong, & Bourgois, 2005). Overall these works - of which an extensive review is presented in the D1.1 Innovation and Consolidation Report - focused on aspects such as what 3D information can be provided to controllers, or what is the best symbology for the controller. On the other hand, the present work focuses solely on devising spatial mechanisms to combine 3D with 2D so as to make 3D accessible to a controller during real-life operations. The rationale for this approach is explained below.

1.5 Rationale for combined 3D within 2D Display
Research on 3D for ATC has shown that 3D can be useful for the task of the air traffic controller, who could perceive a 3D picture inclusive of actual elevation data, thus reducing the mental effort of building and maintaining a 3D traffic picture. However, the same work left open the question of how to make 3D usable during real-time operations. Drawbacks such as time-consuming camera navigation, low distance estimation performance, and hampered global traffic management (e.g., Rozzi et al., 2006; Wickens, Vincow, & Yeh, 2005) do not allow replacing the 2D radar with a 3D radar. A controller cannot spend time manipulating a 3D camera to check for one conflict when he or she has to oversee and take decisions about 30 other aircraft spread across the airspace. More effective presentation formats are needed.
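The kind of pairwise computation that a separation view would plot (cf. Figure 42) can be sketched in a few lines. The thresholds and traffic below are illustrative assumptions, not project specifications: each aircraft pair is reduced to a horizontal and a vertical separation, and a pair that falls below both minima at once constitutes a loss of separation.

```python
# Illustrative pairwise separation scan (thresholds and traffic are assumed
# values for the sketch, not project specifications).

from itertools import combinations
from math import hypot

H_MIN_NM = 5.0      # illustrative horizontal separation minimum
V_MIN_FT = 1000.0   # illustrative vertical separation minimum

def separation(a, b):
    """a, b are (x_nm, y_nm, altitude_ft); return (horizontal_nm, vertical_ft)."""
    return hypot(a[0] - b[0], a[1] - b[1]), abs(a[2] - b[2])

def losses(traffic):
    """Indices of aircraft pairs below both minima at the same time."""
    out = []
    for (i, a), (j, b) in combinations(enumerate(traffic), 2):
        h, v = separation(a, b)
        if h < H_MIN_NM and v < V_MIN_FT:
            out.append((i, j))
    return out

traffic = [
    (0.0, 0.0, 24000.0),    # aircraft 0
    (3.0, 0.0, 24500.0),    # aircraft 1: 3 NM and 500 ft from aircraft 0
    (20.0, 5.0, 24000.0),   # aircraft 2: well separated horizontally
]
print(losses(traffic))  # [(0, 1)]
```

A display that plots each pair at its (horizontal, vertical) separation lets the controller read this scan at a glance, instead of working it out aircraft by aircraft.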

There appears to be a trend towards the combined visualization of 2D and 3D displays in many domains, in order to combine the relative advantages of the two presentation formats (see for example Konig, 1999; Tory, Kirkpatrick, Atkins, & Moller, 2006; Tresens & Kuester, 2004). This project explores this research path, which offers scope for further development. The Combination Display Framework, resulting from the D1.1 Innovation and Consolidation Report, provides a spectrum of the possible combinations of 2D and 3D that can be explored in the current project. For instance, more effective combinations of 2D with 3D can be achieved by looking at alternative ways to combine 2D and 3D frames of reference, e.g. Exo-Vis, other than just juxtaposition. Simple juxtaposition of the two displays - also known as the Side by Side configuration - may not be enough: it requires an extra effort to carry forward information from one display to the other, which may not be compatible with the time frame available for the real-time decision making typical of the ATC task. Other innovative configurations are needed, and the concepts presented in this document aim at filling this gap.

1.6 Organization of the document

- Section 1 provides a description of the innovation methodology behind the generation and evaluation of innovative visualizations for ATC;
- Section 2 reports on the innovative visualization concepts, describing their use, the design rationale, and the technology to implement them;
- Section 3 concludes the document, reporting indications for the next phases of work.

PART 1 THE HMI INNOVATION METHODOLOGY

This section describes the methodology adopted during the present project in order to create novel information visualization systems integrating 3D with 2D. It refers to the whole innovation process expected for Year 1, since the generation of the innovative concepts reported in this document represents a single step of a broader process aimed at producing new design features, e.g. adding 3D over a horizontal 2D radar; exploring their potential, e.g. understanding what works and what does not; and then combining them into a few operational concepts in Year 2, e.g. combining 3D over a horizontal radar with the AR-in-your-hand idea.

The Theory of Inventive Problem Solving (TRIZ) distinguishes different levels of innovation. Level 1, Apparent Solution, is based on personal knowledge and occurs about 32% of the time; Level 2, Minor Improvement, is based on knowledge within a company and occurs about 45% of the time; Level 3, Major Improvement, comes from knowledge within the industry and occurs about 18% of the time; Level 4, New Concept, requires knowledge from outside the industry and occurs quite rarely, at 4% of the time; and Level 5, Discovery, requires all that is knowable and occurs extremely rarely, at 1% of the time. Thus the results of an innovation effort can span the spectrum from improvements, to innovations, to true discoveries. Realising this allows us to draw on different techniques and sources of knowledge to apply as the situation demands (Wong, 2006).

In order to produce innovation at Level 5 we implemented creativity in a systematic manner, as this can improve the chances of developing useful and inventive solutions (Wong, 2006). In particular, the focus was on exploring the highest number of alternatives, avoiding focusing on predefined solutions. This was achieved by means of the following steps:

1. Creativity Workshop
2. Innovative Concept Design
3. Innovative Concept Evaluation
4. Concept Integration

Creativity workshop. This phase served to produce ground-breaking ideas about new ways of doing things, radically different from how things are done today in ATC. The resulting ideas were not directly implementable, since they ranged from the organization of work - e.g. controllers can

17 work at home - to interaction design e.g. 3D pipes where the aircraft can fly in - but served the purpose of generating alternative viewpoints on the design problem of integrating 2D with 3D. In particular this process minimized the risk of focusing consortium efforts on a very narrow set of display solutions from the very beginning of the project. The risk here would have been to refine existing combination displays, e.g. Distortion Display, rather than creating some new ones, e.g. Table Top Display. For a thorough description of the creativity workshop and the adopted creativity techniques based on the work on lateral thinking of De Bono (1992) the reader can refer to the Creativity Workshop Report reported on the reference list (Rozzi & Wong, 2007). Finally it is worth noting that others have looked at implementing creativity workshop in the design of user interface. In the ATC domain Maiden et al. (2004) applied creativity workshop to the requirement generation process. Our work differs from that approach since in this case the user requirements were available and the creativity strategy helped the design team moving from the requirements to the visual design concepts; Innovative Concept design. This phase focused on devising new visualization concepts of display which integrated 2D with 3D. Ideas at this level consisted in a particular type of display configuration, e.g. table top display, and this is what has been reported in this document. The innovative ideas developed during the creativity workshop served as a resource for the design of the innovative concepts. The combination display framework resulting from the state of the art review (D1.1) guided this phase avoiding duplication of existing work, and promoting the exploration of areas on display not yet explored such as Exo Vis (see Combination Display Framework); Innovative Concepts Evaluation. This phase will look at features of the innovative concepts and at their capabilities, i.e. 
what work and what do not, and how to improve the design. The methodology and result of this activity will be reported on a separate report (D1.4 Evaluation Report); Concepts integration. Having understood potential and limitation of the innovative concepts features, it will be possible to combine these into a few operational concepts. This activity will start during Year 2. D1.2_Innovative Concepts and Their Design Rationale_v0.9.doc Page 17

Table 2. Creation methodology adopted for the 3D-in-2D project

Phase 1 - Innovative ideas generation
- Objective: Focus on generating the maximum amount of ground-breaking ideas on ATC displays. Ideas in this phase were very fragmented and at different levels, e.g. from low-level interaction to the organizational. This avoids jumping prematurely to solutions while integrating at the same time the maximum number of perspectives on the problem.
- Output: Innovative ideas, provocations.
- How: 3 Creativity Workshops.

Phase 2 - Innovative Concepts definition
- Objective: Focus on the definition of HMI display concepts combining 2D with 3D. Efforts in this phase have been consistent with the Combination Display Framework, to guarantee originality of approach and avoid duplication of existing work.
- Output: Innovative 3D-within-2D Display Concepts.
- How: MDX internal design session; design meeting with Space App; design meeting with all partners.

Phase 3 - Innovative Concepts Evaluation
- Objective: Identify the potential capability of each innovative concept and its limitations.
- Output: List of potential and limitations of the innovative 3D-in-2D concepts.
- How: Exploration meeting with controllers.

Phase 4 - Innovative Concepts Combination (to be started in Year 2)
- Objective: Integrate the innovative concepts into a few operational systems (2 or 3). This will be possible after having understood the main potential/limitations of each individual concept.

PART 3 INNOVATIVE CONCEPTS AND THEIR DESIGN RATIONALE

This section describes the 3D-within-2D innovative concepts we have developed. These are: (a) 3D in your hand; (b) Table Top Display; (c) Skyscraper Display; (d) Stereoscopic Display; (e) 3D analogical Symbols in 2D; (f) Magic Plane Display; (g) ATC Exo Vis Display. Each concept description covers the following points: (a) Description of the concept: an overview of the display; (b) Intended use: a description of how the user is intended to work with the new display and its features, i.e. what s/he can see and what s/he can interact with; (c) Expected benefits for the user: a list of the advantages the user would gain from operating with the proposed concept; (d) Potential drawbacks: the anticipated shortcomings, considered from a user standpoint; (e) Implementation of the concept: the technology necessary to mock up the proposed concept. Possible technologies include, for instance, AR to allow the overlay of computer graphics on real world objects.

3.1 3D in your hand

Description: The 3D in your hand concept aims at simplifying the interaction between the controller and 3D. It consists of a localized 3D volume which the controller can hold in his hand, as shown in Figure 1. The concept can be implemented using ARToolkit technology.

Figure 1. The 3D in your hand concept.

How to use it: After locating an interesting portion of the airfield over the 2D radar, the operator can wave her/his hand over this area to make a selection. The interesting portion of the airfield will then appear in 3D on the operator's hand (see Figures 2, 3, 4).

Figures 2, 3, 4. The interaction sequence necessary to visualize the 3D local picture on the operator's hand.

With this localized 3D view the controller can:

(a) Carry out a deeper investigation, simply by bringing the volume closer or rotating the palm to change perspective. For instance, the controller could inspect from a closer view whether two aircraft are separated or whether their 3D trajectories are actually converging. In case of cluttering, the operator can simply rotate the 3D volume by rotating his or her palm. The very natural movement of the hand in relation to one's own head is much quicker and more intuitive than complex 3D camera navigation. Associating the north of the sub-volume with a standard part of the hand, e.g. the middle finger, would make it possible to preserve orientation awareness when rotating the palm;

(b) Leave it open beside the display. The view could refer to a hot spot, so the controller can leave it open beside the main display; similarly, he could do the same with other localized views. The user could then simply snap his fingers to close the 3D sub-volume views;

(c) Use the localized view as a base for interaction with other controllers, or hand the view over to other controllers. This could be helpful, for instance, to improve communication concerning a given aircraft situation. If an aircraft needs to be shown to the supervisor, the controller can select it, hold it in his/her hand and then pass it to the supervisor to show the behaviour of the aircraft in question. The availability of a shared 3D image should reduce the verbal communication needed to specify the details of the situation.

The following pictures show a complete sequence of the concept in operation.

Figures 5, 6: The user selects an area on the screen by bringing his/her hand closer to it.

Figures 7, 8: After making the selection, 3D appears on the controller's hand. If the image is too cluttered, s/he can bring it closer and rotate it to inspect the traffic situation.

Figure 9: Finally, the controller can show the 3D picture to other operators. The availability of the 3D image should require less verbal communication.

Relationship with the Combination Display Framework: the concept implements an AR Exo-Vis configuration, in that it superimposes 3D sub-volumes over a 2D display.

Benefits for controllers:

(a) Quick access to a local 3D view. Selection of the airspace of interest is quicker, since it only requires the controller to wave a hand in front of the screen;

(b) Natural and simple interaction with the 3D view. By providing a 3D volume on the user's own hand, the user can apply the very natural interactions humans are used to with 3D objects in the real world. Waving the hand and rotating the palm are expected to be faster and more intuitive interactions than 3D camera manipulation on a strict 3D display;

(c) Enhanced team collaboration. A shared local view is available that can be discussed and even handed over to other colleagues, e.g. the supervisor or planner. This should speed up collaboration and coordination, since the time spent describing the situation should be considerably lower than with verbal communication only.

Envisaged drawbacks: The way the concept is implemented deserves consideration, since the frames of reference of the 2D global view and the 3D local view differ. This might require some effort to understand which aircraft is which in the local view.

Concept Implementation

The 3D-in-your-hand concept is implemented by means of Augmented Reality (AR) techniques. Augmented Reality consists of overlaying computer-generated virtual imagery on the real world view of the user. In typical AR applications, a user wearing special head-mounted goggles with see-through capabilities can look at the surrounding real world and see virtual objects (computer-generated graphical elements) on top of the real view; the virtual objects appear and move in the user's field of view as if they were attached to real world objects.

Figure 10: AR visualisation example: a virtual character rendered on top of a real object. The user can see the combined real-virtual images through special see-through glasses.

For example, in the image shown in Figure 10, a three-dimensional virtual character appears standing on a real card. It can be seen by the user in the head-set display they are wearing. When the user moves the card, the virtual character moves with it and appears attached to the real object.

In general, the crucial aspect of AR visualisation is determining the current viewpoint and viewing direction of the user, so that the computer can display the graphical objects at the correct position in the user's field of view, giving the user the perceptual illusion that the virtual objects are actually located within the real world. It is therefore necessary to know the current point of view and gaze direction of the user, i.e. the position and orientation of the user's head in a real world reference frame or, equivalently, the position and orientation of a real world object with respect to the viewer (user). This information must be collected in some way and made available (in near real-time) to a computer system, which then computes the expected position of the virtual objects and renders them accordingly in the user's field of view.

Technologies

Several approaches are possible for achieving the position and orientation tracking necessary for AR visualisation. The one adopted for implementing the 3D-in-your-hand concept, as well as the other AR-based concepts of the project, is based on the use of optical markers and video processing, supported by the ARToolkit software library. ARToolkit uses special algorithms to detect a fiducial marker in a sequential stream of images captured by a video camera (which can be as simple as a web-cam or other small, portable camera) and to compute the three-dimensional position and orientation of that marker in the camera reference frame. The fiducial marker can be chosen from a variety of shapes supported by the software and can, for instance, be printed on paper and attached to a real world object. In this way the real object is marked and the software can continuously detect its 3D pose (position and orientation) with respect to the camera.
By knowing the relative 3D pose of the fiducial marker, and thus of the marked object, the software is able to display any virtual objects (as defined by the specific application) at a location which appears attached to the real object. Since the camera is installed on the user's head and oriented along the user's gaze (usually attached to the head-mounted display), the images captured by the camera correspond to the current view of the user.

Figure 11: ARToolkit approach: the camera sees the marker and the software computes the marker reference frame (i.e. the pose) with respect to the camera reference frame. The virtual objects are displayed in the user's field of view according to the marker pose.

The concept of marker detection and tracking is illustrated in Figure 11. The ARToolkit process, illustrated in Figure 12 below, consists of the following steps:

1. Image capture (video camera)
2. Image processing for marker detection
3. Marker reference frame computation
4. Augmentation: rendering virtual objects based on the marker position

Figure 12: ARToolkit process: the ARToolkit software captures images from the video stream, detects the marker and computes its reference frame. The virtual objects are displayed in the user's field of view according to the marker pose.

In Figure 12, the process is illustrated for the case of the 3D-in-your-hand HMI concept; the fiducial marker is attached to a real object held by the user in his/her hand; the images in the figure represent the user's view (as captured by the camera); the ARToolkit software computes the pose of the marker with respect to the viewer (left image in the figure) and then renders a virtual scene (in this case a portion of airspace with aircraft and fixes) on top of it, with the same position and orientation as the marker (right image in the figure). This process is repeated several times per second, resulting in a continuous update of the virtual scene, which appears to move smoothly and consistently with the marked object.

Figure 13: 3D-in-your-hand explained: the camera sees the marked object held by the user and the software computes the marker reference frame (i.e. the pose) with respect to the camera reference frame. The virtual objects are displayed in the user's field of view according to the marker pose.

Another illustration of the concept in operation is shown in the figure below.

Figure 14: 3D-in-your-hand explained: the user holds the marked object, on top of which a part of the 3D airspace is displayed.

Software considerations

ARToolkit is a software library written in the C language, which can easily be integrated in any software application written in C or C++. The ARToolkit library supports the following capabilities: (a) video stream capture from virtually any digital video camera connected to the host computer via USB or Firewire ports; (b) marker detection in the video stream; (c) recognition of several different markers (more than one marker can be used simultaneously); (d) real-time computation of the marker position and orientation; (e) 3D visualisation based on OpenGL, which is a standard API for graphics rendering.

The 3D-in-hand HMI application software can thus connect to an ATC traffic simulator to be fed with aircraft data (position, speed, flight level, etc.) and other airspace data such as the position of fixes, airports, holding stacks, and weather. Using the ARToolkit library, the traffic patterns corresponding to a selected portion of airspace can be displayed in 3D on top of the marked object held by the user in his hand.

User interaction

The portion of airspace to be displayed in the 3D AR view is selected by the user on the 2D radar display. The coordinates of the selected area are then sent to the 3D rendering engine for displaying the selected volume. In the first version of the prototype (Year 1 of the project), the actual selection mechanism consists of defining a rectangular region of airspace in the 2D view using a standard mouse, i.e. by mouse-clicking on the screen of the 2D radar view.

Figure 15: 3D-in-your-hand interaction: the user selects a region with the mouse on the 2D screen (right) and the corresponding 3D view is shown in the AR view (left).

Advanced interaction

A more complex interaction paradigm is foreseen for the future implementation of the 3D-in-your-hand concept, in which the user can select the region of interest from the 2D radar view with a simple hand gesture, i.e. by waving his hand in front of the radar screen, without needing to use the mouse. In this way, the user can use the same hand both for selecting the region of interest and for manipulating the corresponding 3D volume appearing on his hand in the AR view. Implementing such a function, however, requires a more sophisticated approach, based on continuous tracking of the hand position not only with respect to the camera view but also with respect to the radar screen, since the software must be able to detect over which part of the radar screen the hand has been waved in order to determine the region of interest. This will require more advanced tracking hardware and software, for accurate tracking of the user's head and hand.

Hardware equipment

The hardware needed to support the AR visualisation (using the ARToolkit software) includes: (a) a see-through Head-Mounted Display, allowing the real-world images to be merged with the computer-generated graphics in the user's field of view; (b) a lightweight, portable video camera, installed on the user's head (attached to or integrated in the Head-Mounted Display).

Finally, the marker can simply be printed on a physical support (e.g. paper) and attached to any object which can be held and manipulated by the user.

Figure 16: commercial see-through HMDs used for AR visualisation: a binocular example (left) and a more compact monocular one (right).

3.2 Table Top Display (AR_Exovis_Overview plus detail display)

Description: The idea consists of a horizontal radar display (shown in Figure 17) upon which 3D information is superimposed by means of AR technology. The system superimposes three-dimensional data over the 2D radar to provide the controller with a global perspective traffic view, where aircraft and other airspace features - navigation points, weather - can be represented at their real 3D locations.

Figure 17. The Table Top Display.

Figure 18. An early prototype showing how the global traffic view would appear to controllers.

How to use it: the controller would be presented with a planar display. Upon this he can choose to have permanently displayed a global 3D traffic view, useful for observing not only horizontal positions but also vertical ones and vertical tendencies, such as rate of climb and attitude. This should ease the extraction of 3D patterns and trends, since all the relevant 3D data would be available.

To de-clutter the display, the controller simply has to turn or move his or her head, since the concept would implement the motion parallax visual cue. This will be helpful, for instance, when two aircraft appear on the same line of sight: by turning one's head it would be possible to disentangle the two aircraft traces.

Besides the main 3D view, some on-demand augmentations (Figure 19) would be available to magnify localized views of the airfield when needed. For instance, a controller can select an area near the final approach fix to see, on an augmented 3D view, whether an aircraft is able to intercept it with its current flight parameters. This magnified sub-volume would place the 3D image at the centre while projecting side views onto 2D walls to support precise distance estimation. The controller could, for instance, grasp the 3D arrangement at the centre, while checking precisely on the 2D wall the distance and the variation of distance between the aircraft and the final approach fix. This arrangement of 2D and 3D views derives from the Magic Cube Display (Rehm et al., 1998) and the Magic Mirror Display (Konig, 1999), reviewed in the D1.1: Innovation and Consolidation Report, adopted in neuro-imaging to show areas of activation of the human brain in 3D together with their positions on side views arranged around the main 3D view.

The same concept applies to any other selected area of the airspace. For instance, a controller might want to: check whether a given aircraft is able to level off at an assigned flight level after a climb (or descent); visualize the 3D path of an aircraft close to terrain; monitor one or more holding stacks; or access a pilot's point of view during severe weather conditions (Figure 20), when it might be helpful to assist the aircraft in formulating alternative plans to navigate around the restricted volume. All these airspace features can be included and magnified in a sub-volume such as the one described above.
Other augmentations might consist of an augmented measuring tool (see Figure 22) to support precise estimation of distances and angles. Such a tool could be available to the user on demand.

Figure 19. An example of an augmentation applied to the final approach fix. By moving the selection area it is possible to select and magnify other portions of the airspace, e.g. a holding stack or terrain.

Note that any augmentation would be placed beside the main 3D view, thus avoiding covering any part of the global 3D view. At the same time, the use of graphical links between the borders of the augmentation and the corresponding corners of the 2D table selection area aims at favouring the identification of which part of the 3D global view the augmented local area belongs to, thus minimizing disruptions, such as loss of orientation or of aircraft positions, when moving from one view to the other. This approach of graphically linking different views of a display is based on the Overview plus Detail data arrangement technique (see for example Baudish, Good, & Stewart, 2001) reviewed in D1.1 (Rozzi et al., 2007).

Figure 20. An augmentation can provide the controller with a perspective view (upper left). When an aircraft has to face a cumulonimbus, the controller can open the pilot's perspective view to better support the pilot in formulating an alternative course of action.

Figures 21, 22: A static mock-up showing how the augmentation would appear to a viewer. On the left the user is accessing a pilot's viewpoint. On the right the user is holding a side view of an aircraft approaching the final approach fix. Selection of aircraft can be achieved by means of a paddle (not represented in the picture).

Figure 23. Advanced symbology applied to the Table Top Display.

Figure 23 illustrates an alternative symbology for the Table Top Display, which associates relevant aircraft data with different graphical features. The triangles represent the aircraft's horizontal position and direction. The length of the drop line indicates height information. The arrow indicates the attitude, i.e. whether the aircraft is climbing or descending. This approach differs from realistic aircraft representation and should be beneficial for two main reasons. First, it should make it easier to distinguish, for instance, between aircraft at low and high altitudes, or between descending and climbing aircraft. This data can be extracted directly from the symbology, for instance by comparing the lengths of the drop lines or the directions of the arrows. This would not be possible using an alphanumerical format, e.g. labels, whose comparison requires high-level cognitive processing. It is worth noting that to assess altitude separation precisely this format might still be necessary, as small differences cannot be appreciated from graphical data only. Secondly, the symbology splits altitude, horizontal position, direction, and attitude across different graphical features. This reduces the confusion typical of 3D icons, which confound these ATC data into a single graphical feature, thus creating conditions for errors, as observed by Smallman et al. (2001a) (see Figure 24).

Figure 24. Which aircraft is descending? Which aircraft is headed east? The aircraft on the left is in level flight headed southwest. The aircraft on the right is descending headed east. 3D iconic representation can code physically different aircraft similarly, hampering a precise estimation of individual analogical attributes (Smallman, John, Oonk, & Cowen, 2001a).

Relationship with the Combination Display Framework: The concept implements the Exo-Vis display type, since it makes use of a reference 3D global traffic view, from which enlarged 3D sub-volumes and 2D walls are extracted on demand. Further, it implements the Overview+Detail visual technique: global data and local data are visible at the same time, and graphical links are used to minimize the perceptual effort when moving from one view, e.g. the global table top view, to another, e.g. a local 3D traffic view.

Envisaged benefits:

(a) The controller would be presented with a 3D global view of the sector. This would make it possible to support 3D pattern perception, i.e. how traffic is moving in 3D space rather than just on a horizontal plane;

(b) Localized 3D traffic views would enable local assessments without losing track of the global view, which remains available in front of the controller, and without getting lost, since graphical enhancements make clear the link between the local and the global views. This represents a significant advantage over 3D-only displays, which often result in the user getting lost or having to navigate a camera;

(c) The controller would still be able to monitor the 2D global traffic. The concept builds 3D information on top of the 2D information, without removing the latter, which remains available on the surface of the table top.

Envisaged drawbacks: Having the radar horizontal might hamper correct distance estimation, and relative distances can be difficult to estimate. This could require using a parabolic display.

3.2.1 Concept Implementation

The Table Top Display concept is implemented by means of Augmented Reality (AR) techniques. The technologies used for developing the prototype are the same as those used for the 3D-in-Your-Hand concept. In particular, the ARToolkit software library (with the C/C++ programming language) is utilized for performing the Augmented Reality (AR) visualization needed to implement the concept. Please refer to the 3D-in-Your-Hand section for details.

Differences from the 3D-in-Your-Hand concept

From the implementation point of view, there are some minor differences between the Table Top Display and the 3D-in-Your-Hand. The main difference is that the optical (fiducial) markers are no longer located on a hand-held object (as in the 3D-in-Your-Hand), but are placed on a fixed horizontal surface, i.e. a table designated to support this kind of visualisation. Users can look at the horizontal table from different points of view and consistently see the 3D airspace representation displayed over it.

Figure 25: AR Table Top: the 3D airspace is represented in the AR view on top of a marked planar surface.

Interaction

The user can select specific aircraft or regions of the airspace in order to visualize additional information. The coordinates of the selected area are then sent to the 3D rendering engine for displaying the selected volume. In the first version of the prototype (Year 1 of the project), the actual selection mechanism consists of defining a rectangular region of airspace in the 2D view using a standard mouse, i.e. by mouse-clicking on the screen of the 2D radar view (which is still provided and accessible to the user when needed).

Advanced interaction

A more complex interaction paradigm is foreseen for the future implementation of the AR Table Top concept, in which the user can select a region of interest directly from the 3D view with a simple hand gesture, e.g. by picking an aircraft with his fingers or by waving his hand over a specific region. In this way, the user is not forced to use the mouse on the 2D radar screen, but can fully operate on the AR Table Top view. Implementing such a function, however, requires a more sophisticated approach, based on continuous tracking of the hand position with respect to the 3D view, since the software must be able to detect over which part of the view the hand has been waved in order to determine the region of interest. This will require more advanced tracking hardware and software, for accurate tracking of the user's head and hand.

3.3 Skyscraper Display

Description: Like a strict 2D radar, this concept presents a top-down 2D view of the sector. However, over this view the user can select any localised area of interest and see it in 3D on demand. The selected local area shows aircraft complete with their drop lines as seen from above. This concept is similar to the original PiP Display (an acronym for Picture within a Picture Display) developed during the AD4 project (Rozzi et al., 2006). The perspective view in the localized viewport looks at the scene from above (i.e. it has virtually the same viewpoint and viewing direction as the surrounding 2D view), but with the difference that this view is rendered according to a perspective model with a field of view of about degrees. The surrounding 2D view is instead a standard orthographic projection, i.e. not perspective, so that the controller can still monitor the global traffic movements. In addition, by enabling stereoscopic (binocular) viewing in the local 3D view, altitude information can be more intuitively appreciated: through stereoscopic depth perception, the controller has the impression that the airplanes come out of the screen according to their altitude. Continuity is also preserved, because both views look at the scene from above, i.e. from the same viewpoint and with the same viewing direction.

Figure 26. The image shows the aircraft elevation data appearing only in the selected area. The remaining part of the display is 2D.

Figures 27, 28. The Skyscraper Display in operation.

How to use it: The controller would be presented with a 2D radar view (Figure 27). Over this view he/she can select an area of interest to examine in 3D (see Figure 28). Here, aircraft appear in 3D with their drop lines as seen from a top viewpoint, thus resembling a group of skyscrapers seen from above. The 3D information is added in the same frame of reference. As a result, the transition from 2D to 3D avoids users getting lost, since the 3D data is added on top of the 2D, i.e. the drop lines emerge from the aircraft blips. This is a radical improvement over the original PiP display, which presented the 3D picture from a very different point of view, thus introducing severe problems of continuity and disorientation.

The sequence shown in Figures 29, 30, and 31 shows the visual effect resulting from moving the selected 3D area. Aircraft A appears in Figure 29 with its actual elevation drop line. By moving the selected 3D area, the aircraft's drop line changes orientation because of the translation of the top camera viewpoint, which is not portrayed but is placed at the centre of the selected area. When aircraft A is excluded from the selection it turns into a 2D blip (Figure 30). At the same time, the translation of the selection now includes aircraft B, whose drop line is now displayed (Figure 31).

Figure 29. Aircraft A is displayed in the selection area with its drop line. The red arrow indicates the translation movement.

Figure 30. After the selected area is moved, the drop line of aircraft A changes orientation due to the translation of the camera viewpoint, placed at the centre of the selection area.

Figure 31. Aircraft B is now included in the selection area and is therefore displayed in 3D. Meanwhile, aircraft A flattens out as it is outside the selection area and is part of the 2D radar.

Envisaged Benefits: (a) Analogical representations of altitude and attitude information are available to the controller; (b) The 3D image is shown over the 2D information in the same frame of reference. New information is thus added over the 2D view; there is no replacement as in the original PiP concept. This makes it easy to match the 3D information with the corresponding 2D information, and avoids getting lost when moving from 2D to 3D and vice versa; (c) The 2D global traffic is still available, making it possible to preserve awareness of overall traffic.

Relationship with the Combination Display framework: the concept implements an integrated display, since the 3D is shown over the 2D view in correspondence with the area it belongs to, in the same frame of reference.

Envisaged Drawbacks: Possible errors in estimating altitudes are anticipated. The length of a drop line varies depending on the position of the aircraft in relation to the camera view: because of the perspective projection, drop lines of aircraft close to the border appear longer than drop lines of the same length displayed at the centre of the selection.

Concept Implementation

The skyscraper display prototype is implemented using standard software technologies for 3D visualisation.
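The foreshortening drawback above can be seen directly in the projection arithmetic. The sketch below, with hypothetical names and a simplified pinhole camera looking straight down from height h, computes the on-screen length of a vertical drop line; it is an illustration of the geometric effect, not the project's rendering code.

```cpp
#include <cassert>
#include <cmath>

// Screen-space length of a vertical drop line under a pinhole camera
// looking straight down from height h (all names are illustrative).
// The drop line spans altitudes [0, alt] at horizontal distance r from
// the point directly below the camera; with focal length f, a point at
// height z projects to screen radius f * r / (h - z).
double droplineScreenLength(double f, double h, double r, double alt) {
    double topRadius    = f * r / (h - alt); // aircraft end, closer to the camera
    double bottomRadius = f * r / h;         // ground end of the drop line
    return topRadius - bottomRadius;
}
```

Evaluating the function shows that a drop line directly under the camera has zero apparent length, while identical drop lines grow on screen as they approach the border of the selection, which is exactly the estimation hazard noted above.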

In particular, the application is developed in C++ using the OpenGL graphics API.

Stereo view

Stereoscopic imaging is any technique capable of creating the illusion of depth in an image. The illusion of depth in a two-dimensional image is created by presenting a slightly different image to each eye. The left eye image represents the view from the left eye viewpoint, while the right eye image represents the view from the right eye viewpoint; in other words, the left and right eye views have two slightly different perspectives on the same scene. The brain automatically processes and merges the two views, obtaining a single representation which provides the sense of depth. Stereoscopic visualisation can be obtained using one of the following techniques: (a) anaglyph stereo, using red and green (or red and cyan) glasses; (b) active stereo, using special shutter glasses; (c) auto-stereoscopic monitors (also known as 3D displays). The current implementation of the concept prototype supports anaglyph stereo.

Anaglyph stereo

Anaglyph stereo is a relatively simple technique consisting in generating the left and right eye views on a standard monitor and applying a colour filter to each view, typically a red filter for the right eye and a cyan filter for the left. The colour filtering is realised by the software application using standard OpenGL colour-masking techniques. Special two-colour glasses must then be used, where the two lenses (left and right) filter the light so that only specific colour components are perceived. The left lens only lets cyan components pass (in some cases green), while the right lens only lets red components pass. Therefore, by wearing such glasses the user's left eye only perceives the left view (cyan) while the right eye only perceives the right view (red). The brain automatically combines and merges the two views, obtaining the stereoscopic effect.
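As a rough sketch of the two render passes this implies (names and the inter-ocular value are ours, not the project's): each eye renders the scene from a camera shifted half the inter-ocular distance sideways, with an RGB write mask that the renderer would apply via glColorMask() before drawing that eye's view.

```cpp
#include <cassert>
#include <cmath>

// One colour-write mask per eye: the left pass writes only green and
// blue (cyan), the right pass writes only red, matching red/cyan
// glasses with cyan on the left lens as described in the text.
struct ColourMask { bool r, g, b; };

// Parameters of one anaglyph render pass (illustrative sketch).
struct EyePass {
    double cameraOffsetX; // lateral camera shift along the view-plane x axis
    ColourMask mask;      // channels this pass is allowed to write
};

EyePass leftEyePass(double interOcular) {
    return { -interOcular / 2.0, { false, true, true } }; // cyan components
}

EyePass rightEyePass(double interOcular) {
    return { +interOcular / 2.0, { true, false, false } }; // red component
}
```

In a render loop, each pass would set its mask (e.g. glColorMask for the right eye enabling only red), draw the full scene from the offset camera, and leave both images superimposed in the same frame buffer.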

Figure 32: Red-cyan glasses for anaglyph stereo visualisation.

Figure 33: Red-cyan glasses filter the left and right eye views, so that each eye sees only one of the two views.

Active stereo

Active stereo is a more advanced technique for stereoscopic visualisation, consisting in displaying the left and right eye views as an alternating sequence (left-right-left-right...) on the screen. The frequency of alternation is typically very high, usually 60Hz. The user must wear a pair of special glasses, so-called shutter glasses, which are synchronised with the screen and which alternately close one lens while opening the other (using special crystals) at the same frequency as the screen's alternating sequence. When the left lens is open, the right one is closed and vice versa. Therefore, the user only sees one image at a time (left or right). Due to the high frequency of this alternation, the brain perceives the two images as simultaneous and merges them, obtaining the stereoscopic view.
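The alternation logic is simple enough to state in code. The following sketch (our own illustrative names) maps a frame index to the eye view on screen and to the shutter state of each lens, keeping the two strictly in phase as the text requires.

```cpp
#include <cassert>

// Which eye's view occupies the screen on a given frame of the
// alternating left-right sequence (illustrative sketch).
enum class Eye { Left, Right };

Eye frameEye(unsigned long frameIndex) {
    return (frameIndex % 2 == 0) ? Eye::Left : Eye::Right;
}

// The shutter glasses open exactly the lens matching the on-screen
// view; the other lens is closed, so each eye only ever sees its view.
bool leftLensOpen(unsigned long frameIndex)  { return frameEye(frameIndex) == Eye::Left; }
bool rightLensOpen(unsigned long frameIndex) { return frameEye(frameIndex) == Eye::Right; }
```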

Figure 34: Shutter glasses for active stereo visualisation. The figure also shows an infrared device used to synchronise the glasses with the computer screen.

Auto-stereoscopic displays

Auto-stereoscopic displays are capable of visualising stereo images without requiring the user to wear any glasses; the user can perceive the sense of depth with the naked eye, simply sitting in front of the monitor. Auto-stereoscopic displays show two different images at the same time, one for the left eye and one for the right eye; however, the screen images are filtered by a physical layer which acts as a parallax barrier, so that each eye only sees one of the two images (left or right).

3.4 Stereoscopic Display

Description: A 2D monitor that can be turned into an immersive 3D view seen from above. In this way the controller perceives a top view of the traffic while feeling immersed in it.

How to use it: This concept enables a controller to monitor a standard 2D view (Figure 35) and dive into it when needed by activating a stereoscopic visualisation (Figure 36). The 3D view appears from a top viewpoint, thus maintaining the same frame of reference as the 2D view. This avoids identification problems when moving from one view to the other, while making it possible to see 3D information such as altitude drop lines. The controller would be totally immersed in a 3D traffic scene, as if immersing his or her head into a fish tank.

Figure 35 and 36: Mono and stereoscopic representation of a simplified airspace.

Envisaged Benefits: (a) The controller is presented with a 3D global view of the sector, supporting the recognition of 3D patterns; (b) No disorientation or identification problems when moving from 2D to 3D, since 2D and 3D appear in the same frame of reference; (c) The 2D global traffic is still available, making it possible to control overall traffic.

Relationship with the Combination Display framework: the concept implements an integrated display, since the 3D picture is shown over the 2D in the same frame of reference.

Envisaged Drawbacks: Possible limitations in precise distance estimation are anticipated. Because of the perspective projection, altitude drop lines at the centre of the display would be very short compared to drop lines of the same length located at the edge of the display.

Concept Implementation

The same technology as for the Skyscraper Display would also apply to the implementation of the Stereoscopic Display.

3.5 3D analogical symbols in 2D

Description: An alternative approach to integrating 3D in a 2D radar consists in representing 3D information by means of analogical symbols on 2D displays.

Rationale: The work of Smallman and St. John (Smallman, St. John, Oonk, & Cowen, 2001b; Smallman, Oonk, St. John, & Cowen, 2001) provides the rationale for this approach. They found that symbolic 3D information on a 2D display supports faster visual search (identification) performance than 3D iconic information on a 3D display. Their work showed that information availability, and not only spatial format (3D vs 2D), can improve human performance. One of the problems with 3D icons is disambiguating heading and attitude, which are confounded in the same display item. For a deeper discussion the reader can refer to the original article or to the corresponding sections of deliverable D1.1.

How to use it: The approach shown below integrates 3D information of pitch and roll on a 2D aircraft track. A triangular shape portrays the aircraft as seen from above. Within this triangle a small segment indicates the pitch (longitudinal movements of the segment) and bank (lateral movements of the segment), or a combination of both. By means of this information about the present state of an aircraft, the controller would be able to assess the manoeuvre being implemented, i.e. what the aircraft is doing, and then to anticipate its outcome, e.g. deviation from or consistency with the intended path. This information would be very useful during intense traffic, when the controller has to vector several aircraft at the same time and the available margins of manoeuvre are very tight (Rozzi et al., 2007).

Benefits for the controller: (a) 3D information would be available in the same frame of reference as the 2D display. This should avoid all problems related to continuity, i.e. identifying which aircraft is which when moving from one display to the other, and to orientation, i.e. understanding from where the scene is being observed; (b) The display should support faster search performance. Thus it should be easier for an observer to distinguish aircraft descending South West from those climbing North East.

Relationship with the Combination Display framework: The 3D analogical symbols in 2D leverage information availability on 2D displays, similarly to the Symbicons proposed by Smallman (2001). This solution is an alternative to the use of the F+C display technique. Further, the display can be classified as In Place, since the 3D information is provided in the same frame of reference as the 2D.

Figure 37. 3D information of pitch and roll integrated in a 2D display.

Envisaged Drawback: Optimal readability of the symbols might require a high resolution display.

Concept Implementation

The concept is intended as a desktop system making use of a standard monitor. It would be implemented using C++ and OpenGL. Interaction is based on mouse and keyboard.
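One plausible mapping from aircraft attitude to the position of the small segment inside the triangle icon can be sketched as follows. The names, angle range, and pixel scaling are hypothetical choices of ours, not values from the project: the segment moves longitudinally with pitch and laterally with bank, clamped to the icon's half-size so extreme attitudes do not push it outside the symbol.

```cpp
#include <cassert>
#include <cmath>

// Displacement of the attitude segment inside the 2D triangle symbol
// (along = longitudinal/pitch axis, across = lateral/bank axis).
struct Offset { double along; double across; };

static double clampTo(double v, double lim) {
    return v > lim ? lim : (v < -lim ? -lim : v);
}

// Linear mapping from attitude angles (degrees) to pixel offsets:
// maxAngleDeg maps to the icon's half-size; larger angles saturate.
Offset symbolSegmentOffset(double pitchDeg, double bankDeg,
                           double halfSizePx, double maxAngleDeg) {
    double k = halfSizePx / maxAngleDeg; // pixels per degree (illustrative)
    return { clampTo(pitchDeg * k, halfSizePx),
             clampTo(bankDeg  * k, halfSizePx) };
}
```

With this choice, a level aircraft keeps the segment centred, and a combined pitch-and-bank manoeuvre moves it diagonally, which is the "combination of both" case described above.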

3.6 Magic Plane Display

Description: The Magic Plane Display enables a controller to operate with a standard 2D display while accessing a local 3D picture displaying traffic information relative to an aircraft of interest. The concept exploits a transition mechanism intended to minimise disorientation when moving from the 2D view to the 3D view and vice versa.

How it works: While working on the 2D display, the controller can select an aircraft of interest, for example aircraft 5 in Figure 39. The area surrounding this aircraft (delimited in the picture by the corners ABEF) then flips over, producing a local 3D view displaying the flight level of the selected aircraft 5 (see Figure 40) and revealing the other aircraft in their real 3D locations. This allows checking the altitudes and 3D directions of the surrounding traffic in relation to the selected aircraft. After completing the inspection, the controller can go back to the standard 2D image. The slow transition of the selected area from 2D to 3D allows the user to preserve orientation. An alternative use would consist in inspecting which is the first available flight level above or below a restricted portion of the airspace.

Figure 38. The display looks like a standard 2D display before any selection is made.

Figure 39. The controller now selects aircraft 5. A selection area appears around the selected aircraft.

Figure 40. The selection area flips over, rotating around the CD segment. This results in a 3D image where the plane made by the corners ABEF corresponds to the flight level of aircraft 5, thus revealing the relative altitudes of aircraft 1, 2, 3, and 4.

Relationship with the Combination Display framework: This concept makes 3D available on demand on a selected portion of a 2D display. It can therefore be classified as an In Place display. Further, it implements the Rapid Zooming F+C display technique, since the 3D information replaces the corresponding portion of the 2D display.

Envisaged benefits for the controller: The user can quickly explore a 3D portion of the airspace with minimum distortion.

Envisaged Drawbacks: Might disrupt global traffic perception, especially during vectoring, if the transition time is not properly tuned.

Concept Implementation

The concept is intended as a desktop system making use of a standard monitor. It would be implemented using C++ and OpenGL. Interaction is based on mouse and keyboard.
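The slow 2D-to-3D transition around the CD edge could be animated as below. This is a minimal sketch under our own assumptions (a smoothstep easing and a caller-supplied target tilt); the project does not specify the easing curve, only that the transition must be gradual enough to preserve orientation.

```cpp
#include <cassert>
#include <cmath>

// Tilt angle of the ABEF selection plane about the CD edge at
// normalised transition time t in [0, 1]. A smoothstep ease keeps
// the rotation gentle at both ends so the user can follow the flip.
double flipAngleDeg(double t, double targetDeg) {
    if (t <= 0.0) return 0.0;        // flat: ordinary 2D display
    if (t >= 1.0) return targetDeg;  // fully flipped: local 3D view
    double s = t * t * (3.0 - 2.0 * t); // smoothstep easing (illustrative)
    return targetDeg * s;
}
```

Each rendered frame would evaluate this angle and rotate the selection's geometry about CD before drawing, so the 3D picture grows out of the 2D plane rather than replacing it abruptly.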

3.7 ATC Exo Vis Display

Description: The ATC Exo-Vis Display consists in a three-dimensional space comprising all the information needed for the ATC task. This solution aims at reducing the effort required for the controller to search for information across different information resources.

How it works: As shown in Figure 41, the user would be presented with a global three-dimensional information space including views such as the main radar, presenting aircraft blips with the corresponding labels (not represented in the figure), one or more local 3D pictures, and electronic strips, with the possibility of adding other views, e.g. a separation display. This configuration would make it possible to relate information easily across views. For example, when an electronic strip is selected, a line links it with the corresponding aircraft on the main 2D radar (see Figure 41), thus pointing the controller to the specific aircraft and reducing the visual effort for scanning and locating the aircraft, especially on a cluttered display. The same would apply, for instance, when selecting aircraft on the 3D picture: these could be linked to the corresponding aircraft on the main 2D radar.

Figure 41. Exo Vis Display for ATC. The 2D radar (centre of the display), the local 3D picture (upper left), and the electronic strips (on the left) appear within a single 3D information space. Aircraft labels are not represented in the main 2D radar.

Other additional views could be integrated, for instance a separation display (see Figure 42) such as those defined by Falzon (1982). On such a display, vertical and horizontal separations are reported on the vertical and horizontal axes respectively. The resulting graph represents the separation associated with each pair of aircraft in the airspace. A separation moving quickly towards the origin signals an imminent loss of safety separation.
Once two aircraft approach the lower left corner of the separation display (see Figure 43), they could be visually linked to the corresponding aircraft on the 2D radar, thus sparing the controller the visual effort of locating the aircraft on the main radar view (see Figure 44).

Figure 42. An example of a separation display inspired by the original work of Falzon (1982). This is but one example of a view that could be accommodated into the ATC ExoVis Display.

Figure 43, 44: An example of data correlation across the separation and main radar views.

Relationship with the Combination Display framework: The Multiple Coordinated Views (McV) reviewed in D1.1: Innovation and Consolidation Report inspired the ATC Exo Vis Display. McV represent an interaction technique useful to correlate data
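The data behind a Falzon-style separation display can be sketched as below: every aircraft pair becomes one point (horizontal separation, vertical separation), and a pair approaching the origin on both axes is a candidate for highlighting and cross-view linking. All names, units, and thresholds here are illustrative choices of ours, not operational values.

```cpp
#include <cassert>
#include <cmath>
#include <string>
#include <vector>

// Minimal aircraft state: plan position in NM and altitude in feet.
struct Aircraft { std::string id; double xNm, yNm, altFt; };

// One point on the separation display: a pair of aircraft with their
// horizontal (great-circle distance approximated as planar) and
// vertical separations.
struct Separation { std::string a, b; double horizontalNm, verticalFt; };

std::vector<Separation> separations(const std::vector<Aircraft>& traffic) {
    std::vector<Separation> out;
    for (size_t i = 0; i < traffic.size(); ++i)
        for (size_t j = i + 1; j < traffic.size(); ++j) {
            double dx = traffic[i].xNm - traffic[j].xNm;
            double dy = traffic[i].yNm - traffic[j].yNm;
            out.push_back({ traffic[i].id, traffic[j].id,
                            std::sqrt(dx * dx + dy * dy),
                            std::fabs(traffic[i].altFt - traffic[j].altFt) });
        }
    return out;
}

// A pair is in conflict only when BOTH separations fall below the
// minima, i.e. the point sits in the lower left corner of the display.
bool conflict(const Separation& s, double minHNm, double minVFt) {
    return s.horizontalNm < minHNm && s.verticalFt < minVFt;
}
```

A pair flagged by conflict() is exactly the one the display would visually link back to the corresponding blips on the main 2D radar.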

across different views, so that, for instance, a user can select a set of data in one view and find the corresponding data in a separate view.

Benefits for the controller: (a) Reduced time to match information from one view to the other, since the views appear in a single 3D space and data is correlated by means of graphical elements; (b) Reduced information management tasks.

Expected drawbacks: A spaghetti problem might result whenever several items from different views are linked at the same time.

Concept Implementation

The concept is intended as a desktop system. It would be implemented using C++ and OpenGL, with interaction based on mouse and keyboard. The full ExoVis Display, however, would require the use of large screens and/or immersive or semi-immersive projection technologies, typically used in the Virtual Reality domain. We provide hereafter an overview of existing commercial solutions that could be used: (a) large flat screens; (b) Powerwall display; (c) curved display; (d) spherical display; (e) CAVE display.

Large flat screens

Large flat screens (with a diagonal of 40 inches or more) could be used to provide a high resolution screen covering a considerable part of the user's field of view. Auto-stereoscopic displays could also be considered in order to add 3D (stereo) visualisation capabilities.

Powerwall

The Powerwall™ was one of the first multi-channel visualisation walls, used for collaborative analysis of scientific data. The high resolution and large screen size were achieved through the combination of 2 x 2 projectors in a rear-screen configuration. The Powerwall has led the way to a large family of comparable display systems such as the CAD-Wall (for 1:1 scale visualisation of CAD data), the Heye-Wall, and many others.
The CRT projectors used in the Powerwall have now been replaced by bright light-valve projectors using either active or passive stereo, blended into a seamless display using a 1- or 2-dimensional array of channels. As projector sizes have shrunk considerably, it is now possible to create very dense arrays, thereby achieving extremely small pixel sizes. Additionally, computing power has become so affordable that addressing large numbers of pixels is no longer

restricted to supercomputers. The fact that one can approach the screen without casting a shadow has made this display type highly suitable for interactive applications.

Figure 45: The PowerWall from Fakespace Systems.

Curved

The degree of immersion can be increased significantly by wrapping the screen around the users. For this reason, systems covering 150° or more, up to the whole field of view, are quite common. In mono configuration, this display type is widely used for various simulators (car, cockpit, ship's bridge): it is easy to place a mock-up in the centre, and actual stereoscopic cues are often irrelevant due to the large distance to the virtual objects displayed. The combination of curved screens and stereo is less common. In the case of passive stereo, a large polarisation-maintaining screen needs to be used, its metallic reflection characteristics complicating uniform illumination. In the case of active stereo (nowadays usually done with 3-chip DLP projectors), the complication lies in the fact that ultra-fast display at double refresh rates needs to be combined with soft edge blending and geometry correction (to compensate for the curved screen shape). This explains why this combination is usually found in the higher segment of the market.

Figure 46: Example of a curved display.
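The soft edge blending mentioned above can be illustrated with a simple ramp. In the overlap band between two adjacent projector channels, each projector's intensity is attenuated so that the two contributions sum to full intensity in linear light; this sketch uses a linear ramp (our own simplification; real systems additionally gamma-correct the ramp and refine its shape).

```cpp
#include <cassert>
#include <cmath>

// Intensity weight of THIS projector at normalised position p across
// the overlap band (0 = this projector's side, 1 = the neighbour's
// side). The neighbour, whose coordinate runs the opposite way, gets
// blendWeight(1 - p), so the two weights sum to 1 everywhere.
double blendWeight(double p) {
    if (p < 0.0) p = 0.0; // outside the band: full intensity
    if (p > 1.0) p = 1.0; // fully handed over to the neighbour
    return 1.0 - p;
}
```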


More information

Effective Iconography....convey ideas without words; attract attention...

Effective Iconography....convey ideas without words; attract attention... Effective Iconography...convey ideas without words; attract attention... Visual Thinking and Icons An icon is an image, picture, or symbol representing a concept Icon-specific guidelines Represent the

More information

Introduction to Psychology Prof. Braj Bhushan Department of Humanities and Social Sciences Indian Institute of Technology, Kanpur

Introduction to Psychology Prof. Braj Bhushan Department of Humanities and Social Sciences Indian Institute of Technology, Kanpur Introduction to Psychology Prof. Braj Bhushan Department of Humanities and Social Sciences Indian Institute of Technology, Kanpur Lecture - 10 Perception Role of Culture in Perception Till now we have

More information

Environmental control by remote eye tracking

Environmental control by remote eye tracking Loughborough University Institutional Repository Environmental control by remote eye tracking This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation: SHI,

More information

Virtual Reality Devices in C2 Systems

Virtual Reality Devices in C2 Systems Jan Hodicky, Petr Frantis University of Defence Brno 65 Kounicova str. Brno Czech Republic +420973443296 jan.hodicky@unbo.cz petr.frantis@unob.cz Virtual Reality Devices in C2 Systems Topic: Track 8 C2

More information

The Application of Human-Computer Interaction Idea in Computer Aided Industrial Design

The Application of Human-Computer Interaction Idea in Computer Aided Industrial Design The Application of Human-Computer Interaction Idea in Computer Aided Industrial Design Zhang Liang e-mail: 76201691@qq.com Zhao Jian e-mail: 84310626@qq.com Zheng Li-nan e-mail: 1021090387@qq.com Li Nan

More information

Geo-Located Content in Virtual and Augmented Reality

Geo-Located Content in Virtual and Augmented Reality Technical Disclosure Commons Defensive Publications Series October 02, 2017 Geo-Located Content in Virtual and Augmented Reality Thomas Anglaret Follow this and additional works at: http://www.tdcommons.org/dpubs_series

More information

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS

ENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS BY SERAFIN BENTO MASTER OF SCIENCE in INFORMATION SYSTEMS Edmonton, Alberta September, 2015 ABSTRACT The popularity of software agents demands for more comprehensive HAI design processes. The outcome of

More information

TEPZZ A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: G01S 7/40 ( ) G01S 13/78 (2006.

TEPZZ A_T EP A1 (19) (11) EP A1 (12) EUROPEAN PATENT APPLICATION. (51) Int Cl.: G01S 7/40 ( ) G01S 13/78 (2006. (19) TEPZZ 8789A_T (11) EP 2 87 89 A1 (12) EUROPEAN PATENT APPLICATION (43) Date of publication: 08.04.201 Bulletin 201/1 (1) Int Cl.: G01S 7/40 (2006.01) G01S 13/78 (2006.01) (21) Application number:

More information

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Joan De Boeck, Karin Coninx Expertise Center for Digital Media Limburgs Universitair Centrum Wetenschapspark 2, B-3590 Diepenbeek, Belgium

More information

INTERACTIVE SKETCHING OF THE URBAN-ARCHITECTURAL SPATIAL DRAFT Peter Kardoš Slovak University of Technology in Bratislava

INTERACTIVE SKETCHING OF THE URBAN-ARCHITECTURAL SPATIAL DRAFT Peter Kardoš Slovak University of Technology in Bratislava INTERACTIVE SKETCHING OF THE URBAN-ARCHITECTURAL SPATIAL DRAFT Peter Kardoš Slovak University of Technology in Bratislava Abstract The recent innovative information technologies and the new possibilities

More information

ReVRSR: Remote Virtual Reality for Service Robots

ReVRSR: Remote Virtual Reality for Service Robots ReVRSR: Remote Virtual Reality for Service Robots Amel Hassan, Ahmed Ehab Gado, Faizan Muhammad March 17, 2018 Abstract This project aims to bring a service robot s perspective to a human user. We believe

More information

CSC 170 Introduction to Computers and Their Applications. Lecture #3 Digital Graphics and Video Basics. Bitmap Basics

CSC 170 Introduction to Computers and Their Applications. Lecture #3 Digital Graphics and Video Basics. Bitmap Basics CSC 170 Introduction to Computers and Their Applications Lecture #3 Digital Graphics and Video Basics Bitmap Basics As digital devices gained the ability to display images, two types of computer graphics

More information

CHAPTER 1 INTRODUCTION

CHAPTER 1 INTRODUCTION 1 CHAPTER 1 INTRODUCTION In maritime surveillance, radar echoes which clutter the radar and challenge small target detection. Clutter is unwanted echoes that can make target detection of wanted targets

More information

Adobe Photoshop CC 2018 Tutorial

Adobe Photoshop CC 2018 Tutorial Adobe Photoshop CC 2018 Tutorial GETTING STARTED Adobe Photoshop CC 2018 is a popular image editing software that provides a work environment consistent with Adobe Illustrator, Adobe InDesign, Adobe Photoshop,

More information

INTERACTIVE 3D VIRTUAL HYDRAULICS Using virtual reality environments in teaching and research of fluid power systems and components

INTERACTIVE 3D VIRTUAL HYDRAULICS Using virtual reality environments in teaching and research of fluid power systems and components INTERACTIVE 3D VIRTUAL HYDRAULICS Using virtual reality environments in teaching and research of fluid power systems and components L. Pauniaho, M. Hyvonen, R. Erkkila, J. Vilenius, K. T. Koskinen and

More information

H2020 RIA COMANOID H2020-RIA

H2020 RIA COMANOID H2020-RIA Ref. Ares(2016)2533586-01/06/2016 H2020 RIA COMANOID H2020-RIA-645097 Deliverable D4.1: Demonstrator specification report M6 D4.1 H2020-RIA-645097 COMANOID M6 Project acronym: Project full title: COMANOID

More information

Interior Design using Augmented Reality Environment

Interior Design using Augmented Reality Environment Interior Design using Augmented Reality Environment Kalyani Pampattiwar 2, Akshay Adiyodi 1, Manasvini Agrahara 1, Pankaj Gamnani 1 Assistant Professor, Department of Computer Engineering, SIES Graduate

More information

COLLABORATION WITH TANGIBLE AUGMENTED REALITY INTERFACES.

COLLABORATION WITH TANGIBLE AUGMENTED REALITY INTERFACES. COLLABORATION WITH TANGIBLE AUGMENTED REALITY INTERFACES. Mark Billinghurst a, Hirokazu Kato b, Ivan Poupyrev c a Human Interface Technology Laboratory, University of Washington, Box 352-142, Seattle,

More information

EXPERIMENTAL FRAMEWORK FOR EVALUATING COGNITIVE WORKLOAD OF USING AR SYSTEM IN GENERAL ASSEMBLY TASK

EXPERIMENTAL FRAMEWORK FOR EVALUATING COGNITIVE WORKLOAD OF USING AR SYSTEM IN GENERAL ASSEMBLY TASK EXPERIMENTAL FRAMEWORK FOR EVALUATING COGNITIVE WORKLOAD OF USING AR SYSTEM IN GENERAL ASSEMBLY TASK Lei Hou and Xiangyu Wang* Faculty of Built Environment, the University of New South Wales, Australia

More information

II. Basic Concepts in Display Systems

II. Basic Concepts in Display Systems Special Topics in Display Technology 1 st semester, 2016 II. Basic Concepts in Display Systems * Reference book: [Display Interfaces] (R. L. Myers, Wiley) 1. Display any system through which ( people through

More information

Microsoft Scrolling Strip Prototype: Technical Description

Microsoft Scrolling Strip Prototype: Technical Description Microsoft Scrolling Strip Prototype: Technical Description Primary features implemented in prototype Ken Hinckley 7/24/00 We have done at least some preliminary usability testing on all of the features

More information

Haptic control in a virtual environment

Haptic control in a virtual environment Haptic control in a virtual environment Gerard de Ruig (0555781) Lourens Visscher (0554498) Lydia van Well (0566644) September 10, 2010 Introduction With modern technological advancements it is entirely

More information

Figure 1 HDR image fusion example

Figure 1 HDR image fusion example TN-0903 Date: 10/06/09 Using image fusion to capture high-dynamic range (hdr) scenes High dynamic range (HDR) refers to the ability to distinguish details in scenes containing both very bright and relatively

More information

Simulation of Water Inundation Using Virtual Reality Tools for Disaster Study: Opportunity and Challenges

Simulation of Water Inundation Using Virtual Reality Tools for Disaster Study: Opportunity and Challenges Simulation of Water Inundation Using Virtual Reality Tools for Disaster Study: Opportunity and Challenges Deepak Mishra Associate Professor Department of Avionics Indian Institute of Space Science and

More information

Aerospace Sensor Suite

Aerospace Sensor Suite Aerospace Sensor Suite ECE 1778 Creative Applications for Mobile Devices Final Report prepared for Dr. Jonathon Rose April 12 th 2011 Word count: 2351 + 490 (Apper Context) Jin Hyouk (Paul) Choi: 998495640

More information

Input devices and interaction. Ruth Aylett

Input devices and interaction. Ruth Aylett Input devices and interaction Ruth Aylett Contents Tracking What is available Devices Gloves, 6 DOF mouse, WiiMote Why is it important? Interaction is basic to VEs We defined them as interactive in real-time

More information

Novel Hemispheric Image Formation: Concepts & Applications

Novel Hemispheric Image Formation: Concepts & Applications Novel Hemispheric Image Formation: Concepts & Applications Simon Thibault, Pierre Konen, Patrice Roulet, and Mathieu Villegas ImmerVision 2020 University St., Montreal, Canada H3A 2A5 ABSTRACT Panoramic

More information

Android User manual. Intel Education Lab Camera by Intellisense CONTENTS

Android User manual. Intel Education Lab Camera by Intellisense CONTENTS Intel Education Lab Camera by Intellisense Android User manual CONTENTS Introduction General Information Common Features Time Lapse Kinematics Motion Cam Microscope Universal Logger Pathfinder Graph Challenge

More information

ISO JTC 1 SC 24 WG9 G E R A R D J. K I M K O R E A U N I V E R S I T Y

ISO JTC 1 SC 24 WG9 G E R A R D J. K I M K O R E A U N I V E R S I T Y New Work Item Proposal: A Standard Reference Model for Generic MAR Systems ISO JTC 1 SC 24 WG9 G E R A R D J. K I M K O R E A U N I V E R S I T Y What is a Reference Model? A reference model (for a given

More information

Paper on: Optical Camouflage

Paper on: Optical Camouflage Paper on: Optical Camouflage PRESENTED BY: I. Harish teja V. Keerthi E.C.E E.C.E E-MAIL: Harish.teja123@gmail.com kkeerthi54@gmail.com 9533822365 9866042466 ABSTRACT: Optical Camouflage delivers a similar

More information

Social Viewing in Cinematic Virtual Reality: Challenges and Opportunities

Social Viewing in Cinematic Virtual Reality: Challenges and Opportunities Social Viewing in Cinematic Virtual Reality: Challenges and Opportunities Sylvia Rothe 1, Mario Montagud 2, Christian Mai 1, Daniel Buschek 1 and Heinrich Hußmann 1 1 Ludwig Maximilian University of Munich,

More information

Determining Optimal Player Position, Distance, and Scale from a Point of Interest on a Terrain

Determining Optimal Player Position, Distance, and Scale from a Point of Interest on a Terrain Technical Disclosure Commons Defensive Publications Series October 02, 2017 Determining Optimal Player Position, Distance, and Scale from a Point of Interest on a Terrain Adam Glazier Nadav Ashkenazi Matthew

More information

Copyrighted Material - Taylor & Francis

Copyrighted Material - Taylor & Francis 22 Traffic Alert and Collision Avoidance System II (TCAS II) Steve Henely Rockwell Collins 22. Introduction...22-22.2 Components...22-2 22.3 Surveillance...22-3 22. Protected Airspace...22-3 22. Collision

More information

- applications on same or different network node of the workstation - portability of application software - multiple displays - open architecture

- applications on same or different network node of the workstation - portability of application software - multiple displays - open architecture 12 Window Systems - A window system manages a computer screen. - Divides the screen into overlapping regions. - Each region displays output from a particular application. X window system is widely used

More information

Immersive Simulation in Instructional Design Studios

Immersive Simulation in Instructional Design Studios Blucher Design Proceedings Dezembro de 2014, Volume 1, Número 8 www.proceedings.blucher.com.br/evento/sigradi2014 Immersive Simulation in Instructional Design Studios Antonieta Angulo Ball State University,

More information

The Mixed Reality Book: A New Multimedia Reading Experience

The Mixed Reality Book: A New Multimedia Reading Experience The Mixed Reality Book: A New Multimedia Reading Experience Raphaël Grasset raphael.grasset@hitlabnz.org Andreas Dünser andreas.duenser@hitlabnz.org Mark Billinghurst mark.billinghurst@hitlabnz.org Hartmut

More information

Multisensory Virtual Environment for Supporting Blind Persons' Acquisition of Spatial Cognitive Mapping a Case Study

Multisensory Virtual Environment for Supporting Blind Persons' Acquisition of Spatial Cognitive Mapping a Case Study Multisensory Virtual Environment for Supporting Blind Persons' Acquisition of Spatial Cognitive Mapping a Case Study Orly Lahav & David Mioduser Tel Aviv University, School of Education Ramat-Aviv, Tel-Aviv,

More information

A C A D / C A M. Virtual Reality/Augmented Reality. December 10, Sung-Hoon Ahn

A C A D / C A M. Virtual Reality/Augmented Reality. December 10, Sung-Hoon Ahn 4 4 6. 3 2 6 A C A D / C A M Virtual Reality/Augmented Reality December 10, 2007 Sung-Hoon Ahn School of Mechanical and Aerospace Engineering Seoul National University What is VR/AR Virtual Reality (VR)

More information

A Virtual Environments Editor for Driving Scenes

A Virtual Environments Editor for Driving Scenes A Virtual Environments Editor for Driving Scenes Ronald R. Mourant and Sophia-Katerina Marangos Virtual Environments Laboratory, 334 Snell Engineering Center Northeastern University, Boston, MA 02115 USA

More information

(12) Patent Application Publication (10) Pub. No.: US 2016/ A1

(12) Patent Application Publication (10) Pub. No.: US 2016/ A1 (19) United States US 2016O2538.43A1 (12) Patent Application Publication (10) Pub. No.: US 2016/0253843 A1 LEE (43) Pub. Date: Sep. 1, 2016 (54) METHOD AND SYSTEM OF MANAGEMENT FOR SWITCHINGVIRTUAL-REALITY

More information

REPORT ON THE CURRENT STATE OF FOR DESIGN. XL: Experiments in Landscape and Urbanism

REPORT ON THE CURRENT STATE OF FOR DESIGN. XL: Experiments in Landscape and Urbanism REPORT ON THE CURRENT STATE OF FOR DESIGN XL: Experiments in Landscape and Urbanism This report was produced by XL: Experiments in Landscape and Urbanism, SWA Group s innovation lab. It began as an internal

More information

HandsIn3D: Supporting Remote Guidance with Immersive Virtual Environments

HandsIn3D: Supporting Remote Guidance with Immersive Virtual Environments HandsIn3D: Supporting Remote Guidance with Immersive Virtual Environments Weidong Huang 1, Leila Alem 1, and Franco Tecchia 2 1 CSIRO, Australia 2 PERCRO - Scuola Superiore Sant Anna, Italy {Tony.Huang,Leila.Alem}@csiro.au,

More information

RADAR CHAPTER 3 RADAR

RADAR CHAPTER 3 RADAR RADAR CHAPTER 3 RADAR RDF becomes Radar 1. As World War II approached, scientists and the military were keen to find a method of detecting aircraft outside the normal range of eyes and ears. They found

More information

DEVELOPMENT OF PASSIVE SURVEILLANCE RADAR

DEVELOPMENT OF PASSIVE SURVEILLANCE RADAR DEVELOPMENT OF PASSIVE SURVEILLANCE RADAR Kakuichi Shiomi* and Shuji Aoyama** *Electronic Navigation Research Institute, Japan **IRT Corporation, Japan Keywords: Radar, Passive Radar, Passive Surveillance

More information

Guidance Material for ILS requirements in RSA

Guidance Material for ILS requirements in RSA Guidance Material for ILS requirements in RSA General:- Controlled airspace required with appropriate procedures. Control Tower to have clear and unobstructed view of the complete runway complex. ATC to

More information

FlyRealHUDs Very Brief Helo User s Manual

FlyRealHUDs Very Brief Helo User s Manual FlyRealHUDs Very Brief Helo User s Manual 1 1.0 Welcome! Congratulations. You are about to become one of the elite pilots who have mastered the fine art of flying the most advanced piece of avionics in

More information

Context-Aware Interaction in a Mobile Environment

Context-Aware Interaction in a Mobile Environment Context-Aware Interaction in a Mobile Environment Daniela Fogli 1, Fabio Pittarello 2, Augusto Celentano 2, and Piero Mussio 1 1 Università degli Studi di Brescia, Dipartimento di Elettronica per l'automazione

More information

doi: /

doi: / doi: 10.1117/12.872287 Coarse Integral Volumetric Imaging with Flat Screen and Wide Viewing Angle Shimpei Sawada* and Hideki Kakeya University of Tsukuba 1-1-1 Tennoudai, Tsukuba 305-8573, JAPAN ABSTRACT

More information

Automated Terrestrial EMI Emitter Detection, Classification, and Localization 1

Automated Terrestrial EMI Emitter Detection, Classification, and Localization 1 Automated Terrestrial EMI Emitter Detection, Classification, and Localization 1 Richard Stottler James Ong Chris Gioia Stottler Henke Associates, Inc., San Mateo, CA 94402 Chris Bowman, PhD Data Fusion

More information

A Multimodal Air Traffic Controller Working Position

A Multimodal Air Traffic Controller Working Position DLR.de Chart 1 A Multimodal Air Traffic Controller Working Position The Sixth SESAR Innovation Days, Delft, The Netherlands Oliver Ohneiser, Malte Jauer German Aerospace Center (DLR) Institute of Flight

More information

Evolution from 3D to 4D radar

Evolution from 3D to 4D radar Evolution from 3D to 4D radar MARIA GUTIERREZ (1), GERARDO ARANGUREN (1), MIGUEL RODRIGUEZ (2), JAVIER BILBAO (2), JAVIER GÓMEZ (1) (1) Department of Electronics and Telecommunications (2) Department of

More information

Chapter 2 Understanding and Conceptualizing Interaction. Anna Loparev Intro HCI University of Rochester 01/29/2013. Problem space

Chapter 2 Understanding and Conceptualizing Interaction. Anna Loparev Intro HCI University of Rochester 01/29/2013. Problem space Chapter 2 Understanding and Conceptualizing Interaction Anna Loparev Intro HCI University of Rochester 01/29/2013 1 Problem space Concepts and facts relevant to the problem Users Current UX Technology

More information

Radar and Wind Farms. Dr Laith Rashid Prof Anthony Brown. The University of Manchester

Radar and Wind Farms. Dr Laith Rashid Prof Anthony Brown. The University of Manchester Radar and Wind Farms Dr Laith Rashid Prof Anthony Brown The Microwave and Communication Systems Research Group School of Electrical and Electronic Engineering The University of Manchester Summary Introduction

More information

Augmented Reality. Virtuelle Realität Wintersemester 2007/08. Overview. Part 14:

Augmented Reality. Virtuelle Realität Wintersemester 2007/08. Overview. Part 14: Part 14: Augmented Reality Virtuelle Realität Wintersemester 2007/08 Prof. Bernhard Jung Overview Introduction to Augmented Reality Augmented Reality Displays Examples AR Toolkit an open source software

More information

Integration of surveillance in the ACC automation system

Integration of surveillance in the ACC automation system Integration of surveillance in the ACC automation system ICAO Seminar on the Implementation of Aeronautical Surveillance and Automation Systems in the SAM Region San Carlos de Bariloche 6-8 Decembre 2010

More information

5/17/2009. Digitizing Color. Place Value in a Binary Number. Place Value in a Decimal Number. Place Value in a Binary Number

5/17/2009. Digitizing Color. Place Value in a Binary Number. Place Value in a Decimal Number. Place Value in a Binary Number Chapter 11: Light, Sound, Magic: Representing Multimedia Digitally Digitizing Color Fluency with Information Technology Third Edition by Lawrence Snyder RGB Colors: Binary Representation Giving the intensities

More information

Automatic Dependent Surveillance -ADS-B

Automatic Dependent Surveillance -ADS-B ASECNA Workshop on ADS-B (Dakar, Senegal, 22 to 23 July 2014) Automatic Dependent Surveillance -ADS-B Presented by FX SALAMBANGA Regional Officer, CNS WACAF OUTLINE I Definition II Principles III Architecture

More information

AUGMENTED VIRTUAL REALITY APPLICATIONS IN MANUFACTURING

AUGMENTED VIRTUAL REALITY APPLICATIONS IN MANUFACTURING 6 th INTERNATIONAL MULTIDISCIPLINARY CONFERENCE AUGMENTED VIRTUAL REALITY APPLICATIONS IN MANUFACTURING Peter Brázda, Jozef Novák-Marcinčin, Faculty of Manufacturing Technologies, TU Košice Bayerova 1,

More information

Fig.1 AR as mixed reality[3]

Fig.1 AR as mixed reality[3] Marker Based Augmented Reality Application in Education: Teaching and Learning Gayathri D 1, Om Kumar S 2, Sunitha Ram C 3 1,3 Research Scholar, CSE Department, SCSVMV University 2 Associate Professor,

More information

Design Sketching for Space and Time

Design Sketching for Space and Time Design Sketching for Space and Time Simone Rozzi, William Wong, Paola Amaldi, Peter Woodward, and Bob Fields Middlesex University Hendon London NW4 4BT United Kingdom s.rozzi, w.wong, p.amaldi-trillo,

More information

AR Glossary. Terms. AR Glossary 1

AR Glossary. Terms. AR Glossary 1 AR Glossary Every domain has specialized terms to express domain- specific meaning and concepts. Many misunderstandings and errors can be attributed to improper use or poorly defined terminology. The Augmented

More information

Advancements in Gesture Recognition Technology

Advancements in Gesture Recognition Technology IOSR Journal of VLSI and Signal Processing (IOSR-JVSP) Volume 4, Issue 4, Ver. I (Jul-Aug. 2014), PP 01-07 e-issn: 2319 4200, p-issn No. : 2319 4197 Advancements in Gesture Recognition Technology 1 Poluka

More information