
UNIVERSITY OF PATRAS
COMPUTER ENGINEERING AND INFORMATICS DEPARTMENT

DIPLOMA THESIS

Semantic Based Virtual Environments for Product Design

Antoniou Efstratios
AM 4150

Assistant Professor Dimitris Mourtzis
Professor Athanasios Tsakalidis

Diploma Thesis submitted to the University of Patras
Patras

Abstract

Nowadays, Virtual Reality greatly assists in the testing of products and processes that do not yet exist or exist only in a preliminary state. Actual products can be replicated in high detail inside the Virtual Environment and can provide important conclusions about their real-world counterparts. This way, adjustments and corrections can be applied well before production, and thus flaws can be removed early in (or before) the production process.

As VR systems are becoming a very important part of product design and are used in modular systems, they have to be able to cooperate with Semantic Modules in order to store data and make it available to the rest of the modules of the overall system. The integration of Semantics adds a Knowledge Management component to a system and makes it reusable and flexible. There have been many different approaches to combining VR environments with Semantic Knowledge-Management systems. Some focus on combined systems with heavy-load server applications and lightweight clients, while others focus on agent-based distributed systems. Our development aims to produce a system which consists of separate modules that depend on close interaction. These modules are: an immersive VR Environment and a Semantic repository (Ontology) which holds the information about all the interactions that occur in the VE.

This study's perspective is to identify, model and analyse the cockpit elements and procedures and provide a reusable repository in order to capture and distribute the knowledge that is currently secluded in individual developments. On the procedure side, the method includes a basic Human Factors method, namely Hierarchical Task Analysis (HTA), for modelling. It offers a useful tool in a domain that up to the present depends on manual pen-and-paper methods applied by individual experts, which limit the distribution of knowledge between the individuals that apply the studies, since there is no plan for storing and distributing it effectively. This study proposes the introduction of Virtual Reality, automation and Semantics in order to produce a semi-automated, modular environment that will enhance the productivity of individuals across all the mentioned domains during the early phases of development.

Keywords: Semantics, Ontology, Virtual Reality, Product Design, Modular Systems, Human Factors

Table of Contents

Abstract
Table of Contents
Table of Figures
1 Introduction
  1.1 Problem Definition and Motivation
  1.2 Description of the approach
2 Literature Review & State of the Art
  2.1 Virtual Reality and Digital Manufacturing
  2.2 Knowledge Management in Manufacturing
  2.3 Semantic technologies
    Ontologies
    Semantic Engine/SPARQL Server
  2.4 Human Factor Methods
  2.5 Limitations
3 Proposed Method for Semantic Based Virtual Reality
  3.1 System Architecture
  3.2 Virtual Environment
  3.3 Cockpit Ontology
  3.4 Human Factor Methods
    Hierarchical Task Analysis (HTA)
  3.5 Querying
4 Development of Semantic Based Virtual Environment
  4.1 System Architecture and Data Flows
  4.2 Developed Semantic Model
    Procedural Breakdown
    Functional grouping
5 Industrial Test Case
  Hardware setup
  VR Environment and Semantic Modelling
  Auto-Extracted HTA
  Querying the Repository and Providing Feedback to User
6 Conclusions and Future work
References

Table of Figures

Figure 1. Typical Virtual Reality System Running a Simulation
Figure 2. Editing Interface of 3DVIA Virtools
Figure 3. Linked Data and Semantic Web
Figure 4. Protégé platform
Figure 5. Hierarchical Task Analysis for Ordering a Book
Figure 6. VR Module and Semantic Module
Figure 7. Outline of the Contents of the Different Modules
Figure 8. Virtual Cockpit Model
Figure 9. Throttle, flaps and brakes levers
Figure 10. Virtual Hand Model Interacting with Throttle Lever
Figure 11. Partially Expanded Scenegraph from VR Engine
Figure 12. High level queries for the Design Environment
Figure 13. Generation of the Scenegraph Ontology
Figure 14. Architecture of the System
Figure 15. Collision Detection continuously running
Figure 16. Introduction of Verb
Figure 17. Data Added in the Ontology
Figure 18. Verbs as classes of the Ontology
Figure 19. Control types of the Control elements of the Cockpit
Figure 20. Example of HTA ontology structure
Figure 21. Procedure-based Task Classification
Figure 22. Detailed modelling of Task, Action and Plan
Figure 23. Classification of the body parts that interact with the Cockpit Elements
Figure 24. LMS CAVE Configuration
Figure 25. HMD-Magnetic Tracking Setup
Figure 26. Virtual Cockpit Use case
Figure 27. Semantic cockpit ontology classification (Part 1)
Figure 28. Semantic cockpit ontology classification (Part 2)
Figure 29. Initial HTA Array after the execution of a procedure
Figure 30. Transition from the Initial Array to the Final Array
Figure 31. Draft HTA extracted from the VR environment
Figure 32. Forming a Query

1 Introduction

1.1 Problem Definition and Motivation

The development of configurable products has become the current trend towards which traditional manufacturing systems are slowly moving [1]. This new paradigm focuses on the specific needs of customers, enabling the design of different configurations of products by increasing the flexibility of the systems [2]. However, this increases the time in which products and processes are designed, which can cause significant delays in production. Virtual Reality is widely used by industry in order to overcome these delays and offer engineers the capability to improve the product long before actual production. This speeds up the design process of configurable products.

Virtual Reality applications are mainly developed to serve specific purposes. This is the reason that VEs are often disqualified from more global designs [3]. The use of one particular tool or one Virtual Reality Engine means that a user has to be committed to the development software for several years, and, if they decide to change the development platform, then in most cases they have to develop the whole application again. Manufacturers have to be capable of reacting effectively to sudden changes in customer demands as well as of coping with unpredictable events such as failures and disruptions. To achieve effective knowledge exchange and integration in open, reconfigurable environments, an explicit definition of semantics is needed to capture the data and information being processed and communicated.

Also, VR environments are designed by VR experts. These experts do not always have full awareness of the domain that their application serves. On the other hand, the users of the environments do not have to be fully aware of the VR domain in order to use the application [4]. This leads to time-consuming adaptations on both sides until the product reaches its final state, and the cost and time consumption of this process is substantially high.

In order for the VR Environment to be reusable and more widely applicable, a back-end representation of the VE is needed which holds all the information regarding the components of the scenegraph and allows for deployment of applications on different VR Engines.

Moreover, a structured back-end in the form of a Semantic Repository can make the development of a VR application more domain-oriented, allowing the designer to modify the environment without being a VR expert.

1.2 Description of the approach

Independent modules offer a convenient way of modeling processes by representing agents that are distributed across different domains of the system, reducing complexity, increasing flexibility and enhancing fault tolerance. A modular system can be defined as a network of autonomous intelligent entities where each module has individual goals and capabilities as well as individual result-extraction behaviors [5]. Though each module has specific functionality, the modules lack a global goal and overview. This is why the modules need to communicate with each other in order to achieve common objectives which they cannot reach individually. The use of Linked Data and Semantics principles allows for easy communication and flexible back-end implementations that complement an application in order to provide knowledge management operations. In this approach we have two independent but closely cooperating modules: the Semantic Module and the Virtual Reality Module.

The conceptual framework of developing a semantic-based Virtual Environment to support the cockpit designer incorporates an integration of two equally important developments. The first is the development of a representational structure of virtual objects and their relationships to be used in operations in the 3D Virtual Environment, and the second is the development of a formalization of the semantics of interactive elements, including functional, spatial, aesthetic, environmental and contextual design knowledge, to form an initial knowledge base of a flexible Virtual Reality scenegraph.

This approach provides the potential to develop an intelligent and interactive Virtual Environment for cockpit design and evaluation, by providing semantic functionality and knowledge management to the system and offering assistance in designing and testing different virtual models and real-life processes through support for behavior- and semantic-based concepts within Virtual Environments.

The Virtual Environment allows the designer to inspect design ideas and discover new solutions to a design problem, through other design concepts provided by the semantic engine and triggered by the designer's actions.

2 Literature Review & State of the Art

2.1 Virtual Reality and Digital Manufacturing

Digital manufacturing has been considered as a highly promising set of technologies for reducing product development time and cost, as well as for addressing the need for customization, increased product quality, and faster response to the market. As reported in [6], the extensive use of Information Technology (IT) in manufacturing has allowed these technologies to reach the stage of maturity. Their application ranges from simple machining applications to manufacturing planning and control support.

Virtual Reality can be considered the form of Digital Manufacturing that offers the most immersion and the most accurate representation of reality to the user, as it provides a very useful tool for monitoring the state of a system and/or procedure long before it reaches actual production. Virtual Reality environments constitute a very specific form of ICT and include very specialized ways of programming user interactions and object interactions inside virtual space, and often even in actual space (Augmented Reality). These rapidly developing technologies, which are available to the majority of the public mostly for entertainment purposes, are widely used by the research and development community in order to assist science and engineering.

One of the various domains in which Virtual Reality is employed is manufacturing. Using virtual factory applications, companies can design factory layouts and plan manufacturing processes in ways that dramatically improve productivity and decrease time-to-launch. This solution visually integrates an enterprise's complete manufacturing process, from initial product design to final production, which reduces overall production planning time, decreases the resources needed to bring a product to market and dramatically improves production line efficiency.

A typical VR system must satisfy the following conditions:

Visualization
Visualization is the technology of presenting complex data and procedures to the user. Typical visualization setups are Head Mounted Displays (HMDs), Powerwalls or Cave Automatic Virtual Environments (CAVE).

Interaction
Interaction is the concept of being able to manipulate and alter the parameters of the Virtual Environment in a realistic way. The most important principle of VR in terms of human interaction is the collision detection of virtual objects. Collision detection is the ability of the system to detect contact between two objects and react in a user-specified way (a minimal sketch of this idea follows at the end of this subsection). An innovative way of using collision detection inside VEs is to employ ray casting. This concept, in combination with the rapid progress in infrared technologies, allows the development of eye tracking, which is the ability to monitor the gaze of the user.

Navigation
Navigation is the ability of the user to navigate through the immersive environment and experience it changing as the real environment would. To simulate navigation and interaction in low-cost setups, simple computer controllers or low-cost trackers are used. In higher-quality (higher-cost) setups, cutting-edge visualization technologies and tracking methods such as CAVE projections and high-definition optical tracking are available.

Virtual Reality (VR) and Augmented Reality (AR) technologies can potentially improve product and process design and development methods when they are applied to assembly/disassembly (A/D) simulations [7], support for assembly and maintenance, ergonomic studies, virtual prototyping in the context of conceptual design, and product evaluation [8,9]. This potential exists because VR offers the flexibility to perform a number of analyses related to the design of processes by incorporating the anthropometrics of the human operator using different arrangements of control and signaling devices. To conduct these analyses, realistic interaction between the user and the workspace he/she will be working in must be programmed, including collision detection, kinematics of the different devices and the possibility of activating their various functions in relation to other objects in the Virtual Environment [10].
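Since collision detection underpins the interactions described in this section, the following minimal sketch illustrates the principle with a simple axis-aligned bounding-box (AABB) test; the class and object names are illustrative and do not come from any particular VR engine.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box: minimum and maximum corners in world space."""
    min_corner: tuple  # (x, y, z)
    max_corner: tuple  # (x, y, z)

def collides(a: AABB, b: AABB) -> bool:
    """Two boxes collide if their extents overlap on every axis."""
    return all(
        a.min_corner[i] <= b.max_corner[i] and b.min_corner[i] <= a.max_corner[i]
        for i in range(3)
    )

# Illustrative example: a virtual fingertip touching a button geometry.
fingertip = AABB((0.10, 0.20, 0.30), (0.12, 0.22, 0.32))
button = AABB((0.11, 0.21, 0.31), (0.15, 0.25, 0.35))
if collides(fingertip, button):
    print("Collision detected: trigger the button's user-specified reaction")
```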

Moreover, on the hardware side, VR and AR peripherals are becoming more accessible for small and medium companies due to their use in more mainstream applications [11].

The manufacturing of product prototypes is very cost-intensive and generally takes from a couple of weeks up to several weeks. Moreover, subsequent modification of these real prototypes due to concept changes or undesirable developments is also very cost-intensive. For this reason, real prototypes are only used in the later development phases, in which the designed concept becomes more and more reliable. The use of Virtual Reality (VR) in prototype evolution has attracted increasing interest in recent years. With digital manufacturing solutions that include VR, manufacturers in the aerospace and automotive industries can implement virtual systems that allow greater re-use of engineering data, better control of late engineering changes in the design cycle, and more sophisticated simulations [1]. Digital prototypes based on three-dimensional CAD models are visualized on projection walls or CAVE systems to generate a realistic impression of the actual prototype. Such installations may be expensive and have high space requirements; however, they are the best method for visualizing prototypical components of the product that exist only in digital format (e.g. car body, interior cabin), while superimposing this information on existing physical components such as the chassis of the car or the cockpit of an aircraft. New VR user interfaces can facilitate a faster and more efficient planning and design process for new cars [12].

Traditionally, the setup of VR simulations requires the transformation and composition of many types of data, which is still considered a big drawback due to the resulting authoring times [13]. While these data are often pre-existing, they must be transformed, enriched and combined (geometric simplifications, combinations of 2D and 3D) to enable their use in VR/AR applications. In simulations where the goal is the evaluation of ergonomic features of processes or products, the user is immersed in a VE representing the Real Environment (RE) and reproduces a close-to-realistic metaphor of the real task [13]. Virtual Environments (Figure 1) provide powerful interaction and navigation environments within which users and designers interact in either asynchronous or synchronous mode, within centralized or distributed environments, and share their designs, knowledge and experiences.

Navigating and manipulating in 3D requires not only 3D geometrical primitives, but also a set of 3D design tools. The focus of the semantic-based agents in a Virtual Environment is on providing constructive informational and design-shaping feedback [13]. The semantic-based framework proposed in this study is distinguished by delivering intelligent response and feedback to the designer during the initial design phase as well as at the evaluation stages of designing.

Figure 1. Typical Virtual Reality System Running a Simulation

Developing and maintaining graphical applications in tools like 3D modellers, CAD software, or, in our case, interactive VR engines, highlights a concern in data management: how can virtual objects be easily manipulated by the program and the user, while retaining performance? When discussing answers to this question, three points are of particular interest:

- Objects should be kept editable and not just be a fixed collection of drawing primitives
- Interaction logic must be integrated with the graphical representation, allowing for direct manipulation
- The provided toolkit should be extensible and allow applications to implement their own object representations or interaction policies

Moreover, an implementation concern for performance is that interactive objects and rendered objects should not be duplicated in memory, or converted to structures required by the graphics engine on each interaction. In this context, scenegraphs have appeared as an effective solution. They are now very common in VR engines as well as in toolkits for graphical user interfaces or CAD software. Object-oriented languages played an important part in this widespread acceptance, as such languages directly provide an abstraction for object representation and the operations attached to objects.

Scenegraphs also provide a hierarchy on objects, generally implemented as a tree with group nodes and leaf nodes, where group nodes can contain leaf nodes or other group nodes, and the leaf nodes of the tree represent the actual rendered objects. Using group nodes, the properties of all attached children nodes can be edited simultaneously. Groups should thus implement the same data structures as leaf nodes.
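As a minimal sketch of this structure (illustrative only, not tied to any specific engine or to the implementation used later in this study), a scenegraph can be expressed as a tree of group and leaf nodes in which a property edit on a group propagates to all of its children:

```python
class SceneNode:
    """Common data structure shared by group and leaf nodes."""
    def __init__(self, name):
        self.name = name
        self.visible = True

    def set_visible(self, visible):
        self.visible = visible

class LeafNode(SceneNode):
    """A leaf node representing an actual rendered object (e.g. a geometry)."""
    pass

class GroupNode(SceneNode):
    """A group node containing leaf nodes or other group nodes."""
    def __init__(self, name):
        super().__init__(name)
        self.children = []

    def add(self, node):
        self.children.append(node)
        return node

    def set_visible(self, visible):
        # Editing a group property applies to all attached children.
        super().set_visible(visible)
        for child in self.children:
            child.set_visible(visible)

# Illustrative cockpit hierarchy: hiding the pedestal hides both levers.
root = GroupNode("GeometryFrame")
pedestal = root.add(GroupNode("Pedestal"))
pedestal.add(LeafNode("ThrottleLever"))
pedestal.add(LeafNode("FlapsLever"))
pedestal.set_visible(False)
```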

Virtual Reality Engine (Editor)
The typical way to create VR software is by using a Virtual Reality engine (VR Editor/Game Engine). For the purposes of this study we will be referring to this tool as the VR Engine. The VR Engine offers the user the tools to manipulate the virtual world and develop processes and interactions between the different geometries that exist in the VR Environment. 2D and 3D textures are applied to the imported geometries in order to make them more realistic, and materials and physics can also be added for an increased level of realism. The geometrical scenegraph, typically generated from CAD software, can be edited in order to serve the purposes of each specific use.

3DVIA Virtools
Virtools is an authoring application that allows the user to quickly and easily create rich, interactive, 3D content. Industry-standard media such as models, animations, images and sounds are brought to life by Virtools' behavior technologies. Virtools is not a modeling application. However, simple media such as Cameras, Lights, Curves, interface elements, and 3D Frames (called dummies or helpers in most 3D modeling applications) can be created with the click of an icon.

The Render Engine is the part of the Virtools application that draws the image that the user sees on screen. There are two components to the Virtools render engine, either of which can be replaced or customized using the Virtools SDK. Virtools is a behavioral engine in the sense that Virtools processes behaviors of the rendered objects. A behavior is simply a description of how a certain element acts in an environment. Virtools provides an extensive collection of reusable behaviors that allow the user to create almost any type of content through the simple, graphical interface of the Schematic editor (Figure 2). The Virtools Scripting Language (VSL) complements the Virtools Schematic editor by providing script-level access to the Virtools SDK.

Virtools also has a number of managers that help the Behavioral Engine perform its duties. Some of these managers (such as the TimeManager) are an internal part of the Behavioral Engine while others (such as the SoundManager) are external to it. The Hierarchy Manager provides the VR scenegraph, which is the basis for the Semantic Modelling introduced by this study.

Figure 2. Editing Interface of 3DVIA Virtools

VRPN and Peripherals
The Virtual-Reality Peripheral Network (VRPN) is a set of classes within a library, together with a set of servers, designed to implement a network-transparent interface between application programs and the set of physical devices (trackers, etc.) used in a virtual-reality (VR) system. The idea is to have a workstation or other host at each VR station that controls the peripherals (tracker, button device, haptic device, analog inputs, sound, etc.). VRPN provides connections between the application and all of the devices using the appropriate class-of-service for each type of device sharing this link. The application remains unaware of the network topology. It is possible to use VRPN with devices that are directly connected to the machine that the application is running on, either using separate control programs or running everything as a single program. VRPN includes drivers for many devices.

VRPN also provides an abstraction layer that makes all devices of the same base class look the same; for example, all tracking devices look like they are of the type vrpn_Tracker. This merely means that all trackers produce the same types of reports. At the same time, it is possible for an application that requires access to specialized features of a certain tracking device (for example, telling a certain type of tracker how often to generate reports) to derive a class that communicates with this type of tracker. If this specialized class were used with a tracker that did not understand how to set its update rate, the specialized commands would be ignored by that tracker. The current system types are Analog, Button, Dial, ForceDevice, Imager, Sound, Text, and Tracker. Each of these abstracts a set of semantics for a certain type of device. There are one or more servers for each type of device, and a client-side class to read values from the device and control its operation [14].
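The abstraction-layer idea can be sketched as follows. Note that this is an illustrative Python analogue of the pattern, not VRPN's actual API, which is a C++ class library:

```python
class Tracker:
    """Base class: every tracker produces the same type of report."""
    def register_change_handler(self, handler):
        self._handler = handler

    def report(self, sensor, position):
        # Called by the device driver; delivers a uniform report
        # regardless of the underlying hardware.
        self._handler(sensor, position)

class FastTracker(Tracker):
    """Derived class exposing a specialized feature of one device type."""
    def set_update_rate(self, hz):
        # A tracker that does not understand this command would ignore it.
        self.update_rate_hz = hz

def on_tracker_report(sensor, position):
    print(f"sensor {sensor} at {position}")

tracker = FastTracker()
tracker.register_change_handler(on_tracker_report)
tracker.set_update_rate(120)           # specialized feature of this device
tracker.report(0, (0.0, 1.5, 0.3))     # uniform report, same for all trackers
```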

2.2 Knowledge Management in Manufacturing

Manufacturing activities in the era of mass production are characterized by the repetition of the exact same tasks for the design and development of identical parts and products. On the other hand, product variety has increased immensely, occasionally reaching a very wide range. Yet the repetitiveness of similar design and planning tasks still characterizes the majority of the daily processes of contemporary industries. In addition, vast amounts of data are generated on a daily basis which, however, remain underutilized and in most cases completely unexploited. Considering that over 60% of design tasks are common between past and new engineering projects, there is immense potential in exploiting already acquired information and knowledge [15]. Additional facts strengthen this point, since rough estimates indicate that around 20% of a product designer's time is spent on searching for and absorbing information, a figure that gets even higher for technical specialists [16].

Knowledge Management (KM) has been accepted for years as a key issue in manufacturing systems [17]. Moreover, KM is a domain that is comprised of technology, people and processes. Furthermore, the study in [18] defined knowledge management as the art of creating value from an organization's intangible assets. Additionally, Despres and Chauvel [19] defined KM as: "The purpose of knowledge management is to enhance organizational performance by explicitly designing and implementing tools, processes, systems, structures, and cultures to improve the creation, sharing, and use of different types of knowledge that are critical for decision-making." KM is about facilitating an environment where work-critical information can be created, structured, shared, distributed, and used. To be effective, such environments must provide users with relevant knowledge, that is, knowledge that enables users to better perform their tasks, at the right time and in the right form. KM has been a predominant trend in business in recent years [20].

2.3 Semantic technologies

Semantic technologies have been developed to facilitate knowledge sharing and reuse. They are crucial for operations where a combination of outputs from the modules is needed, and for providing a unified description across a range of different types of data.

Figure 3. Linked Data and Semantic Web

Figure 3 outlines the basic technologies that define the Semantic Web. Linked Data are increasingly replacing the available volume of information on the web in favor of machine-usability and unity across platforms. Although the Internet and data sharing are the most straightforward way to employ the Semantic Web and Linked Data concepts, other domains of ICT and engineering are continuously moving in the same direction.

Resource Description Framework and RDF Schema
RDF, developed by the W3C for describing Web resources, allows the specification of the semantics of data based on XML in a standardized, interoperable manner. It also provides mechanisms to explicitly represent services, processes, and business models, while allowing recognition of non-explicit information. The RDF data model is equivalent to the semantic networks formalism (Asuncion). It consists of three object types: resources are described by RDF expressions and are always named by URIs plus optional anchor IDs; properties define specific aspects, characteristics, attributes, or relations used to describe a resource; and statements assign a value for a property in a specific resource (this value might be another RDF statement).
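The resource/property/statement model can be made concrete with a small example. The sketch below uses the rdflib Python library; the cockpit namespace and entity names are hypothetical and chosen only to match the domain of this study:

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical namespace for a cockpit vocabulary (illustrative only)
CKPT = Namespace("http://example.org/cockpit#")

g = Graph()
g.bind("ckpt", CKPT)

# Resource: the throttle lever; properties describe it; each add() is a statement.
g.add((CKPT.ThrottleLever, RDF.type, CKPT.Lever))
g.add((CKPT.ThrottleLever, CKPT.hasState, Literal("IDLE")))

print(g.serialize(format="turtle"))
```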

The RDF data model does not provide mechanisms for defining the relationships between properties (attributes) and resources; this is the role of RDFS. RDFS offers primitives for defining knowledge models that are closer to frame-based approaches. RDF(S) is widely used as a representation format in many tools and projects, such as Amaya, Protégé, Mozilla, SilRI, and so on [21], [22].

Simple semantic formats (RDF triples) and more complex ones (ontologies) are rapidly replacing traditional database systems and offer great flexibility and re-usability to the systems that use them. The most common way to represent semantic data is the use of the RDF format in the saved files. In order for the RDF data to be remotely available [23] and accessible by the rest of the system, there is also the need for a server that handles the triple file (Triple Store/Semantic Server) and a querying language to access this server (SPARQL). The next sections present the main principles of semantics that are used in this study.

Ontologies
An ontology is a formal explicit description of concepts in a domain of discourse (classes, sometimes called concepts), properties of each concept describing various features and attributes of the concept (slots, sometimes called roles or properties), and restrictions on slots (facets, sometimes called role restrictions). An ontology together with a set of individual instances of classes constitutes a knowledge base. In reality, there is a fine line where the ontology ends and the knowledge base begins [24]. An ontology defines a common vocabulary for humans or systems who need to share information in a domain. It includes machine-interpretable definitions of basic concepts in the domain and relations among them. The reasons to develop an ontology may be some of the following:

- To share common understanding of the structure of information among people or software agents
- To enable reuse of domain knowledge
- To make domain assumptions explicit
- To separate domain knowledge from operational knowledge
- To analyze domain knowledge [24]

Ontologies have been developed for a variety of domains such as chemistry, biology, finance and education. The broadest of these are upper-level ontologies that describe common-sense knowledge in a machine-interpretable manner. A chemical-substances ontology describing the chemical elements and their characteristics, used in the context of education and manufacturing, has been developed following the Methontology methodology [25]. Moreover, in the context of financial transactions and securities handling, a financial ontology has been developed in [26], aiming to reduce risks and increase operational efficiency. In recent years, a series of ontologies have been proposed in the manufacturing domain as well, as a solution to the problem of knowledge representation. CYC is a commercial ontology including 200,000 terms [27] that does not directly have a significant impact on the manufacturing domain, in particular the mechanical design domain, since it is a high-level ontology. However, CYC was selected by the National Institute of Standards and Technology for further investigation in the manufacturing domain [28], and the Process Specification Language (PSL) was the output. PSL is a language capable of describing discrete manufacturing and construction process data [29].

Ontologies are also used in artificial intelligence, bookmarking and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework. Ontologies have been created in numerous sectors, such as biology, biomedicine, education, electronic health, engineering design, finance, infrastructure, product and monetary domains. The purpose of ontologies is to allow the accurate description of different domains; they increase the level of knowledge by adding semantics to the data and making data available in a machine-understandable way.

Ontologies are used in our system for multiple different purposes: during the design process in order to interact with the VR user and capture knowledge about the design, during pilot testing for capturing knowledge about the tasks, and internally as a general data representation formalism. There are four major types of information that we can use inside an ontology: Classes, Object properties, Data properties and Individuals.

Classes
OWL classes are interpreted as sets that contain individuals. They are described using formal (mathematical) descriptions that state precisely the requirements for membership of the class. For example, the class Cat would contain all the individuals that are cats in our domain of interest. Classes may be organized into a super-class/subclass hierarchy, which is also known as a taxonomy. Subclasses specialize ("are subsumed by") their super-classes.

Individuals
Individuals represent objects in the domain that we are interested in. An important difference between Protégé's implementation of OWL and OWL as defined in the World Wide Web Consortium (W3C) documentation is that OWL does not use the Unique Name Assumption (UNA).

Properties
Object properties are properties that link two classes or individuals, or a class and an individual. Several choices can be made to modify their settings.

Annotations
Additional information about an entity can be held in the ontology in the form of annotations linked to it.

Our approach uses both high- and low-level description and domain ontologies to allow the designer to express a VR environment in a more domain-oriented way (i.e. using the concepts and the terminology of the domain) and more intuitively (by specifying the design of the VR application at a higher level). The system also uses ontologies as the underlying representation formalism. In its most simple form, we can say that an ontology is an abstraction of a computer-based vocabulary.
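To make the class/property/individual distinction concrete, the following minimal sketch builds a tiny ontology with the owlready2 Python library; the IRI and all entity names are hypothetical illustrations, not the ontology developed in this thesis:

```python
from owlready2 import Thing, ObjectProperty, DataProperty, get_ontology

# Hypothetical ontology IRI, for illustration only
onto = get_ontology("http://example.org/cockpit.owl")

with onto:
    class CockpitElement(Thing):
        pass

    class Button(CockpitElement):      # subclass: every Button is a CockpitElement
        pass

    class FunctionalGroup(Thing):
        pass

    class belongsToGroup(ObjectProperty):   # object property linking individuals
        domain = [CockpitElement]
        range = [FunctionalGroup]

    class hasState(DataProperty):           # data property holding a literal value
        domain = [CockpitElement]
        range = [str]

# Individuals: concrete objects in the domain
autopilot_group = FunctionalGroup("AutopilotGroup")
ap_button = Button("AP1_Button", belongsToGroup=[autopilot_group])
ap_button.hasState = ["OFF"]

onto.save(file="cockpit.owl")
```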

Ontology Inference Layer (OIL)
OIL [30] is an attempt to develop an ontology language for the Web that has well-defined semantics and sophisticated reasoning support for ontology development and use. The language is constructed in a layered way, starting with Core OIL, providing a formal semantics for RDF Schema; Standard OIL, which is equivalent to an expressive description logic with reasoning support; and Instance OIL, which adds the possibility of defining instances.

OWL
One of the significant limitations of RDF Schema was the inability to make equality claims between individuals. Such equality claims are possible in OWL. Besides equality between instances, OWL Lite also introduces constructions to state equality between classes and between properties. Although such equalities could already be expressed in an indirect way in RDF(S), this can be done directly in OWL. Just as important as making positive claims about equality or subsumption relationships is stating negative information about inequalities. A significant limitation of RDF(S) is the inability to state such inequalities. Since OWL does not make the unique name assumption, two instances ei and ej are not automatically regarded as different. Another significant limitation of RDF(S) is the inability to state whether a property is optional or required (in other words, should it have at least one value or not), and whether it is single- or multi-valued (in other words, is it allowed to have more than one value or not). Technically, these restrictions constitute 0/1-cardinality constraints on the property.

Ontology Editor
Ontology editors are applications designed to assist in the creation or manipulation of ontologies. They must support the development of ontology hierarchies with a top level either entirely or partly versioned, and attached lower-level ontologies. The libraries of the ontology hierarchy may be formulated in different XML-based specification languages, e.g. OWL variations and RDF, and linked into ontologies from either local or external libraries. Terminology definitions created by deliberations of terminology experts may span several human languages and form the basis for building ontology libraries collaboratively. The following paragraphs describe the selected ontology editor.

Protégé

Protégé is a free, open-source platform that provides a suite of tools to construct domain models and knowledge-based applications with ontologies. It is produced by Stanford University. At its core, Protégé implements a rich set of knowledge-modelling structures and actions that support the creation, visualization, and manipulation of ontologies in various representation formats. Protégé can be customized to provide domain-friendly support for creating knowledge models and entering data. Further, Protégé can be extended by way of a plug-in architecture and a Java-based Application Programming Interface (API) for building knowledge-based tools and applications.

The Protégé platform supports two main ways of modelling ontologies: the Protégé-Frames editor, which enables users to build and populate ontologies that are frame-based, in accordance with the Open Knowledge Base Connectivity protocol (OKBC), and the Protégé-OWL editor, which enables users to build ontologies for the Semantic Web, in particular in the W3C's Web Ontology Language (OWL). Protégé 4.1 provides OWL and RDF support; OWL and RDF(S) files are accessible via the Protégé-OWL API. It also provides reasoning capabilities and RDF query support through SPARQL.

Protégé uses projects to store information relating to any interface customizations or editor options selected. Source files (with the extensions .owl, .rdfs, or .rdf) are used to define ontology classes, individuals and properties. There may be several source files, depending on how the ontology has been defined. If it is modular and has been created properly, Protégé-OWL will find and load all of the appropriate source files. Protégé supports all three OWL sublanguages: OWL Lite, OWL DL and OWL Full. Collaborative Protégé and WebProtégé are designed explicitly to support distributed ontology editing and support different user roles (Figure 4).

Figure 4. Protégé platform

The strength of Protégé is that it supports tool builders, knowledge engineers and domain specialists at the same time. This is the main difference from existing tools, which are typically targeted at the knowledge engineer and lack flexibility for meta-modelling. This latter feature makes it easier to adapt Protégé to new requirements and/or changes in the model structure. When starting out on an ontology project, the first and reasonable reaction is to find a suitable ontology software editor [31]. These tools can help acquire, organize, and visualize the domain knowledge before and during the building of a formal ontology.

Semantic Engine/SPARQL Server
A SPARQL server can run as an operating system service, as a Java web application (WAR file), or as a standalone server. It provides security and sometimes a user interface for server monitoring and administration. It provides the SPARQL protocols for query and update, as well as the SPARQL Graph Store protocol. Jena Fuseki, for example, supports all of the above and is tightly integrated with TDB to provide a robust, transactional, persistent storage layer, and incorporates Jena text query and Jena spatial query. It can be used to provide the protocol engine for other RDF query and storage systems [32].
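As a brief illustration of how a client could query such a server, the sketch below assumes a local Fuseki instance serving a hypothetical dataset named cockpit on the default port, and uses the SPARQLWrapper Python library; the vocabulary is illustrative:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint: Fuseki serves dataset "cockpit" at /cockpit/query
sparql = SPARQLWrapper("http://localhost:3030/cockpit/query")
sparql.setReturnFormat(JSON)

# Example query in the spirit of this study: list all cockpit buttons
sparql.setQuery("""
    PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX ckpt: <http://example.org/cockpit#>
    SELECT ?button WHERE { ?button rdf:type ckpt:Button . }
""")

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["button"]["value"])
```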

2.4 Human Factor Methods

Virtual reality systems and applications have reached an adequate level of maturity, which increases the probability of such technologies being adopted in everyday industrial practice [33], [34]. However, despite progress in several technological areas, such as improved real-time computer graphics, improved display systems, multi-modal interfaces and others, VR systems have an important limitation in the field of human factors. Human-centered design of virtual and augmented interfaces still seems to lag behind, which restricts the potential use of such technologies by discouraging the users or making them frustrated. VR systems operate within the limits of Virtual Environments and therefore share common guidelines for human factors design [1], [35]. There are several factors that contribute to the efficiency of human performance in augmented reality environments, and the most important are the following:

Visualization of virtual content
The ability to perform visual tasks (i.e. perceive and discriminate colors, judge virtual distance, recognize the size of virtual objects, discriminate virtual objects). This can be achieved through a realistic sensation of visualization, shading and general illumination, in addition to the user's ability to discriminate virtual objects and their dimensions.

Manipulation of virtual objects
The quality of interaction with the environment and the manipulation of virtual objects. The feeling that the user is in control of the virtual object is very important, while natural interaction is the key to reducing possible difficulty for the user [36].

Complexity or usability of the environment
The feeling of difficulty in interacting with virtual objects. Easy-to-use menus, buttons and levers in the environment help make it user-friendly.

Task accuracy

The accuracy of the user's movements for interaction or navigation dictates whether a task can be performed successfully.

User's experience in VR
The flavor of the experience that the user has while using the system.

Hierarchical Task Analysis (HTA)
For the modelling and breakdown of the procedures of the user, we use the Hierarchical Task Analysis method from Human Factors in order to supplement the geometrical repository with a task repository and, therefore, be able to utilize the capabilities of the Semantic Module towards training and task optimization.

A structured, objective approach to describing users' performance of tasks, hierarchical task analysis originated in human factors [37]. In its most basic form, a hierarchical task analysis provides an understanding of the tasks users need to perform to achieve certain goals. You can break down these tasks into multiple levels of subtasks. In user experience, you can use hierarchical task analysis to describe the interactions between a user and a software system. When designing a new system, hierarchical task analysis lets you explore various possible approaches to completing the same task. When analyzing an existing system, it can help you optimize particular interactions.

Once a hierarchical task analysis has been created, it can serve as an effective form of system documentation, enabling developers to rapidly understand how users interact with a system. As software engineers are all too aware, the intimate familiarity you may have gained with why users do something in a certain way can quickly fade in just a few days or weeks. A hierarchical task analysis is an effective means of capturing this information. Hierarchical task analysis requires a detailed understanding of users' tasks. You can achieve this understanding by identifying users' primary goals, detailing the steps users must perform to accomplish their goals, and optimizing these procedures.

Below follows an example of a hierarchical task analysis. Our example considers a common task: ordering a book. Figure 5 shows a high-level hierarchical task analysis for this task.

Figure 5. Hierarchical Task Analysis for Ordering a Book

This hierarchical task analysis shows the breakdown of the task into subtasks, expressing the relationships between the parent task and its subtasks through a numbering scheme. This hierarchical task analysis is very coarse from a user-experience standpoint. It does not communicate anything about what is happening at the level of a user's interaction with the system. However, it does give a clear understanding of the task's high-level steps. A more complete task analysis would ultimately get down to the level of user interactions [38].
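The parent/subtask numbering scheme can also be represented programmatically. The sketch below is a generic illustration; the subtask labels are placeholders rather than the actual contents of Figure 5:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One node of an HTA tree; subtasks are numbered relative to the parent."""
    name: str
    subtasks: list = field(default_factory=list)

def print_hta(task, prefix="0"):
    """Print each task with its hierarchical number (0, 1, 1.1, 1.2, ...)."""
    print(f"{prefix}. {task.name}")
    for i, sub in enumerate(task.subtasks, start=1):
        child_prefix = f"{prefix}.{i}" if prefix != "0" else str(i)
        print_hta(sub, child_prefix)

# Placeholder breakdown of the "order a book" goal, for illustration only
hta = Task("Order a book", [
    Task("Find the book", [Task("Search by title"), Task("Browse results")]),
    Task("Place the order"),
])
print_hta(hta)
```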

2.5 Limitations

This section describes the gaps and limitations in the State of the Art. When referring to Virtual Reality Simulations, we mean static, case-specific applications that are developed for a certain purpose (to immerse the user as much as possible in a real-like version of the virtual product or process) and cannot be reused in other designs. Some modules or scripts can be utilized, but the knowledge remains enclosed with the stakeholder that develops the application each time. The large availability of VR development tools (VR Engines) and the rapid evolution of computer graphics have made it mandatory to invest in more modular and re-usable solutions. A VR application should capture and store knowledge in order to add more value to its existence and save time and cost for future end-users. This creates the need for external modelling outside of the VR Engine to complement and dictate the scenegraph's hierarchy. This way, the VR Engine will serve more as a replaceable renderer and less as a processing unit.

Human Factors analysis is a scientific process that aims at understanding and modelling human behavior during interactions with the system. The HTA of a process is completed through an iterative cycle of reviewing the session of the simulated procedure and the associated checklist document and defining the task steps. This is done manually, by using pen-and-paper methods or simple text-editing tools. The process includes at least two stakeholders: the user of the system, who is assumed to have a good understanding of the equipment and process, and the HTA expert, who has good knowledge of how to extract the HTA but not necessarily domain-specific knowledge about the use case. This makes it necessary to have both stakeholders physically present in the same place for the whole duration of the analysis. It also makes it necessary for the HTA expert to have the ability to pause the simulation, ask questions, make the appropriate entries in the HTA tree and then ask the user to resume the simulation.

When applying Human Factors analyses to Virtual Reality Simulations, the capability to pause the system at any time makes VR an ideal testbed for Human Factors methods from the HF scientists' point of view. On the other hand, one of the main concepts of Virtual Reality is the immersion of the user in the Virtual Environment; the stop-and-go process therefore interferes with the user's immersion. Furthermore, when VR simulates cockpit procedures, it has been shown that the stress and responsibility of the pilot during a real-life flight are very hard conditions to simulate. The real stress of the pilot (which would affect the carrying-out of a procedure and therefore the HTA) cannot be reproduced in a safe environment like VR.

Semantics are a rapidly developing technology which finds application mainly on the web. The concept of Semantics is to add a certain level of intelligence to previously static repositories. Ontologies are used to model structured data in a more machine-oriented way. One type of structured data that can be modelled by ontologies is Virtual Reality scenegraphs. This way, an implementation can be supported by an external scenegraph that is not attached to it, and the scenegraph can even be generic in order to allow for application to different types of geometries. It is understandable that when an external structure is applied to a Virtual Reality Simulation, the implementation needs to be fast, and any changes on each end must reach the other before the process creates obstructions to immersion. So one other limitation that needs to be overcome is the need for a fast querying process between the modules.

3 Proposed Method for Semantic Based Virtual Reality

To overcome the difficulties addressed in the previous section, we develop a common model of the data needed for the Virtual Environment to function while depending on an external Semantic Repository, together with the data that are required for the automation of the HTA process. With this method, we aim at a re-usable scenegraph for different versions of geometries and even different VR Engines. Furthermore, we obtain a decentralized, human-centered but machine-automated approach to extracting the HTA tree. This reduces the need for obstructions to the immersed user and works towards a more realistic VR simulation with the least possible interference with the user. It also gives a Human Factors expert the ability to observe more than one auto-generated HTA at once, and to intervene in the simulation or the analysis whenever this is considered necessary.

In this chapter, the modeling and methodology of the application developed in this work are described. First, an overview of the system and the description of the Virtual Environment are presented. Then follows the method for modeling the cockpit procedure, based on an algorithm that extracts verbs and 3D entities from the user's actions in Virtual Reality. Lastly, the development of the Semantic Repository and the querying functionality are presented.

3.1 System Architecture

The system consists of two independent and closely interconnected modules. One is the immersive Virtual Environment, which offers an advanced way for the user to interact with the Virtual Environment. The other is the Semantic Module, which holds the semantic representation of the Virtual Environment in an external online or local repository (Figure 6).

Figure 6. VR Module and Semantic Module

Each module of the system serves a specific individual purpose. Virtual Reality is the tool that allows users to interact with the system in an immersive manner, through accurate visualization, navigation and interaction with the desired objects, and through the participation of the users in the Virtual Environment as an aligned part of it. The Semantic Module is developed as an external service, closely interconnected with the VR Module, serving the purpose of a flexible data repository, in the sense that it can be accessed both from inside the system and from the rest of the network, and is available both for interaction with the Virtual Environment and for direct interaction with the users. When a VR environment is loaded in the VR Engine, the corresponding ontology is loaded on the Semantic Server. The Semantic Server is the way other agents access and manipulate the system's ontology.

Figure 7 shows the different components of each module. Going from right to left, the first column is the pure Virtual Environment data. On the second level (developed in the VR engine as well) is the first level of modelling of the Virtual World. On the third level, loaded on a Semantic Repository, are the semantic data, which are manipulated by the VR Environment through SPARQL querying.

Figure 7. Outline of the Contents of the Different Modules
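As a hedged sketch of the data flow just described, the VR module could record a state change of a cockpit element in the repository through a SPARQL update; the endpoint follows the Fuseki convention used earlier and the vocabulary is illustrative, not the thesis' actual ontology IRIs:

```python
from SPARQLWrapper import SPARQLWrapper, POST

# Hypothetical update endpoint of the Semantic Server
update = SPARQLWrapper("http://localhost:3030/cockpit/update")
update.setMethod(POST)

# Record that the throttle lever's state changed after a user interaction.
update.setQuery("""
    PREFIX ckpt: <http://example.org/cockpit#>
    DELETE { ckpt:ThrottleLever ckpt:hasState ?old }
    INSERT { ckpt:ThrottleLever ckpt:hasState "FULL" }
    WHERE  { OPTIONAL { ckpt:ThrottleLever ckpt:hasState ?old } }
""")
update.query()
```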

3.2 Virtual Environment

The Virtual Environment is designed and developed for immersive hardware setups, where users place themselves in a realistic environment in order to simulate both their own behavior and that of the interactive elements of the environment in real-life situations, aiming to assess and improve it. The gaps identified during the VR session are crucial for the evolution and improvement of the actual real-life product. This can serve both before the actual realization of the product and after production, for the generation of improved versions.

Cockpit Geometry
The cockpit geometry used in this study is a realistically modelled geometry of the cockpit of a commercial aircraft, which is imported into the VR Engine as a CAD model integrated with high-definition textures. While the geometry is formed outside the VR Engine using CAD software, several modifications are applied to it when it is imported into the Virtual Environment (Figure 8).

Figure 8. Virtual Cockpit Model

Every time human intervention is required for the computer to complete a task or function, elements of the User Interface (UI) are involved. The UI includes the hardware devices that comprise a workstation (monitor(s), keyboards, printers, etc.) as well as more advanced Virtual Reality equipment (stereoscopic displays, HMDs, trackers, etc.) where human input takes place; the software that translates user actions and information into computer processing data; and the documentation, training and user aids (help panels, tutorials, keyboard templates, etc.) designed to support or assist the user in performing tasks on the system.

The area of advanced visualization within user interfaces is nowadays evolving rapidly. User interfaces are getting more and more intuitive and natural. Visualization of graphics in user interfaces provides a way of transmitting information.

A good visualization of data in the given field of view should be clear, informative and easy to interact with, while not obstructing the user from the actual task. As mentioned earlier, our system is a Virtual Environment oriented towards use in immersive setups, with advanced visualization systems, so that the user has a strong sense of presence inside the Virtual Environment and is able to interact with the virtual geometries in a realistic manner. Virtual body parts represent the user's tracked body parts and affect the parameters of the environment. The system is flexible in terms of the hardware to be used, provided that there are adequate markers tracked on the user's body and realistic visualization (CAVE, Powerwall, HMD, etc.).

Interaction
The interactive elements of the virtual cockpit are the geometries of the Virtual Environment that can respond to the user's actions. Some examples are buttons, levers (Figure 9), screens (Figure 10) and knobs.

Figure 9. Throttle, flaps and brakes levers

Figure 10. LCD Displays of the Virtual Cockpit

Collision detection is used for most of the interactions of the human user with the Virtual Environment, along with a draft implementation of eye tracking. The main participants in the detection of collisions are the hands/fingers of the pilot. A virtual hand model (Figure 10) is used to replicate the real hand inside the VR environment. A specific algorithm has been developed for the accurate estimation of collisions with the geometries of the imported cockpit.

Figure 10. Virtual Hand Model Interacting with Throttle Lever

Scenegraph
The geometry is defined by a scenegraph tree that has been processed so as to follow a hierarchical format of the geometry list, down to a certain low level that serves to model the lowest level of tasks. The top level of this scenegraph is the GeometryFrame virtual frame, which defines and dictates the position, orientation and scaling of the model inside the Virtual Environment (Figure 11). The values of the variables of this frame are inherited by all the other geometries that make up the Virtual Product. The Virtual Prototype is presented to the user at real scale. The rest of the geometries follow a hierarchical tree, down to the lowest-level parts of the cockpit, which are the geometries that the pilot physically interacts with while following a flight procedure.

Figure 11. Partially Expanded Scenegraph from VR Engine

3.3 Cockpit Ontology

The Cockpit Ontology is developed to serve as the semantic representation of the VR scenegraph. The initial classification of the ontology has been created in accordance with the VR scenegraph hierarchy.

However, in order to serve the purposes of the HTA, another type of classification needs to be applied across the ontology. The extraction of the HTA for a specific task is strongly dependent on the verbs that are included in the HTA sentences. These verbs are chosen to be used in the ontology as additional classes alongside the scenegraph elements. This way we can define rules between the Task classes and create semantic bonds (triples) that serve as guidelines for the population of the semantic repository with HTA data, and for retrieving and reusing HTA (and other HF methods) patterns. In the current prototype implementation, the semantic modelling of the cockpit targets the low-level HTA. As the tool is being developed, along with the implementation of the HTA application for the extraction of higher-level HTA entities, the ontology is planned to be extended to support the high-level task/verb semantic description.

3.4 Human Factor Methods

Hierarchical Task Analysis (HTA)
In this work, an HTA has been modelled in order to function as an integrated part of the Virtual Environment. The VR Engine was used in order to program the HTA. The extraction and storage of the HTA is accomplished by using and manipulating arrays inside the VR Engine. The main Virtual Environment interaction principle used in this development is collision detection, which identifies whether or not two or more virtual elements are colliding with each other.

Our main goal was to automatically generate the HTA verbs in the Virtual Environment, in order to have a tool that would provide us with a valid HTA. During our development and testing, only physical (and primitive visual) interaction monitoring was used (complete visual and cognitive monitoring will be included later with the integration of relevant technologies such as eye and brain tracking devices, microphones, etc.); therefore, the actions of the monitored user that depended on the principles of those technologies were generated automatically by virtual events which were triggered by the physical interaction of the user, or by other parameters of the virtual world such as navigation in the virtual workspace. For example, when a user presses a button, it is assumed that the object has previously been identified among the other buttons.

Although this might alter the original, traditional character of the HTA, this method is a first approach to an automated HTA conducted in a Virtual Environment. Moreover, besides simple verb extraction from physical collisions in the Virtual Environment, many verbs were recognized by identifying whether two or more extracted verbs were performed simultaneously in the Virtual Environment. For example, when Touch, Grasp and Move are performed simultaneously, it is assumed that the user is performing the verb Carry.

The algorithm begins by monitoring a cockpit process. Every time the user's hand (which is essentially an object) collides with a virtual object, and depending on the programmed conditions that have been set, a verb-task is generated which corresponds to the user's real action. This way a human-readable sentence with correct grammar is created for each interaction of the user, corresponding to the sentence that would be written down manually by the HTA expert if he were monitoring the task. By utilizing all of the above information in combination with the Hierarchy Manager (Scenegraph) of the VR platform, we are able to clarify levels of abstraction for each task. This way a complete HTA tree is extracted automatically without any assistance from the human expert, who of course can monitor the process and correct possible faults that the machine cannot detect.
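Indicatively, this verb-inference step can be sketched as follows; the Carry rule follows the example above and the Push rule anticipates the one described in the Industrial Test Case, while the function itself and its structure are illustrative assumptions rather than the actual Virtools scripts.

    # Illustrative verb inference from primitive interaction events.
    # Rule structure and function names are assumptions for demonstration.

    def infer_verb(active_primitives, press_duration_ms=0):
        """active_primitives: set of primitive verbs detected at the same
        moment, e.g. {"Touch", "Grasp", "Move"}."""
        if {"Touch", "Grasp", "Move"} <= active_primitives:
            return "Carry"          # simultaneous Touch + Grasp + Move
        if "Touch" in active_primitives and press_duration_ms > 1000:
            return "Push"           # sustained contact, released afterwards
        if "Touch" in active_primitives:
            return "Touch"
        return None

    # Example: a pilot keeping a finger on a button for 1.2 s
    print(infer_verb({"Touch"}, press_duration_ms=1200))  # -> "Push"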

3.5 Querying

The actions of the pilot and/or designer in the Virtual Environment are analyzed by monitoring their interactions in the form of VR events that are handled by the VR engine. In the current implementation, these VR events are mainly based on collision detection and on ray intersection with the gaze point of the user, provided by eye-tracking technologies. The VR events can be considered as high-level user queries for the system, which correspond to the user's or designer's actions inside the Virtual Cockpit during the simulation of the approach and landing procedure.

The VR environment allows a designer to monitor the user sessions and draw conclusions about the workflows, the cockpit models and the users. The engineer has the ability to make small changes in the variables of the session in order to monitor how these affect the functionality and ergonomic validity of the design for the pilot, and to compare different patterns.

In Figure 12, an example is presented where an engineer requests from the system, through an interactive GUI, to highlight a specific group of buttons in order to make changes to them. More specifically, the designer chooses to highlight all the buttons that are placed on the left side of the pilot flying. In the next chapter it is indicated how these choices form a SPARQL query that gives the user the expected results. Figure 12 also indicates how the structure of the semantic relations (ontology) should be formed in order to classify the elements precisely and allow for flexibility in the designer's choices. For example, the class Button includes all the buttons that exist in the virtual cockpit. Those buttons can be sorted based on Position, Function (Functional group) and State. State and Position can also be added as attributes to all the interactive elements of the cockpit, and Functional Group can be used as a containing superclass.

Figure 12: High level queries for the Design Environment
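As a rough illustration of how such a designer choice could translate into SPARQL, the following sketch selects buttons by a position attribute; the namespace, the property hasPositionX and the sign convention are assumptions for demonstration, not the thesis' actual vocabulary.

    # Hypothetical query: all buttons located on the left side of the
    # Pilot Flying. Property names and namespace are illustrative.
    HIGHLIGHT_QUERY = """
    PREFIX ck: <http://example.org/cockpit#>
    SELECT ?button WHERE {
        ?button a ck:Button ;
                ck:hasPositionX ?x .
        FILTER (?x < 0)            # negative x assumed to be left of the PF
    }
    """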

Triple Store/SPARQL Server

The SPARQL Server is used to persist the Semantic Repository and keep it available to be queried by the VR component at all times. This means that, during a data flow of the system, the VR module can access the triple store and store/retrieve the HTA (and other Human Factor Method) workflows. The VR engine can also address SPARQL queries directly to the Triple Store. The content of the Triple Store can be considered as three main parts:

- The Semantic Cockpit Model: a semantic version of the Cockpit Scenegraph, following the hierarchy of the VR Scenegraph.
- The Semantic HTA entities: Verbs, Interactive elements, Users (pilots).
- The mappings between the Cockpit Models of the different modules (VR, HTA, Simulation).

Apart from offering SPARQL connectivity to the VR module and any other endpoint that may need to access it, the Triple Store also makes it possible for the stored (or created) HTAs to be fully edited by any VR client that connects to the server. The modularity of the system makes it necessary for the VR module to be able to function in an offline mode as well. For this reason, it can sustain a local RDF repository of HTAs in a local SPARQL server for as long as the system is not connected to the server's network.
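Indicatively, the following minimal sketch shows the retrieval side of this connectivity, assuming a Jena Fuseki dataset named cockpit on localhost; the endpoint path follows Fuseki's standard conventions, while the dataset name and the example namespace are assumptions of this sketch.

    # Query the persisted repository over HTTP (SPARQL protocol).
    # Assumes a Fuseki dataset "cockpit" served at localhost:3030.
    import requests

    QUERY_ENDPOINT = "http://localhost:3030/cockpit/query"

    def select(query):
        resp = requests.post(
            QUERY_ENDPOINT,
            data={"query": query},
            headers={"Accept": "application/sparql-results+json"},
        )
        resp.raise_for_status()
        return resp.json()["results"]["bindings"]

    # e.g. retrieve every stored HTA task
    tasks = select("PREFIX ck: <http://example.org/cockpit#> "
                   "SELECT ?task WHERE { ?task a ck:Task }")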

4 Development of Semantic Based Virtual Environment

4.1 System Architecture and Data Flows

Semantic Scenegraph generation

The concept behind the Semantic Virtual Environment is that it is built to be flexible in terms of the iteration of different geometries. For that purpose, an algorithm was developed to generate the semantic scenegraph, given a VR geometry hierarchy. The algorithm flowchart is displayed in Figure 13.

Figure 13. Generation of the Scenegraph Ontology

When the user activates the script, the system goes through the geometry hierarchy and saves to a Scenegraph array all the data needed for the creation of the Scenegraph Ontology: the name of each object, the object's ID in the geometric hierarchy, and the object's parent, in order to place it in the correct hierarchical position. Then the Ontology Generation script takes over and converts the static data from the VR Engine's array into semantic RDF data. These data are then uploaded to the Semantic Server and used as the new Scenegraph Ontology, which makes the repository completely compatible with the newly imported cockpit geometry.
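The conversion step can be sketched with the rdflib library as follows, assuming the Scenegraph array rows hold (name, ID, parent) as described above; the example namespace and the choice to model the nodes as OWL classes mirror, but do not reproduce, the actual Ontology Generation script.

    # Convert (name, id, parent) rows from the VR engine's array into an
    # RDF class hierarchy mirroring the scenegraph. Illustrative sketch.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL

    CK = Namespace("http://example.org/cockpit#")

    scenegraph_rows = [                # as exported by the VR script
        ("GeometryFrame", 0, None),
        ("Pedestal", 1, "GeometryFrame"),
        ("ThrottleLever", 2, "Pedestal"),
    ]

    g = Graph()
    g.bind("ck", CK)
    for name, _id, parent in scenegraph_rows:
        node = CK[name]
        g.add((node, RDF.type, OWL.Class))
        if parent is not None:
            g.add((node, RDFS.subClassOf, CK[parent]))

    g.serialize(destination="scenegraph.ttl", format="turtle")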

HTA Extraction Procedure

Figure 14 shows the complete implementation of the discussed method, involving the VR Engine (Virtools) and the Semantic Server (Jena Fuseki).

Figure 14. Architecture of the System

A Collision Detection function runs continuously, waiting for two objects to collide in the VR environment. When two objects collide with each other, the Collision Detection parameter becomes true and the system determines that some user interaction has occurred (Figure 15).

Figure 15. Collision Detection continuously running

The Verb-selection script then retrieves the names of the two objects that collided, along with the position of the collision and the timestamp of the event. Depending on the objects that collided and the metadata of the collision (time period), the system can determine which action is being performed in the Virtual Environment and add the corresponding verb to the collision data (Figure 16).

Figure 16. Introduction of Verb

These data are enhanced with additional text that forms a SPARQL query, which is sent to the Semantic Engine through SOH (SPARQL over HTTP) in order to create a new Task entry in the Ontology. On the Ontology side, the data contained in the query are resolved into Instances and Data Properties and constitute a new Task with all the properties linked to it.
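A hedged sketch of such a task-creating update follows; the Object Properties are the ones introduced in Section 4.2 (hasVerb, hasUser, has3DObject), while the namespace, the timing properties and the Fuseki dataset name are assumptions of the sketch.

    # Form and post a SPARQL update that records one detected interaction
    # as a new Task instance. Namespace and dataset name are illustrative.
    import requests

    UPDATE_ENDPOINT = "http://localhost:3030/cockpit/update"

    def record_task(task_id, user, verb, obj, start_time, duration):
        update = f"""
        PREFIX ck: <http://example.org/cockpit#>
        INSERT DATA {{
            ck:Task_{task_id} a ck:Task ;
                ck:hasUser ck:{user} ;
                ck:hasVerb ck:{verb} ;
                ck:has3DObject ck:{obj} ;
                ck:hasStartTime {start_time} ;
                ck:hasDuration {duration} .
        }}"""
        requests.post(UPDATE_ENDPOINT, data={"update": update}).raise_for_status()

    record_task(42, "PF", "Move", "ThrottleLever", 131.2, 1.8)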

The new Task instance corresponds to the action of the pilot as it is modelled in the HTA (Figure 17).

Figure 17. Data Added in the Ontology

The query results are then compiled by the Semantic Engine and feedback is directed from the server side back to the VR Engine, where it is resolved and saved in a temporary array (Figure 14). At the end of the workflow, the Virtual Environment is modified, if needed, according to the data received in the response of the semantic server, and feedback is given to the user in the form of visual effects or 3D text displays on the cockpit.

To better demonstrate the structure of the repository and how the HTA data are modelled inside the Ontology, the next section presents the Cockpit Ontology in detail.

4.2 Developed Semantic Model

The semantic model is formed following the same hierarchy of classes as the one described for the scenegraph. The detail level is kept down to the level of the scenegraph geometries. This way a direct mapping is achieved between the scenegraph hierarchy and the semantic class hierarchy. Therefore, SPARQL query forming can rely on the scenegraph hierarchy in order to communicate with the SPARQL Server (Triple Store).

Tasks

The possible HTA tasks are modelled under the superclass Task, beneath which a hierarchy of classes is added, from generic (higher levels of the tree) to specific (lower levels of the tree), describing in depth the HTA actions that can be derived from the users.

Verbs

The HTA verbs are implemented in the ontology tree as classes under the superclass Verb (Figure 18). A different class is created for each verb, in order to distinguish the different interactions that each type of interactive element supports. The subtask entities are linked with the Users, Verbs and 3D Objects by using the Object Properties hasVerb, hasUser and has3DObject, in order to specify the HTA relations (Figure 20).

Figure 18: Verbs as classes of the Ontology

Interactive Element Classes

The main types of Cockpit Elements that we use at this stage are: Knob, Button, Lever, Switch and Display. The first four rely on collision detection (Figure 19), while Displays are elements that can be monitored by the pilots (users). The first version of the Cockpit Ontology includes classification based on the five interactive elements described above. All the cockpit elements are added in an Ontology tree that begins from these basic categories (Figure 19).
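Indicatively, a few of these classes can be declared with rdflib as follows; the namespace is again an assumption of the sketch, while the class names are taken from the text.

    # Declare the Verb and interactive-element class hierarchies.
    # Namespace is illustrative; class names follow the text above.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL

    CK = Namespace("http://example.org/cockpit#")
    g = Graph()
    g.bind("ck", CK)

    for verb in ("Touch", "Grasp", "Move", "Press", "Monitor"):
        g.add((CK[verb], RDF.type, OWL.Class))
        g.add((CK[verb], RDFS.subClassOf, CK.Verb))

    for element in ("Knob", "Button", "Lever", "Switch", "Display"):
        g.add((CK[element], RDF.type, OWL.Class))
        g.add((CK[element], RDFS.subClassOf, CK.CockpitElement))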

It is assumed that every other cockpit used inside the developed system has to follow the same principles and scenegraph structures. The interactive (and non-interactive) elements of each cockpit are semantically stored as individuals that follow the same rules applied to the general ontology.

Figure 19: Control types of the Control elements of the Cockpit

User/Pilot Classes

The previous section described the interactive elements of the cockpit. However, in order to create semantic relations (links), some additional classes which refer to the users/pilots still need to be implemented in the ontology. These classes are used as the range of the Object Property hasUser (Figure 20). The use of Object Properties is described in the next section.

Figure 20: Example of HTA ontology structure

Object Properties

In order to create the rules that produce the links between the Task and the elements that hold the additional information about it, Object Properties have to be employed. Object Properties are conceptual properties of a class in an Ontology. The verbs described above as classes, for example, are linked to the Task with the Object Property hasVerb. Also, as shown in Figure 20, the other two classes (User, CockpitElements) are linked with the Task by using the Object Properties hasUser and has3DObject, in order to specify the HTA relations. This implementation ensures a working version of the system that supports the basic functionality of the HTA; it therefore focuses on the development of the Task class and its properties regarding the Users, Verbs and 3DObjects.

Restrictions

Using the Tasks as classes of the Cockpit Ontology offers the possibility to define attributes such as the number of body parts or users that have to interact with an element (another class) in order to fulfil the requirement of the property. This feature makes the current implementation versatile, as any relation needed by the HTA module can be defined.

Data Properties

For the implementation of the developed method, Data Properties are the means of linking the classes and individual instances of the ontologies to their literal values. They also offer the possibility to create data restrictions for the superclasses so that, by employing ontology inheritance, these restrictions are detected and applied to the member individuals. For example, a Flap Lever can have a small number of discrete settings (not analogue). This can be set as a Data Property of the FlapLever class, and when an individual is created, it will detect and adopt this property. For each particular cockpit that is used there are different individuals, linked with different Data Properties, but they always follow the rules set on the respective classes of which they are members. For example, to define the exact position of the altitude knob and its state, we need an individual member of the class and the Data Properties of position (vector) and state (percentage or orientation angle).
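The following minimal sketch declares these properties and one example individual; the datatypes and the instance naming are assumptions of the sketch, while the property names follow the text.

    # Declare the object/data properties and one example individual.
    # Datatypes and instance names are illustrative assumptions.
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, RDFS, OWL, XSD

    CK = Namespace("http://example.org/cockpit#")
    g = Graph()
    g.bind("ck", CK)

    for prop in ("hasVerb", "hasUser", "has3DObject"):
        g.add((CK[prop], RDF.type, OWL.ObjectProperty))
        g.add((CK[prop], RDFS.domain, CK.Task))

    g.add((CK.hasState, RDF.type, OWL.DatatypeProperty))

    # An altitude-knob individual with its position and state literals.
    knob = CK.AltitudeKnob_1
    g.add((knob, RDF.type, CK.Knob))
    g.add((knob, CK.hasState, Literal(35.0, datatype=XSD.decimal)))
    g.add((knob, CK.hasPosition, Literal("0.42 0.88 1.10", datatype=XSD.string)))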

4.3 Procedural Breakdown

The Procedure Ontology dictates the behaviour of the repository in order to support the automatic HTA extraction during the execution of the semi-automated work analysis. The Human Factors Expert can, through the VR Engine, manually create new task entries based on the observation of the pilot, while the lowest-level tasks are auto-generated by the system. The expert, through an interactive procedure with the user, populates the repository by adding new nodes to the Ontology. The VR Engine offers structured boxes as an interface for the expert, which in the back end of the system create new individuals in the Ontology. After the observation and modelling of the task is completed, the application gives the user the opportunity to link the manually created nodes with the auto-generated low-level tasks, and enhances this functionality by generating suggestions based on already conducted HTAs.

The Ontology is formed based on a procedure-oriented analysis in order to sustain information about the links of the lowest-level (Leaf) tasks to the higher-level tasks, as well as to the goals of the user. This way, it aims to predict and auto-fill or suggest the inferred supertasks of the Leaf-level tasks based on past experience (stored data). Since the prediction cannot always be accurate, the expert is given the choice to reject the system's suggestion and add their own entry to the repository. Of the HTA plans, the Sequential plans have already been draft-modelled, due to their simplicity, in a way that can offer flexibility for the implementation of more complicated plans.

The tasks that belong to the lowest level have a Leaf plan. The task hierarchy starts from the bottom level, i.e. the Leaf-planned tasks, which are automatically generated by the actions of the user and compiled by the respective scripts, and is completed with the higher-level goals that are manually compiled by the human expert during a session. At the end of the session, the expert has the flexibility to link the bottom-level tasks to the upper-level ones and to the goal, assisted by suggestions, provided that the same hierarchy already exists in the repository from past entries. If the current task hierarchy does not exist in the repository, it is created without the assistance of suggestions and serves as a reference HTA for future sessions.

Figure 21. Procedure-based Task Classification

Each bottom-level (Leaf) cockpit procedure, which corresponds to a physical action that takes place in the cockpit, is modelled as an Action. An Action is linked (nested) to a task through the Object Property hasAction. This way an action can belong to different tasks of the same HTA. For example, a pilot can pull the brake lever backwards twice, during both the final approach and the touchdown procedures; the two different tasks will share the same physical action.

All tasks are linked to their subtasks through the Object Property hasChild and to the plan of their subtasks with the Object Property hasPlan. Each plan of a task is a new individual that belongs to one of the Plan classes (Sequential, Conditional, Parallel etc.). Task1.1 and Task1.2 in Figure 21 belong to a Sequential plan. The Sequential plan is modelled based on the principle of the Ordered List in OWL [39]: all the children (of the same hierarchy level) of a task are defined, and each of them carries an Object Property hasNext that links to the next task in the sequence. The last task of the plan is identified by having an empty hasNext value (). All individuals that belong to the class Task, in addition to the plan of their subtasks, are linked to the plan that defines the sequence of their own execution. The Object Property that serves this purpose is belongToPlan (Figure 22).

Figure 22. Detailed modelling of Task, Action and Plan
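The ordered-list pattern can be sketched as SPARQL-insertable triples as follows; the namespace and the task names are illustrative assumptions.

    # Sequential plan as an ordered list: each task points to its
    # successor via hasNext. Namespace and names are illustrative.
    PLAN_TRIPLES = """
    PREFIX ck: <http://example.org/cockpit#>
    INSERT DATA {
        ck:Plan_1 a ck:SequentialPlan .
        ck:Task_1_1 ck:belongToPlan ck:Plan_1 ;
                    ck:hasNext ck:Task_1_2 .
        ck:Task_1_2 ck:belongToPlan ck:Plan_1 .
        # Task_1_2 carries no hasNext value, marking the end of the plan.
    }
    """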

The Action individual, as mentioned above, is used to describe the physical low-level actions that take place in a cockpit. The breakdown of Action is based on the different types of data that are utilized from the VR engine for its compilation. The VR engine can provide a range of data types in discrete variables through SPARQL. In this draft version, the stored data are categorized as displayed in Figure 22. The Action is linked to its string display name through the Data Property hasName; this string value can be used for the visualisation of the instance. It is also linked to an instance of the class User (PilotFlying, PilotNotFlying) through the Object Property hasUser. The hasVerb Object Property links the Action to the corresponding physical interaction between the User and the Object, which is defined by the Object Property hasObject. The Action can also be linked to the Simulator Data-Ref that is affected, through the Object Property hasDataRef, and to the value that is set on that Data-Ref, with the Data Property hasValue.

4.4 Functional grouping

In addition to the product breakdown based on the scenegraph hierarchy described above, new classes are added in the Ontology of the cockpit that serve to group the interactive elements of the cockpit as well as the displays, and Object Properties are added to create links for this type of grouping. Functional grouping aims to assist in the extraction of the Hierarchical Task Analysis (HTA), as it distinguishes which verbs will be used in the interaction with each element.

For any different cockpit that needs to be implemented, the scenegraph, the semantic architecture and the HTA capabilities have to follow the same hierarchy. The same holds for each user of the Virtual Cockpit of the tool. For every different person that uses the system, a new individual is created under the class PF or PNF, depending on the position that they hold during the simulation. New instances are created for every interactive part of the pilot's body, and each is linked with data properties to its individual values (position, orientation etc.).

The Ontology described in the previous sections allows for the implementation of different Semantic cockpits and their mapping to the corresponding Virtual Cockpits through a 1-1 correspondence with the Scenegraph of the respective Virtual Reality Engine. Distinct individuals are created for each unique cockpit as well as for the Users that participate in each session. The instances follow the classification described and presented above and have unique data properties and naming that distinguish them from those of other Virtual Cockpits.

Following the above principles, more than one cockpit can co-exist in a single Triple Store, a capability which enhances the comparability of different virtual models.

In the current version of the Semantic cockpit, four main categories (classes) are used, which contain all the other elements/subclasses. The main properties of all functional elements are State and Position (3D coordinates in the cockpit). Based on the above classification, the states that an interactive element can have vary as follows (Figure 19):

- Button: Pressed/Not Pressed
- Lever: Percentage OR Discrete Integer States
- Switch: Discrete Integer States
- Knob: Percentage OR Discrete Integer States

Following the previous description of the class Pilot, there are two users of the virtual cockpit: the Pilot Flying the plane (PF) and the Pilot Not Flying (PNF). These entities have to be semantically modelled as classes, along with the interactive body parts of each. At this stage the interactive body parts of the pilots are their hands and fingers, as well as their eyes, which are connected to each pilot through Object Properties (hasPFBodyParts, hasPNFBodyParts). Following the same principle, the fingers, also declared as subclasses of the class BodyParts, are linked to the hands by using the Object Properties hasPFRHFinger, hasPFLHFinger, hasPNFRHFinger and hasPNFLHFinger. The resulting semantic breakdown of the users is shown in Figure 23.

Figure 23: Classification of the body parts that interact with the Cockpit Elements.

5 Industrial Test Case

The use case that has been defined for the system is a pilot-driven procedure. This process results in semi-automated completion of the HTA by the system. The ILS approach and auto-landing procedure was selected and executed according to a real aircraft manual. This procedure takes place in an aircraft during the actual landing at an airport and, even though it can vary depending on many factors (aircraft weight, weather, landscape etc.), the basic steps are the same. The selected task was defined by an aircraft manufacturer for the reference HTA and has been applied in a twenty-minute procedure flown in their VR simulator, accompanied by prototypical checklist items for this task. The HTA was completed through an iterative process of reviewing the session of the simulated procedure and the associated checklist document, and defining the task steps.

In the scenario, the pilot flies in the VR environment and the designed system detects the interactions of the pilot with the cockpit items using the VR collision detection and a draft implementation of eye-tracking. These tracking methods result in the automatic selection of the relevant verbs. A list of HTA verbs has currently been defined for the developed tool; it is expected that this verb list will be amended as a future step. At the current state, and with the tools that we have available, the mechanism for identification and storage has been developed for two types of verbs:

Physical: Physical verbs are at the highest level of maturity, as these were the first implementation in the tool. Rules based on the VR principle of collision detection and on task-time literature (where relevant) have been written, and changes in object state were also defined. Aside from singular verb extraction from physical collisions in the Virtual Environment, many verbs have been identified by recognizing whether two or more primitives were performed together in the VR environment; for example, if TOUCH + continuous pressure >1000ms + RELEASE is detected, it is assumed that the verb PUSH has been performed.

Visual: These are the next verbs that were implemented into the tool and have been tested with a prototype wireless wearable eye-tracking system. The visual verbs do not result in any changes of object state.

To better demonstrate the context of the use case, a small part of the Initial Approach procedure is presented below, broken down into simple steps by an HTA expert. Following that, the steps that are actually recognized, and of interest from the VR environment point of view, are indicated.

The HTA expert's extracted procedure steps are the following:

1 Execute managed descent
1.1 Descend aircraft (SYSTEM)
1.2 Display descent on VDEV and speed tape on PFD (SYSTEM)
1.3 Monitor the VDEV and speed indicator on PFD (PF and PNF)
1.4 Move right hand to throttle lever (PF)
1.5 Place right hand on throttle lever (PF)
1.6 Move throttle back (PF)

where PF is Pilot Flying and PNF is Pilot Not Flying. From this procedure, the current Virtual Environment that has been developed can detect the following task sequence:

1 Execute managed descent
1.1 Monitor the VDEV and speed indicator on PFD (PF and PNF)
1.2 Touch Throttle lever (PF)
1.3 Move Throttle Back (PF)

Since we are conducting a human-centered study, the system's actions are not displayed, although the data-refs from the system can be utilized for reaching conclusions on the validity of the pilots' actions inside the Virtual Environment.

5.1 Hardware setup

The realization of the method was done in 3DVIA Virtools. The geometry was imported from a realistic CAD model, and the scenegraph hierarchy was edited to serve the purposes of our analysis. The designed aircraft cockpit is based on future industrial designs that follow the Glass Cockpit concepts. A full list of interactive elements was included. Two different immersive hardware setups were tested with our developments. Different configurations were made for each, and they are presented in the two following sections.

CAVE

The primary setup for the implementation of our system was a complete CAVE configuration (Figure 24). For the purposes of this study, 3DVIA Virtools was set up for the cluster configuration and adjusted to the needs of the custom-built CAVE system. The implementation included precise configuration of all the geometric elements of the 3-dimensional environment in order to match the actual user workspace that the hardware of the CAVE offers. Different configurations of VEs in terms of scaling, lighting and interaction methods were tested in order to determine the configuration that offers the most realistic results in terms of user experience inside a Virtual Cockpit.

Figure 24. LMS CAVE Configuration

3DVIA Virtools runs on a synchronized cluster of 3 HP Z820 workstations with a total of 24 processing cores and 3 NVIDIA Quadro 6000 graphics processing units. For visualization, 3 Barco RLM-W12 Full HD projectors are connected to the above system. For the motion tracking of the user, an optical tracking system is used: three or more reflectors are linked together in the tracker's utility in order to create a virtual object that the system can locate through infrared cameras. Once the virtual marker is created in the tracking software, its position and orientation can be distributed over the network through a server that follows the VRPN protocol. This way, the VR engine's VRPN client subscribes to this server and receives the data of the tracked body part. To correlate these data to the corresponding virtual body part, scripts were developed inside Virtools.

In the context of this study, an eye-tracking prototype was tested and reviewed during a limited-time test on our setups, and the results were shared with the producer company. This wearable device allows for wireless eye tracking and can be of great use for monitoring the gaze of the pilot.

HMD-Magnetic tracking configuration

This configuration (Figure 25) allows for better accuracy of the user's movement inside the workspace, but has the downside of being a completely wired and heavy setup.

Figure 25. HMD-Magnetic Tracking Setup

The nVisor SX60 by NVIS was used as the visualization device for this setup. For tracking, the HMD is coupled with a high-precision magnetic tracker that supports up to 4 wired magnetic markers in a workspace of 3 m radius. The electromagnetic transmitter-tracker system used is the Polhemus Fastrak.

5.2 VR Environment and Semantic Modelling

The value of our use case is that the principles used to design and develop this application can be adapted and applied with minimal effort to any manufacturing procedure. This means that our initial development, even in its primitive form, can be applied in the early phases of product design, in the industries where designers create the first prototypes towards the finalization and roll-out of new aircraft cockpits. There is also potential for this method to become completely independent of the Virtual Environment and apply, with no modification, to any virtual world.

The cockpit selected is an experimental design from an industrial aerospace manufacturer that is being evolved towards future real-life aircraft releases; it is displayed in Figure 26.

Figure 26. Virtual Cockpit Use case

The cockpit is conceptually broken down into the different workspaces that the pilots use during the flight; these workspaces are indicated in Figure 26. The conceptual modelling goes further down, to the simpler interaction items of the cockpit, and was the basis for the semantic scenegraph. The semantic scenegraph that was refined from the VR hierarchy and the conceptual modelling is shown in Figure 27 and Figure 28. This is the Scenegraph Ontology described earlier, which includes the complete geometry scenegraph as it is used inside the VR environment, modelled following the OWL classification principles. This ontology's purpose is to serve as a remote scenegraph, outside the Virtual Environment, with the potential to fully replace the actual VR scenegraph in some cases. This feature is implemented in favor of the modularity and re-usability of each agent of the tool in several different setups, even outside the scope of this study. In the following figures, the full hierarchy of the Semantic Scenegraph is expanded in order to indicate the closely connected semantic nodes and the capability for advanced knowledge management across the repository, especially if we take into account the coupling of this implementation with the Procedural and Functional ontologies.

Figure 27: Semantic cockpit ontology classification (Part 1).

Figure 28: Semantic cockpit ontology classification (Part 2).

By inserting the new Cockpit Geometry into our system and generating the Semantic Scenegraph, we demonstrate the modularity that this study is oriented towards. Any imported geometry (not necessarily a cockpit) can be processed into a Semantic Scenegraph, which makes our system modular in terms of geometry and use case. On the other hand, if it is assumed that the end user wants to replace the VR Module with an updated module, the Semantic Scenegraph is not affected and can also provide information that might otherwise be lost in the transition between VR Engines. This way we demonstrate the added value of our system in terms of modularity and flexibility, saving time and cost for the potential end users of our developed method.

5.3 Auto-Extracted HTA

After the user executes some steps of the above procedures, the system, by using collision detection and eye tracking, can generate the following array, simply by modelling the VR events of these steps.

Figure 29. Initial HTA Array after the execution of a procedure

Breaking down the contents of the array from left to right: Column 0 shows the Virtual User that undertakes the task; it can be PF (Pilot Flying) or PNF (Pilot Not Flying). Column 1 displays the verb that describes the user's action. Column 2 shows the interactive object that the user has interacted with. Column 3 displays the value of the Data-Ref of the simulation that is being affected (if it exists). Column 4 indicates the starting time of each task, and Column 5 the duration of the task. Columns 6 to 8 (not displayed here) are used by the algorithm during the processing steps in order to determine the correct task hierarchy.

The array in Figure 29 is a temporary one that serves as a tool for the extraction of the final array displayed in Figure 31. The algorithm that creates these arrays accesses the Semantic Repository in order to refine the resulting temporary array and create the final array. At this early stage, the hierarchical breakdown of the HTA is done based on the DataRef affected (Figure 30): the given rule for the automation of the process is that when the affected DataRef changes, a new supertask is created.

Figure 30. Transition from the Initial Array to the Final Array
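The supertask rule can be sketched as follows; the row layout follows the column description of Figure 29, while the function itself is an illustrative reconstruction and not the actual Virtools script.

    # Group detected rows into supertasks: a new supertask starts whenever
    # the affected DataRef (column 3) changes. Illustrative sketch.

    def group_supertasks(rows):
        """rows: (user, verb, obj, dataref_value, start_time, duration)."""
        supertasks, current, last_ref = [], [], object()
        for row in rows:
            if row[3] != last_ref:          # DataRef changed -> new supertask
                if current:
                    supertasks.append(current)
                current, last_ref = [], row[3]
            current.append(row)
        if current:
            supertasks.append(current)
        return supertasks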

In Figure 31, in the supertasks with IDs 1, 2, 3, 4, 5 and 6, the verbs and the 3D Objects are not detected from the user's interactions; they are extracted by the system by utilizing rules embedded in the VR Engine and by querying the semantic repository. For clarity, the Initial HTA array, which includes just the detected steps, is the one presented in Figure 29. Between the detection array and the final array, a series of developed scripts take action in order to retrieve data, process them and, according to the rules, produce the final HTA tree. To better clarify the contents of the ontologies, we briefly explain the contents of the arrays. First of all, the elements are placed in the order in which they are detected in the VR, based on the timestamp, here named StartTime (Column 4).

Figure 31. Draft HTA extracted from the VR environment

By auto-extracting the HTA tree, we eliminate the need for an HTA expert to produce the HTA tree. Furthermore, by keeping it in the array format shown above, we give the expert the ability to manipulate and change any desired data in order to improve the result. By storing the analysis in the Semantic Repository, we give our system re-usability and a knowledge repository that does not rely on the individual experience of the expert, as the manual methods did. To sum up, by utilizing all the available resources of the VR simulation and the added value of the Semantics in our implementation, we provide a good level of automation for the HTA extraction process, as well as the flexibility to manipulate and re-use it.

5.4 Querying the Repository and Providing Feedback to User

Apart from the functionality of the auto-detection of the HTA presented above, another functionality has been implemented in the Virtual Environment: manipulating the repository and providing feedback to the user, as indicated below. In the selected task, the Repository is asked for all the buttons that belong to the MCDU. The repository can provide this information thanks to the hierarchical, functional breakdown. These elements are then highlighted to the user by changing their texture in the VR environment.
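A hedged sketch of what such a request could look like follows, assuming the illustrative namespace used in the previous sketches and an MCDU functional-group class; the actual query of Figure 32 may differ.

    # Hypothetical query for all buttons of the MCDU functional group,
    # plus the post-processing that strips namespaces from the results.
    MCDU_QUERY = """
    PREFIX ck:   <http://example.org/cockpit#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?button WHERE {
        ?class rdfs:subClassOf* ck:MCDU .
        ?button a ?class .
    }
    """

    def strip_namespace(uri):
        # Keep only the 3D object name, e.g. ".../cockpit#BtnDir" -> "BtnDir"
        return uri.rsplit("#", 1)[-1]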

During each step of the selected process, once the system detects an action by the designer (or a set of actions), it forms a low-level query that is posted to the triple store. The information from the collision detection is processed and the appropriate text is added in order to create a SOH (SPARQL over HTTP) query, which is sent outside the VR Engine using the HTTP protocol. An outline of how a query is formed can be seen in Figure 32, serving the function that the user has chosen through the VR engine or the Designer Interface (in our case the highlight is directly fed back to the VR engine so that the textures of the selected objects are changed to a pre-lit one). Supposing all the buttons of the MCDU unit are subclasses of the MCDU functional group class, this is the way that the Semantic Query is formed.

Figure 32: Forming a Query

The SPARQL query results are received as a string by the VR engine and added to an array after post-processing is applied to them in order to remove the namespaces and keep the actual 3D object names. Then whatever processing the user chooses is applied to the selected 3D object, and the outcome is again sent and stored to the Semantic Repository; if needed, feedback is sent back to the user following the same procedure as above. The results of the test case have indicated that our method is suitable for the analysis and improvement of complex Virtual Environments, by providing the feature of auto-extraction of a process model, detection and comparison of the present process


More information

Alternative Interfaces. Overview. Limitations of the Mac Interface. SMD157 Human-Computer Interaction Fall 2002

Alternative Interfaces. Overview. Limitations of the Mac Interface. SMD157 Human-Computer Interaction Fall 2002 INSTITUTIONEN FÖR SYSTEMTEKNIK LULEÅ TEKNISKA UNIVERSITET Alternative Interfaces SMD157 Human-Computer Interaction Fall 2002 Nov-27-03 SMD157, Alternate Interfaces 1 L Overview Limitation of the Mac interface

More information

The use of gestures in computer aided design

The use of gestures in computer aided design Loughborough University Institutional Repository The use of gestures in computer aided design This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation: CASE,

More information

VIRTUAL IMMERSION UTILIZATION FOR IMPROVING PERCEPTION OF THE 3D PROTOTYPES

VIRTUAL IMMERSION UTILIZATION FOR IMPROVING PERCEPTION OF THE 3D PROTOTYPES September 2017 Engineering VIRTUAL IMMERSION UTILIZATION FOR IMPROVING PERCEPTION OF THE 3D PROTOTYPES Ghinea MIHALACHE 1 Marinică (Stan) ANCA 2 ABSTRACT: VIRTUAL IMMERSION (OR VR) GETS INTO ATTENTION

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

LINKING CONSTRUCTION INFORMATION THROUGH VR USING AN OBJECT ORIENTED ENVIRONMENT

LINKING CONSTRUCTION INFORMATION THROUGH VR USING AN OBJECT ORIENTED ENVIRONMENT LINKING CONSTRUCTION INFORMATION THROUGH VR USING AN OBJECT ORIENTED ENVIRONMENT G. Aouad 1, T. Child, P. Brandon, and M. Sarshar Research Centre for the Built and Human Environment, University of Salford,

More information

Enhancing Shipboard Maintenance with Augmented Reality

Enhancing Shipboard Maintenance with Augmented Reality Enhancing Shipboard Maintenance with Augmented Reality CACI Oxnard, CA Dennis Giannoni dgiannoni@caci.com (805) 288-6630 INFORMATION DEPLOYED. SOLUTIONS ADVANCED. MISSIONS ACCOMPLISHED. Agenda Virtual

More information

A SERVICE-ORIENTED SYSTEM ARCHITECTURE FOR THE HUMAN CENTERED DESIGN OF INTELLIGENT TRANSPORTATION SYSTEMS

A SERVICE-ORIENTED SYSTEM ARCHITECTURE FOR THE HUMAN CENTERED DESIGN OF INTELLIGENT TRANSPORTATION SYSTEMS Tools and methodologies for ITS design and drivers awareness A SERVICE-ORIENTED SYSTEM ARCHITECTURE FOR THE HUMAN CENTERED DESIGN OF INTELLIGENT TRANSPORTATION SYSTEMS Jan Gačnik, Oliver Häger, Marco Hannibal

More information

Virtual Environments. Ruth Aylett

Virtual Environments. Ruth Aylett Virtual Environments Ruth Aylett Aims of the course 1. To demonstrate a critical understanding of modern VE systems, evaluating the strengths and weaknesses of the current VR technologies 2. To be able

More information

Virtual prototyping based development and marketing of future consumer electronics products

Virtual prototyping based development and marketing of future consumer electronics products 31 Virtual prototyping based development and marketing of future consumer electronics products P. J. Pulli, M. L. Salmela, J. K. Similii* VIT Electronics, P.O. Box 1100, 90571 Oulu, Finland, tel. +358

More information

Executive Summary. Chapter 1. Overview of Control

Executive Summary. Chapter 1. Overview of Control Chapter 1 Executive Summary Rapid advances in computing, communications, and sensing technology offer unprecedented opportunities for the field of control to expand its contributions to the economic and

More information

Semi-Autonomous Parking for Enhanced Safety and Efficiency

Semi-Autonomous Parking for Enhanced Safety and Efficiency Technical Report 105 Semi-Autonomous Parking for Enhanced Safety and Efficiency Sriram Vishwanath WNCG June 2017 Data-Supported Transportation Operations & Planning Center (D-STOP) A Tier 1 USDOT University

More information

Session 3 _ Part A Effective Coordination with Revit Models

Session 3 _ Part A Effective Coordination with Revit Models Session 3 _ Part A Effective Coordination with Revit Models Class Description Effective coordination relies upon a measured strategic approach to using clash detection software. This class will share best

More information

The Development of Computer Aided Engineering: Introduced from an Engineering Perspective. A Presentation By: Jesse Logan Moe.

The Development of Computer Aided Engineering: Introduced from an Engineering Perspective. A Presentation By: Jesse Logan Moe. The Development of Computer Aided Engineering: Introduced from an Engineering Perspective A Presentation By: Jesse Logan Moe What Defines CAE? Introduction Computer-Aided Engineering is the use of information

More information

Modelling of robotic work cells using agent basedapproach

Modelling of robotic work cells using agent basedapproach IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Modelling of robotic work cells using agent basedapproach To cite this article: A Skala et al 2016 IOP Conf. Ser.: Mater. Sci.

More information

First steps towards a mereo-operandi theory for a system feature-based architecting of cyber-physical systems

First steps towards a mereo-operandi theory for a system feature-based architecting of cyber-physical systems First steps towards a mereo-operandi theory for a system feature-based architecting of cyber-physical systems Shahab Pourtalebi, Imre Horváth, Eliab Z. Opiyo Faculty of Industrial Design Engineering Delft

More information

An Agent-based Heterogeneous UAV Simulator Design

An Agent-based Heterogeneous UAV Simulator Design An Agent-based Heterogeneous UAV Simulator Design MARTIN LUNDELL 1, JINGPENG TANG 1, THADDEUS HOGAN 1, KENDALL NYGARD 2 1 Math, Science and Technology University of Minnesota Crookston Crookston, MN56716

More information

Separation of Concerns in Software Engineering Education

Separation of Concerns in Software Engineering Education Separation of Concerns in Software Engineering Education Naji Habra Institut d Informatique University of Namur Rue Grandgagnage, 21 B-5000 Namur +32 81 72 4995 nha@info.fundp.ac.be ABSTRACT Separation

More information

Developing a VR System. Mei Yii Lim

Developing a VR System. Mei Yii Lim Developing a VR System Mei Yii Lim System Development Life Cycle - Spiral Model Problem definition Preliminary study System Analysis and Design System Development System Testing System Evaluation Refinement

More information

ISO JTC 1 SC 24 WG9 G E R A R D J. K I M K O R E A U N I V E R S I T Y

ISO JTC 1 SC 24 WG9 G E R A R D J. K I M K O R E A U N I V E R S I T Y New Work Item Proposal: A Standard Reference Model for Generic MAR Systems ISO JTC 1 SC 24 WG9 G E R A R D J. K I M K O R E A U N I V E R S I T Y What is a Reference Model? A reference model (for a given

More information

Negotiation Process Modelling in Virtual Environment for Enterprise Management

Negotiation Process Modelling in Virtual Environment for Enterprise Management Association for Information Systems AIS Electronic Library (AISeL) AMCIS 2006 Proceedings Americas Conference on Information Systems (AMCIS) December 2006 Negotiation Process Modelling in Virtual Environment

More information

Mobile Audio Designs Monkey: A Tool for Audio Augmented Reality

Mobile Audio Designs Monkey: A Tool for Audio Augmented Reality Mobile Audio Designs Monkey: A Tool for Audio Augmented Reality Bruce N. Walker and Kevin Stamper Sonification Lab, School of Psychology Georgia Institute of Technology 654 Cherry Street, Atlanta, GA,

More information

Programmable Wireless Networking Overview

Programmable Wireless Networking Overview Programmable Wireless Networking Overview Dr. Joseph B. Evans Program Director Computer and Network Systems Computer & Information Science & Engineering National Science Foundation NSF Programmable Wireless

More information

CMI User Day - Product Strategy

CMI User Day - Product Strategy CMI User Day - Product Strategy CMI User Day 2003 New Orleans, USA CMI User Day 2003 New Orleans, USA Tino Schlitt T-Systems PLM Solutions CATIA Metaphase Interface - Overview Integration of CATIA V4 /

More information

The Role of Computer Science and Software Technology in Organizing Universities for Industry 4.0 and Beyond

The Role of Computer Science and Software Technology in Organizing Universities for Industry 4.0 and Beyond The Role of Computer Science and Software Technology in Organizing Universities for Industry 4.0 and Beyond Prof. dr. ir. Mehmet Aksit m.aksit@utwente.nl Department of Computer Science, University of Twente,

More information

Interactive Design/Decision Making in a Virtual Urban World: Visual Simulation and GIS

Interactive Design/Decision Making in a Virtual Urban World: Visual Simulation and GIS Robin Liggett, Scott Friedman, and William Jepson Interactive Design/Decision Making in a Virtual Urban World: Visual Simulation and GIS Researchers at UCLA have developed an Urban Simulator which links

More information

STRATEGO EXPERT SYSTEM SHELL

STRATEGO EXPERT SYSTEM SHELL STRATEGO EXPERT SYSTEM SHELL Casper Treijtel and Leon Rothkrantz Faculty of Information Technology and Systems Delft University of Technology Mekelweg 4 2628 CD Delft University of Technology E-mail: L.J.M.Rothkrantz@cs.tudelft.nl

More information

Official Documentation

Official Documentation Official Documentation Doc Version: 1.0.0 Toolkit Version: 1.0.0 Contents Technical Breakdown... 3 Assets... 4 Setup... 5 Tutorial... 6 Creating a Card Sets... 7 Adding Cards to your Set... 10 Adding your

More information

The Disappearing Computer. Information Document, IST Call for proposals, February 2000.

The Disappearing Computer. Information Document, IST Call for proposals, February 2000. The Disappearing Computer Information Document, IST Call for proposals, February 2000. Mission Statement To see how information technology can be diffused into everyday objects and settings, and to see

More information

HeroX - Untethered VR Training in Sync'ed Physical Spaces

HeroX - Untethered VR Training in Sync'ed Physical Spaces Page 1 of 6 HeroX - Untethered VR Training in Sync'ed Physical Spaces Above and Beyond - Integrating Robotics In previous research work I experimented with multiple robots remotely controlled by people

More information

Colombia s Social Innovation Policy 1 July 15 th -2014

Colombia s Social Innovation Policy 1 July 15 th -2014 Colombia s Social Innovation Policy 1 July 15 th -2014 I. Introduction: The background of Social Innovation Policy Traditionally innovation policy has been understood within a framework of defining tools

More information

Industry 4.0. Advanced and integrated SAFETY tools for tecnhical plants

Industry 4.0. Advanced and integrated SAFETY tools for tecnhical plants Industry 4.0 Advanced and integrated SAFETY tools for tecnhical plants Industry 4.0 Industry 4.0 is the digital transformation of manufacturing; leverages technologies, such as Big Data and Internet of

More information

Digitalisation as day-to-day-business

Digitalisation as day-to-day-business Digitalisation as day-to-day-business What is today feasible for the company in the future Prof. Jivka Ovtcharova INSTITUTE FOR INFORMATION MANAGEMENT IN ENGINEERING Baden-Württemberg Driving force for

More information

Semantic Privacy Policies for Service Description and Discovery in Service-Oriented Architecture

Semantic Privacy Policies for Service Description and Discovery in Service-Oriented Architecture Western University Scholarship@Western Electronic Thesis and Dissertation Repository August 2011 Semantic Privacy Policies for Service Description and Discovery in Service-Oriented Architecture Diego Zuquim

More information