
Real Time Image Processing manuscript No.
(will be inserted by the editor)

Ricardo Jota · Bruno Araújo · Luís Bruno · João M. Pereira · Joaquim A. Jorge

IMMIView: A multi-user solution for design review in real-time

the date of receipt and acceptance should be inserted later

Abstract IMMIView is an interactive system that relies on multiple modalities and multi-user interaction to support collaborative design review. It was designed to offer natural interaction in visualization setups such as large scale displays, head mounted displays or TabletPC computers. To support architectural design, our system provides content creation and manipulation, 3D scene navigation and annotations. Users can interact with the system using laser pointers, speech commands, body gestures and mobile devices. In this paper, we describe how we designed a system to answer architectural user requirements. In particular, our system takes advantage of multiple modalities to provide natural interaction for design review. We also propose a new graphical user interface adapted to architectural user tasks, such as navigation or annotation. The interface relies on a novel stroke based interaction supported by simple laser pointers as input devices for large scale displays. Furthermore, input devices like speech and body tracking allow IMMIView to support multiple users. Moreover, they allow each user to select different modalities according to their preference and the adequacy of the modality to the task. We present a multi-modal fusion system developed to support multi-modal commands in a collaborative, co-located environment, i.e., with two or more users interacting at the same time on the same system. The multimodal fusion system listens to inputs from all the IMMIView modules in order to model user actions and issue commands. The multiple modalities are fused by a simple rule-based submodule developed in IMMIView and presented in this paper. User evaluations performed on IMMIView are also presented. The results show that users feel comfortable with the system and suggest that users prefer the multi-modal approach to more conventional interactions, such as mouse and menus, for the architectural tasks presented.

VIMMI group, INESC-ID, Department of Information Systems and Computer Science, IST/Technical University of Lisbon, Rua Alves Redol, 9, Lisboa, Portugal. rjc@vimmi.inesc-id.pt, brar@vimmi.inesc-id.pt, lbruno@estig.ipbeja.pt, jap@inesc-id.pt, jaj@inesc-id.pt

1 Introduction

The IMPROVE project focused on the review of 3D models in a multiple user, loosely coupled collaborative environment. The project included architects and automotive users who presented us with specific requirements divided into three areas: navigation, annotations and object editing. Users also defined three scenarios where different artifacts were required for interaction and visualization. In the first scenario, users requested 3D head mounted displays to physically move around a virtual model. In the second scenario, users were required to go on-site and use a tablet PC to annotate and edit an architectural model. Finally, in the third scenario, users asked to be able to loosely collaborate while interacting with a large scale display. Our challenge in the project was to create a system that provided the desired functionalities and supported such diverse scenarios while still being recognized as a single system. Moreover, the system was required to be collaborative, co-located and distributed, which also meant that the system needed to be multi-user.
Our contribution to IMPROVE was IMMIView: a multi-user system for real-time design review. Its graphical user interface is based on pen strokes instead of point and click. We found that this kind of graphical user interface could be applied to all the requested scenarios. That is, the system can be used with 3D head mounted displays, a large-scale display or multiple tablet PCs networked to form a collaborative environment, while the graphical user interface only requires minor adjustments. For distributed collaboration, the system includes a communication backbone that is responsible for the synchronization between instances, even if the system is being used on different output devices. For co-located collaboration, the system supports multiple inputs, which it is able to combine by means of the multi-user fusion module described in this paper.

To enable interaction with the GUI, IMMIView includes a number of modalities: pen or laser interaction, speech recognition, mobile devices and body tracking. All of these can be used in conjunction to produce new multi-modal dialogs more appropriate to some scenarios. For example, in the large scale display scenario, the multi-modal system allows users to combine body gestures with laser input and voice recognition to resize objects. To measure the success of IMMIView, we developed tests with users where quantitative and qualitative data, as well as some important comments and suggestions, were recorded. The users were handed a questionnaire based on the standardized ISONORM 9241 Part 10 (usability questionnaire) and the results suggest that IMMIView: (i) is suited for the main tasks (annotation, 3D editing, navigation), (ii) conformed with user expectations related to the interaction techniques and multimodal resources provided, and (iii) presented, in general, a comfortable learning curve. The rest of the paper is structured as follows. First we discuss the related work. Afterwards, we detail the functionalities, the system architecture and the graphical user interface of IMMIView. Next, we describe the modalities along with the system support for multi-user interaction and multi-modal fusion. User tests and other deployments are presented in the results section, followed by our conclusions and future work.

2 Related Work

During the last decade, several studies showed that virtual and augmented reality environments are suitable for collaborative design review in architectural design applications (Iacucci and Wagner (2003); Dvorak et al (2005); Wang (2007)). Tasks supported by existing applications include 3D annotations (Jung et al (2002); Kadobayashi et al (2006); Kleinermann et al (2007); Isenberg et al (2006)), object placement in virtual environments (Drettakis et al (2007); Broll (2003)), 3D navigation (Broll (2003); Kleinermann et al (2007)) and augmented information visualization (Ishii et al (2002)). In Drettakis et al (2007), a 2D desktop application is presented to place virtual content using a top view in a stereo visualization environment. The evaluation of the system was focused on rendering quality, but it also showed that the top view does not allow users to place objects with precision. Both Kadobayashi et al (2006) and Jung et al (2002) propose 2D desktop applications where several users can interact with the same virtual environment and share annotations, using a billboard representation (Kadobayashi et al (2006)) or by sketching directly over the 3D model (Jung et al (2002)). More recently, Kleinermann et al (2007) proposed an enriched annotation system for virtual environments where multimedia content such as images and videos is supported, and where it is possible to define navigation paths using 3D landmarks placed along the 3D scene. During site survey analysis, architects deal with large quantities of 2D information, such as notes, maps and photos. To support this step of the architectural process, several applications were designed for tabletop interaction and augmented reality. Ishii (Ishii et al (2002)) presents the Luminous Table, where several users can interact, and takes advantage of a tangible interface to place virtual objects on top of 2D drawings. They also complemented their system with a 2D projection over the physical objects to visualize simulated effects such as wind patterns, traffic or virtual shadows.
The ARTHUR project (Broll (2003)) also takes advantage of tangible interfaces and augmented reality. It provides a setup where several users, wearing head mounted displays, can interact with virtual buildings. Using freehand gestures and 3D pointers, users can open 3D floating menus that allow shape creation and selection. A comparison between tangible interfaces and a traditional user interface was performed using a 3D stereo working bench setup (Kim and Maher (2005)), showing that tangible interfaces help coordinate multiple actions. While these systems present some solutions for architectural design, they are not suited for design review and do not offer annotations, navigation and shape editing in the same environment. In addition, augmented reality can limit the user in conceptual design review, since it requires visualization devices with a limited field of view, such as head-mounted displays. Furthermore, tabletops present a good environment for collaboration, but they do not reproduce the experience of a real (non virtual) navigation. Considering the decreasing price of projection systems, large scale displays can be used to mix the advantages of both visualization approaches: they do not constrain the user's field of view and can be used to reproduce the experience of a real navigation. Architects are used to interacting and performing modeling activities with 2D GUI based applications. However, these applications are usually designed on top of complex WIMP (Windows Icons Mouse Pointing) based interfaces, which do not naturally adapt to 3D environments or large scale displays. For large scale displays, the whiteboard analogy is considered more appropriate to describe interactive graphical displays because it affords drawing and sketching. This analogy was the subject of research with the introduction of 2D calligraphic interfaces. Taking advantage of emerging input devices such as tablet PCs, systems such as SKETCH (Zeleznik et al (1996)) and Teddy (Igarashi et al (1999)) better support 3D modeling by taking advantage of the designer's sketching skills. Both systems allow direct drawing on a 3D view, using a graphical 2D symbol-based grammar or contour sketching to invoke modeling operations. This approach was later followed by the GIDeS modeling project (Pereira et al (2003)), which explores new mechanisms, such as a suggestive interface, to support sketch interpretation.

The SmartPaper system (Shesh and Chen (2004)) combines 3D reconstruction with sketch-based interaction. More recent work (de Araújo and Jorge (2003); Nealen et al (2005)) enhances this approach by providing complex 3D modeling operators based on contour and feature-line editing. While sketch based interfaces are well adapted to 3D modeling, they do not entirely avoid conventional GUI primitives to activate other functionalities such as navigation. Jacoby and Ellis (1992) present 2D menus over a 3D map view adapted to navigation in virtual three-dimensional content. However, limitations of the virtual environment technology used contribute additional constraints to menu design. As alternatives to traditional menus, several 2D applications such as Hopkins (1991) and Callahan et al (1988) propose circular menus. Holosketch (Deering (1995)) uses a similar approach, where all the functionality is exposed using 3D icons organized in a circular layout. In IMMIView, we chose to combine sketching, gestures and voice modalities in order to reduce the need for traditional menus. We also propose a novel graphical user interface better adapted to large scale displays. Recently, different alternatives were proposed to replace or adapt traditional input devices for interaction with large scale displays. Several approaches (Buxton et al (2000); Grossman et al (2002); Bae et al (2004)) propose designing curves on large screens by mimicking the taping technique followed by the car industry. Cao and Balakrishnan (2004) developed an interface based on a wand (a colored stick) tracked by two cameras to interact with large scale displays. Other approaches use laser pointers as input devices for large scale displays (Lapointe and Godin (2005); Davis and Chen (2002); Oh and Stuerzlinger (2002)). Most of them try to adapt the WIMP concept to the laser input device, but run into problems due to lack of precision and unstable, jittery movements. While these approaches explicitly address large scale displays, none of them allows different users to interact with the display simultaneously or addresses architectural scenarios.

3 IMMIView

IMMIView is our solution for architectural design review of 3D models. The system was built to meet the architects' requirements, providing functionalities that allow users to perform the following tasks in the virtual environment: navigation, annotation, 3D editing and collaborative review. The system architecture relies on the AICI Framework (Hur et al (2006)) and on our own framework, called the IMMIView framework, which provides 3D content visualization, supports different input devices (marker tracking system, laser pointers, mice, keyboards and others) and implements interaction module concepts, such as gesture recognizers or multimodal fusion. IMMIView offers a graphical user interface that, when compared to traditional window based desktop applications, proposes an alternative layout that relies on stroke based interaction instead of the common point and click metaphor. The next subsections describe our solution in detail, providing a complete description of the architects' requirements and the functionality provided by our system.

3.1 Architectural System Requirements

Customer review of architectural 3D models is one of the major tasks performed by architects during a project lifecycle.
This section details the system requirements, on both functionalities and interaction metaphors, given by the architects for that task. For each functionality group (navigation, annotations, 3D editing and collaborative review) the interaction requirements are explained in the following subsections. The project review usually takes place at the office, where customers review the design alternatives taking into account information collected during on-site visits. Therefore, the system should be able to present different interactive design alternatives to support discussion between the architects and the customer. IMMIView should allow users to view the design proposal on desktop screens, 3D head mounted displays or large scale displays. Moreover, new 3D content and materials can be added to the scene and the lighting can be changed according to the time of day. Users must be able to annotate parts of the model using text and drawings. Finally, the system must allow navigation over the virtual 3D content. 2D and 3D content interaction should be natural and support different media types (annotations, 3D models, pictures, 2D drawings). Moreover, the interface should support sketch based interaction for most of its functionalities, taking advantage of user familiarity with pen based devices such as stylus tablets, interactive pen displays or other kinds of pointer devices. Thus, the interface should resemble the paper/pencil metaphor, enhanced with multimodal dialogues. In addition, tracking of user bodies and movements allows the system to monitor users' actions, such as 2D or 3D gestures, and creates further possibilities to support multimodal user input. In conclusion, users should be able to interact with the system using speech, gestures, the pen based device on the tablet PC, or a laser pointer when interacting with the large scale display.

Navigation

To explore the architectural 3D model, users need adequate means for navigation. The system should provide general approaches like flying, walking and examining. Flying and examining are dedicated to navigation within the virtual space, since the observer is detached from his/her physical location.

This detachment provides the user a means to easily reach locations normally not accessible during an early review session. The flying mode offers a better perspective with the maximum degrees of freedom, allowing the user to perform zoom operations on the virtual elements, while examining provides a navigation technique that focuses on a single object and allows rotations and zoom around the selected object. Walking, on the other hand, may provide a more natural and realistic way for the user to explore the model. For example, in the walking navigation mode, exploring the virtual scene requires the same actions as in reality: walking around and inside the 3D virtual building as if it were a real one. By using this modality, the user experiences the architectural artifacts from a first person point of view, and has to deal with elements such as stairs, walls, furniture and doors.

Annotations

Architects often take notes (audio, visual or documentary) which might be helpful at a later design stage. Annotations allow users to attach their comments and thoughts to a model entity. Thus, notes possess the character of an addendum, defining what cannot be expressed in any other way. By capturing design intentions and modification suggestions, and by documenting the reasons for the alterations applied to the model, the user is able to identify areas of interest and create annotations, either in visual (drawings) or in multimedia format (audio/video). The functionality of an annotation system can be separated into two basic functionalities: (i) create the note content and (ii) add the note to the corresponding object. The user interface has to provide the following functionalities: (i) a mechanism to hide and unhide annotations; (ii) efficient means to create annotations; (iii) support for annotations to include additional documents, such as construction plans or sketches; (iv) a way to filter annotations; and (v) a way to delete annotations.

3D Editing

The system should include the functionality to visually validate architectural models, but also the ability to change design prototypes by applying minor modifications. Therefore, design modification should address the creation and editing of object geometry. In particular, users should be able to create the following graphics primitives: cube, sphere, cone, cylinder and plane. Moreover, to manipulate existing objects the system should provide mechanisms to select objects and perform geometric transformations, like translation, rotation and scaling.

Collaborative Review

IMMIView should provide a set of tools and modalities that allow architects to review building conceptions in a collaborative environment, i.e., with other architects or customers. For collaborative review, users are required to be present at the same location. In this case, visualization is supported by the large scale display scenario, because of its large size and high resolution. Although navigation, verification and modification tools are relevant, during collaborative review it is more important to allow users to annotate at the same time, thus providing multiple user annotation functionality in the collaborative review process. Therefore, annotation and sketching should be possible for all participants, allowing each user to annotate at the same time. Moreover, control of the navigation should be accessible to all users; for example, a user should be able to change the view in order to share his or her view with the other participants.
Users interacting with the system at the same time are free to select different modalities. This means that one user can move physically in front of the large screen and use a particular workspace where he can use his own widgets to examine objects of interest, while another user uses different modalities like gestures, speech, pointing or tracing strokes to create annotations.

3.2 Our System Architecture

Fig. 1 User annotating using a laser pen to draw onscreen

The IMMIView application provides innovative multimodal interactions for 3D content visualization using a bus architecture, as depicted in Figure 3. The system relies on two frameworks: the AICI Framework (Hur et al (2006)) for visualization and tracking support and the IMMIView Framework for interaction support. This framework is similar to the Studierstube project (Schmalstieg et al (2002)); however, it relies on OpenSG (2009) instead of OpenInventor, allowing cluster based visualization on a tiled display system. InstantReality (IGD and ZGDV (2009)) presents similar functionality based on OpenSG and X3D descriptions. However, IMMIView multi-modal support requires deeper access to the application event loop than the one proposed by the X3D sensor based architecture.

Fig. 2 Two users collaborating on a Large Scale Display.

Fig. 3 The IMMIView System Architecture

The AICI Framework is responsible for the 3D rendering using display configurations such as head mounted displays, multi-projector screens or TabletPCs. The Visualization Manager is based on the OpenSG (2009) scenegraph and extended to support physically based rendering and advanced lighting using High Dynamic Range images. It also enables 3D stereo visualization and is complemented with a tracking component based on OpenTracker (Reitmayr and Schmalstieg (2005)). The tracking support enables 3D stereo using head tracking. The framework also provides a simple Widget Manager for immersive environments. Its event loop only supports binding tracking to the visualization and the usage of traditional input devices such as mice and keyboards. Our system is based on the AICI Framework, thus our visualization is required to execute inside the AICI main thread. We also run the event manager main function inside this thread, because the event manager main function executes the callback functions, which might include visualization related actions that are required to run inside the AICI main thread. All other modules run on their own threads. Because our system is based on events, we do not need to synchronize the multiple threads, except the visualization thread, which is already synchronized inside the AICI Framework.

Events

The main component of our system, the event manager, relies on the IMMIView framework, which was designed to offer innovative multi-modal interaction techniques. Because the AICI event loop does not support input devices such as laser pointers or body tracking, we felt the need to develop our own event manager. Therefore, the architecture of IMMIView relies on an event manager module that includes a bus where the other modules can publish or subscribe to events. The event manager implements a simple API that offers two functions: publish and subscribe. Using these two functions, the event manager knows who is interested in an event type and is able to forward the information to all subscribing modules. The publish function is called whenever new information is available in the IMMIView modules. To simplify event management, each event type is identified by a string corresponding to its type. For example, each module that subscribes to the laser event type gets called back by the event manager when the laser input module publishes new information. The event manager must run on a single thread, so that the callbacks are executed serially and in the correct order. Therefore, it implements a waiting queue that is filled inside the publish function and consumed in the event manager main thread function (which eventually calls the subscriber callbacks). According to their event behaviour, the IMMIView modules can be organized into three different classes: publishers, consumers and converters.
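As an illustration of the publish/subscribe bus just described, the following Python sketch shows one minimal way such an event manager could look. The class and event names are ours, not IMMIView's actual API; the point is that publish() only enqueues string-typed events, while a single dispatch loop delivers them serially to subscribers.

```python
import queue
from collections import defaultdict

class EventManager:
    """Minimal sketch of a string-typed publish/subscribe bus."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # event type -> callbacks
        self._pending = queue.Queue()          # filled by publish()

    def subscribe(self, event_type, callback):
        self._subscribers[event_type].append(callback)

    def publish(self, event_type, payload):
        # Called from any module thread; just enqueue the event.
        self._pending.put((event_type, payload))

    def dispatch_pending(self):
        # Called from the single event-manager thread (here, the AICI main
        # thread), so callbacks run serially and in publication order.
        while not self._pending.empty():
            event_type, payload = self._pending.get()
            for callback in self._subscribers[event_type]:
                callback(payload)

# Hypothetical usage: a consumer reacting to laser events.
bus = EventManager()
bus.subscribe("laserEvent", lambda p: print("stroke point", p))
bus.publish("laserEvent", (0.42, 0.13))
bus.dispatch_pending()
```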
Publishers are sources of information that do not require further information from the IMMIView system to update their status. Each publisher informs the event manager what kind of events it produces. Whenever there is new information, publishers insert it into the event manager's publish queue. Examples include the multi-user laser module, data proxies from hand held devices and the body tracking modules. Consumers require information to change their state. Therefore, they subscribe a callback function for a certain type of event. For example, the visualization module subscribes to navigation type events in order to change its camera parameters. Other consumer modules include the annotation manager module, the shape creation and manipulation module and the widget module that provides the menu interface. Finally, some modules act as both consumers and publishers. They subscribe to multiple events, such as laser input and voice commands, and compose those inputs into higher level events such as object selection or navigation actions. We call these modules converters.
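Before the next section details the concrete converter modules, the hypothetical converter below, written against the sketch above, shows the idea: it consumes low-level laser and voice events and publishes a higher-level selection event. Event names and payload fields are illustrative only.

```python
class SelectionConverter:
    """Sketch of a converter: consumes low-level events, publishes a
    higher-level one. Event names and payloads are illustrative."""

    def __init__(self, bus):
        self.bus = bus
        self.last_pointed_object = None
        bus.subscribe("laserEvent", self.on_laser)
        bus.subscribe("voiceCommand", self.on_voice)

    def on_laser(self, payload):
        # Remember which scene object the laser is currently over.
        self.last_pointed_object = payload.get("object_id")

    def on_voice(self, payload):
        # Fuse the spoken command with the pointing context.
        if payload == "select this" and self.last_pointed_object is not None:
            self.bus.publish("objectSelected",
                             {"object_id": self.last_pointed_object})
```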

Modules included in this class are detailed in the next section.

User Interaction

User interaction is supported by converter type modules that listen to simple events and, in return, publish higher level events onto the event manager. We have defined the following converter modules:

Body Gesture Recognizer: analyzes the tracking data obtained from a real time marker based motion capture of the user and publishes recognized gestures. To obtain user data, we track the user's head and arms and send the information to the recognizer, which is constantly trying to recognize body gestures performed by each user. With this information we can, for example, navigate using arm gestures.

Cali Gesture Recognizer: Cali is a 2D symbol recognizer. It receives data from 2D input devices such as pens, mice or lasers and triggers actions whenever a user draws a gesture. For example, the main menu can be opened by drawing a triangle gesture.

Interaction Concept Module: this module is able to recognize a set of interaction behaviors and generate higher level interaction primitives. It supports object selection or pointing, drag and drop manipulation and lasso based selection. Since this module is also aware of the context of the application and of the entities present in the scene, it can provide contextual information about user actions that is useful for the Multimodal Box.

Multimodal Box: this module provides an abstract definition of multimodal interaction using a rule based inference mechanism. Using all the information travelling on the event bus and a predefined grammar, the multimodal box is able to compose interactions using several modalities, such as body gestures plus voice commands, or mixing laser interaction with mobile devices to create new annotations.

3.3 Graphical User Interface

The IMMIView application offers a GUI for annotations, navigation, 3D editing and collaborative review. Our GUI is based on a set of menus that overlay the 3D scene and were adapted to the different visualization environments handled by the IMPROVE project. The GUI proposes an alternative layout when compared to traditional window based desktop applications: it relies on stroke based interaction instead of the common point and click metaphor. Stroke based interaction was selected considering architects' sketching skills and its ease of use when interacting with TabletPCs or other handheld pen-based computers, as well as with interactive whiteboards such as laser interaction on large-scale displays. The IMMIView application interprets a stroke in the following ways: the user can sketch symbols (2D gestures recognized by the CALI module) to launch IMMIView menus, for example, drawing a triangle opens the main menu where the gesture was recognized; to activate and select options of the GUI, the user draws a stroke crossing over the options, which selects those options on the menu; to select objects or bring up menus specific to the type of a 3D object (i.e., shape or annotation), the user sketches a lasso surrounding the object, which selects it and pops up a context menu. The IMMIView functionality is exposed through circular layout menus, similar to a torus shape. The menu options can use a textual or iconic representation and are activated by crossing the option. This solution replaces the point and click metaphor, which is less adapted to pen-like input devices.
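The crossing-based activation can be illustrated with a small sketch. The function below is our own simplification, not the actual IMMIView hit-testing code: it treats an option as a disc and reports activation when a stroke enters and then leaves that disc.

```python
import math

def crosses_option(stroke, center, radius):
    """Return True if the stroke (list of (x, y) points) crosses a circular
    menu option, i.e. enters and then leaves the disc around `center`.
    Simplified sketch of crossing-based activation."""
    inside_seen = False
    for x, y in stroke:
        inside = math.hypot(x - center[0], y - center[1]) <= radius
        if inside:
            inside_seen = True
        elif inside_seen:
            return True  # we were inside and have now left: a crossing
    return False

# Hypothetical option placed on a circular menu ring.
stroke = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]
print(crosses_option(stroke, center=(0.5, 0.5), radius=0.1))  # True
```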
The functionality accessible through the GUI is identifiable by different semi-transparent menu background colors and by additional textual tool-tips that appear when sketching over an option. Menus are labelled using captions; however, the background color makes it easy to identify the scope of the functionality provided by the menu: annotations, navigation, object creation or transformation, and system configuration all have different background colors.

To support collaborative design review between several instances of the IMMIView application, the original AICI framework was extended with an XML communication backbone based on the XMLBlaster (2007) middleware. The information related to each functional component, i.e., annotations, shapes, widgets and navigation, can be shared with other instances of the IMMIView system. Thanks to a centralized data coordinator, several applications can view the same data and manipulate it using different visualization systems. For example, several users can interact with a large scale display while the scene content is also edited remotely by another user using a head mounted display.

Fig. 4 Main menu (left) and Notes Menu with the several areas of interaction (right)

Fig. 5 Creating and Editing Annotations using the IMMIView GUI

Fig. 6 Three mode navigation menu: 1st Person, Mini-Map, Explore.

The textual information provided by the tooltips is valuable when interacting with the IMMIView application, since it is the basis of the voice command based interaction, beyond its traditional use to disambiguate iconic representations of the GUI. Figure 4 depicts two different menu examples available in IMMIView. On the left, the main menu is depicted using a green background and shows a set of textual options (up to eight options per menu). On the right, the annotation menu is depicted using a yellow background and presents iconic options plus a special interactive area to draw annotations. Menus can take advantage of their layout to propose interactive areas related to the task at hand. Starting from the main menu, all menus and their options are accessible within two levels. Some menus cluster related functionalities by providing a left lateral menu, visible on the annotation menu in Figure 4. Finally, to support multiple user interaction in large scale display collaborative scenarios, several menus can be opened, moved and controlled using the peripheral options located on the top right of each circular menu.

Annotation Menu

The functionality related to annotations is available in the yellow menu or by selecting annotations already present in the scene. Figure 5 presents the several steps involved in annotation creation. The content of the note can be drawn in the central interactive area of the Annotation Menu (top left). To place a note, the user sketches a path from the placing button to the desired 3D location (top right). The annotation snaps automatically to objects. Notes are represented in the scene as floating post-its with anchors (bottom left). They can be edited, deleted or hidden by selecting them with a lasso; this brings up an Annotation Menu dedicated to the selected note (bottom right).

Navigation Menu

The Navigation Menu proposes three different ways to explore the 3D scene (see Figure 6). The first mode is a first person like navigation, presented as a set of icons. This view allows the camera to move forward, backward, turn left, turn right and control pitch (left picture). This mode is more suitable for local navigation and is similar to a flying mode. The second mode is based on a mini map and compass representation (middle). The user can sketch directly over the top view of the map located in the center of the menu, dragging its position. It is also possible to control the orientation by rotating the surrounding compass area. This mode enables fast global navigation. Finally, a third mode is offered by the navigation menu to explore a particular object of the scene (right). Using a track-ball like representation, the user can zoom and rotate around an object of the 3D scene. Similar to annotations, to select the target object the user draws a stroke between the top left menu option and the desired object.
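As a rough illustration of the mini-map mode, the sketch below maps a normalized position on the map and a compass angle to a camera pose on the ground plane. The coordinate conventions and the world_bounds parameter are assumptions made for this example; the actual IMMIView mapping is not described in more detail here.

```python
def minimap_to_world(u, v, compass_angle_deg, world_bounds):
    """Sketch of the mini-map mode: map a normalized map position (u, v in
    [0, 1]) and a compass angle to a camera pose on the ground plane.
    `world_bounds` = (min_x, min_z, max_x, max_z) is an assumed convention."""
    min_x, min_z, max_x, max_z = world_bounds
    cam_x = min_x + u * (max_x - min_x)
    cam_z = min_z + v * (max_z - min_z)
    cam_yaw = compass_angle_deg  # rotating the compass rotates the view
    return cam_x, cam_z, cam_yaw

print(minimap_to_world(0.25, 0.75, 90.0, (-100.0, -100.0, 100.0, 100.0)))
# (-50.0, 50.0, 90.0)
```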

Shape and Transformation Menu

To create simple shapes in the 3D scene the user needs to open the Shape Menu. Spheres, cubes, cones, cylinders and planes can be created and placed anywhere by sketching a path from the menu icon to the desired location, similar to annotations. Moreover, these shapes can be deleted or transformed geometrically. To do this, users select a shape using the lasso, which brings up the transformation shape menu. The transformation menu provides translation, scale and rotation options. Figure 7 depicts the creation of a sphere using the shape menu and the selection of a shape with its corresponding transformation menu.

Fig. 7 Creating Simple Shapes with Menu and editing their attributes

4 Multimodal solution

IMMIView was designed to support different visualization scenarios such as large scale displays, head mounted displays or tablet PCs. However, the user interface differences between scenarios were required to be minimal, so that users could switch from one scenario to another without the need to learn a new interface. Furthermore, the user interface should be flexible in order to enable several users to collaborate using different scenarios. For example, a user on a large scale display could collaborate with another on a tablet PC while sharing the same interaction concepts. The need for a common set of interaction metaphors led us to create a multimodal system, where a principal modality, which allows for the basic interaction, is available on all scenarios and secondary modalities are available on each scenario, taking advantage of each scenario's qualities. The availability of different modalities also means that interaction dialogs can be expanded with multimodal commands. The modality fusion was implemented so that the way actions are accessed can easily be re-configured using a script file, thus supporting different scenarios without recompiling the system. The following subsections explain how each modality was implemented, their intent and how the modality fusion was achieved.

Fig. 8 User interacting with IMMIView. Left: user opening a circular menu. Right: user navigating through the virtual world.

4.1 Inputs and Modalities

IMMIView offers several input devices for interaction. For example, the laser pointer is used for generic interaction, to cross menus or annotate 3D models. We developed a set of secondary input devices to enhance the interaction and support multi-user scenarios. The laser can be used in combination with speech based interaction to activate menu options, or with mobile devices to create and insert multimedia note content. Using body tracking with speech, the user can navigate and edit objects naturally. Figure 8 shows a user interacting with IMMIView using laser and body tracking.

Laser Input

As presented before, our GUI is based on stroke input. On a tablet PC, strokes are executed using a pen. However, the pen is cumbersome and does not allow for direct input on large scale displays. Therefore, we adopted laser pens to draw strokes directly on the large scale display, similar to the pen functionality on the tablet PC. The laser position is captured using image processing techniques. We use an IR sensitive camera to reduce image noise and improve laser detection. The image is captured and filtered to identify high intensity pixels, which we consider to be the laser position. Afterwards, each laser position is sent to the application, which translates this information into cursor information and constructs strokes. Further information regarding how lasers support multiple users is detailed in section 4.2. Figure 9 depicts the three main steps of the laser recognition algorithm.

Fig. 9 Laser detection algorithm steps: Acquisition, Filtering and Stroke Matching
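The acquisition and filtering steps can be sketched with a few lines of OpenCV (Python, assuming the OpenCV 4.x API). The threshold and blob-size values below are illustrative, not the calibration actually used by IMMIView.

```python
import cv2
import numpy as np

def detect_laser_spots(ir_frame, intensity_threshold=240, min_area=2):
    """Sketch of the acquisition/filtering steps: threshold an IR camera
    frame and return the centroids of bright blobs, taken to be laser
    positions in camera coordinates."""
    gray = cv2.cvtColor(ir_frame, cv2.COLOR_BGR2GRAY) if ir_frame.ndim == 3 else ir_frame
    _, mask = cv2.threshold(gray, intensity_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    spots = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue  # ignore single-pixel noise
        m = cv2.moments(contour)
        if m["m00"] > 0:
            spots.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return spots

# Each detected spot would then be mapped to display coordinates (via the
# camera homography) and appended to a stroke.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:103, 200:203] = 255
print(detect_laser_spots(frame))  # approximately [(201.0, 101.0)]
```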
Speech based Interaction

The IMMIView speech system enables users to invoke vocal commands to control the application and to perform navigation, modeling, annotation creation and configuration tasks. Speech is mostly used on three occasions: to navigate the menu based graphical interface, to enhance laser interaction with direct commands, for example "Hide This", and to support body tracking navigation. To improve recognition, IMMIView's speech module informs the application clients of interface actions. This enables the speech client to reduce grammar complexity by only interpreting commands available on screen. For example, if a menu is opened by the client, the corresponding speech actions are added to the grammar. Likewise, whenever a menu is closed, the speech sub-module informs the client and the corresponding commands are removed from the active grammar.
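The grammar-restriction idea can be sketched as follows; the command strings are illustrative and the real grammar format depends on the speech recognizer used.

```python
class ActiveGrammar:
    """Sketch of grammar restriction: the recognizer only accepts commands
    for interface elements currently on screen."""

    def __init__(self, global_commands):
        self.active = set(global_commands)        # always-available commands

    def menu_opened(self, menu_commands):
        self.active.update(menu_commands)          # e.g. options of the menu

    def menu_closed(self, menu_commands):
        self.active.difference_update(menu_commands)

    def interpret(self, recognized_text):
        # Ignore recognizer output that is not currently a valid command.
        return recognized_text if recognized_text in self.active else None

grammar = ActiveGrammar({"open main menu", "hide this", "select this"})
grammar.menu_opened({"annotations", "navigation", "create shape"})
print(grammar.interpret("navigation"))   # 'navigation'
grammar.menu_closed({"annotations", "navigation", "create shape"})
print(grammar.interpret("navigation"))   # None
```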

Mobile Device Input

The post-it metaphor was mentioned frequently whenever annotations were discussed during user interviews. This metaphor is, to some extent, available in the GUI, by writing an annotation and dragging it to the desired location. Even so, on the large scale display, the size of the display made it difficult for users to write an annotation and then accurately place it. Therefore, a mobile device was used to simulate physical post-it notes. The user can draw something on a mobile device, or choose a picture, and then, using the laser, point to the large scale display and an annotation is created at that position (see Figure 10).

Fig. 10 Mobile device metaphor example

Body Tracking

Body tracking enables us to use gestures and body poses to interact more naturally. With head mounted displays, body tracking gives the view position of the user. On large scale displays, body tracking allows users to navigate and edit objects through gestures. To navigate, users issue a spoken command to enter the mode and then point to where they want to fly to. Users control speed, pitch and roll by arm position and inclination, which affords a simple metaphor to fly over the scene. To change shapes, the user selects an object and then activates the change mode via a spoken command. In this mode, arm gestures are reflected on the object's size and shape. For example, to shrink an object, we can select it, utter the "Edit object" command and then move both arms together to scale down the selected shape. Figure 11 depicts the gestures allowed by tracking. Our tracking setup uses 4 cameras with infrared beamers to detect infrared reflective spherical markers. Our tracking recognizer module receives the labeled position of each reflective marker from the tracking system. Knowing each marker, we are able to recognize postures and gesture commands by computing the geometrical relationship between hands and shoulders.

Fig. 11 Example of actions supported by body tracking: Top - Fly mode with speech commands. Bottom left - Moving an object. Bottom right - Scaling an object.
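One possible way to turn arm markers into flying parameters is sketched below. The exact IMMIView gesture mapping is not published, so the formulas here are only an assumed illustration of deriving speed, pitch and roll from arm position and inclination.

```python
import numpy as np

def arms_to_fly_params(l_shoulder, l_hand, r_shoulder, r_hand):
    """Sketch of one possible mapping from tracked markers (3D points,
    metres, y up, -z forward) to flying parameters."""
    l_sh, l_h = np.asarray(l_shoulder, float), np.asarray(l_hand, float)
    r_sh, r_h = np.asarray(r_shoulder, float), np.asarray(r_hand, float)
    left, right = l_h - l_sh, r_h - r_sh           # arm vectors

    # Average arm elevation -> pitch; hand height difference -> roll;
    # how far the arms point forward -> speed.
    mean_up = (left[1] + right[1]) / 2.0
    mean_fwd = (-left[2] - right[2]) / 2.0          # -z is forward
    pitch = np.degrees(np.arctan2(mean_up, max(mean_fwd, 1e-6)))
    roll = np.degrees(np.arctan2(r_h[1] - l_h[1],
                                 np.linalg.norm((r_h - l_h)[[0, 2]])))
    speed = max(0.0, mean_fwd)
    return speed, pitch, roll

print(arms_to_fly_params((-0.2, 1.4, 0.0), (-0.4, 1.5, -0.5),
                         (0.2, 1.4, 0.0), (0.4, 1.3, -0.5)))
```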
Voice commands can be disambiguated using separate microphones for each user and the tracking system can also be used to follow multiple users. Laser Interaction, however, needed to be adapted to multi-user. On the input level we need to disambiguate between multiple inputs and identify laser inputs as continuous strokes. On the application level, we need to provide users feedback so that users could identify their own input.

Disambiguating Laser Input

Our main problem was finding out which events belonged to which stroke. The captured image identifies the positions of all active lasers, but we need to match each identified position with the positions obtained in the previously captured image. Using a Kalman filter, we are able to detect how many users are interacting and maintain their interaction state. The Kalman filter is a well known method for stochastic estimation which combines deterministic models and statistical approaches in order to estimate the variable values of a linear system (Welch and Bishop (2006)). In our system, we use this technique to estimate and predict possible laser positions. The cameras work as clients that identify laser positions. The registration and calibration of each camera provide a homography that translates camera coordinates to application coordinates. Using the camera homographies, points are translated to application coordinates and then sent to a single server, which is responsible for collecting the information from all cameras and matching the input information with the active strokes. Using the Kalman filter's predictive behavior, it is possible to match points of the same laser, even when the laser crosses several cameras. The matching identifies when strokes are initiated, remain active and are terminated. The advantage of this approach lies in its support for multi-user interaction. Using several filters, one for each active laser, we can identify strokes and compute their status. Figure 12 depicts this workflow. If there is a prediction without a matching input, we conclude that the stroke was finished. If a point cannot be matched to any estimation, we assume that a new stroke was started and use the event coordinates as the stroke's first position. If there is a match between a point and an estimation, the corresponding stroke is updated with a new point and remains active. Thanks to this disambiguation algorithm for our laser input device, several users can interact with our GUI using several lasers at the same time on the large scale display.

Fig. 12 Workflow of our stroke estimation approach
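The stroke matching described above can be sketched as follows: one constant-velocity Kalman filter per active stroke, nearest-prediction data association with a gating distance, and the start/continue/finish decisions of Figure 12. Noise parameters and the gate value are illustrative, not IMMIView's tuned values.

```python
import numpy as np

class StrokeTracker:
    """Simplified sketch of multi-laser stroke matching with one Kalman
    filter (constant-velocity model) per active stroke."""

    def __init__(self, gate=60.0):
        self.gate = gate                   # max distance prediction <-> point
        self.strokes = {}                  # stroke id -> (state x, covariance P)
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 1.0           # process noise (illustrative)
        self.R = np.eye(2) * 4.0           # measurement noise (illustrative)
        self._next_id = 0

    def update(self, points):
        """points: laser positions detected in the current frame."""
        # 1. Predict every active stroke.
        predictions = {}
        for sid, (x, P) in self.strokes.items():
            x = self.F @ x
            P = self.F @ P @ self.F.T + self.Q
            self.strokes[sid] = (x, P)
            predictions[sid] = x[:2]

        # 2. Greedily match each point to the nearest unclaimed prediction.
        unmatched_points, matched = [], {}
        for p in map(np.asarray, points):
            best = min((sid for sid in predictions if sid not in matched),
                       key=lambda sid: np.linalg.norm(predictions[sid] - p),
                       default=None)
            if best is not None and np.linalg.norm(predictions[best] - p) < self.gate:
                matched[best] = p
            else:
                unmatched_points.append(p)

        # 3. Correct matched strokes, finish unmatched ones, start new ones.
        for sid, p in matched.items():
            x, P = self.strokes[sid]
            S = self.H @ P @ self.H.T + self.R
            K = P @ self.H.T @ np.linalg.inv(S)
            x = x + K @ (p - self.H @ x)
            P = (np.eye(4) - K @ self.H) @ P
            self.strokes[sid] = (x, P)
        finished = [sid for sid in predictions if sid not in matched]
        for sid in finished:
            del self.strokes[sid]
        for p in unmatched_points:
            self.strokes[self._next_id] = (np.array([p[0], p[1], 0.0, 0.0]),
                                           np.eye(4) * 10.0)
            self._next_id += 1
        return matched, finished

tracker = StrokeTracker()
tracker.update([(100.0, 200.0)])           # starts stroke 0
print(tracker.update([(103.0, 204.0)]))    # continues stroke 0
```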
4.3 Modality Fusion

The IMMIView prototype exposes a set of functionality accessible through the fusion of several modalities. These composite interactions are defined using a rule-based definition, which is managed by our multi-modal module, referred to as the MultiModal Box in the System Architecture section above. This module is structured into two parts: the first part is the set of rules that define all possible interactions, and the second part is the state of the module, represented by all valid tokens. By tokens, we refer to all the information which can be input from a given modality or recognizer, or which results from activating a rule of our multi-modal module.

Representing Multimodal Interaction

Our multi-modal module defines interaction using a set of rules. Each rule is divided into a set of preconditions and actions. In order to apply a rule, all its preconditions need to be fulfilled, which results in a set of actions changing the status of the multi-modal module. Our system supports the following types of preconditions, which represent abstract concepts supported by the module:

Tokens: an abstract knowledge representation, which can represent a user body action or gesture such as pointing, a speech command or even an application mode. Tokens can be enriched with attributes, allowing them to represent any kind of event from the user or from other interaction modules existing in the application, such as recognizers or specialists.

Objects: identified system entities over which the user can perform actions. These objects have a class associated to them identifying a subset of objects. For example, in our system we support the notion of shapes, annotations, anchors and widgets.

Regarding the possible set of actions which can be applied by a rule, just two types are supported by our definition. The first are operators that manage the data matching the preconditions, allowing the module to remove tokens or objects or to generate new ones, changing the status of the module. The second are commands that activate functionality of the IMMIView system provided by other modules, as depicted in the IMMIView architecture. The following list presents three examples of rules used by our multi-modal system, where T<>, O<> and C<> represent tokens, objects and commands respectively. The first rule represents the launch of the main menu using the voice command "Open Main Menu". As a result of the activation of this rule, the command is given to our widget manager and the token is removed. The token is an abstract piece of information with no dependency on the modality: in our implementation the voice module generates the token T<openMainMenu> when the voice command is recognized; on the other hand, this token is also generated when a triangular shape is recognized by our 2D shape recognizer.

Rule 1: T<openMainMenu> ⇒ C<widget2d:TOK0>, T<TOK0>
Rule 2: T<moveUp>, O<widget> ⇒ C<widget2d:OBJ0:MoveUP>, T<TOK0>
Rule 3: T<selectThis>, T<pointingOverObject> ⇒ C<Object3D:select:ATT10>, C<Context:menu:ATT10>, T<TOK0>

The second rule illustrates the activation of an option named MoveUp on a widget object. Finally, the last rule shows an example of a multi-modal interaction to select an object using a pointing gesture combined with a voice command. For each rule, the actions can use references to the data matching a precondition: TOKn refers to the nth token in the precondition list of a rule, OBJn refers to the identifier of the nth object, and ATTnm refers to the mth attribute of the nth token.

Inferring Multimodal Interaction

The Multimodal module receives information from the other components in the form of new tokens or new objects that update its status. A temporal duration is assigned to each piece of knowledge in order to know until when the information is valid for the multimodal status. Currently, several IMMIView sub-components feed this module as independent specialists: the selection component identifies when objects are selected with the lasso metaphor; the observer component is responsible for informing when a user is pointing at an object, by analyzing laser inputs continuously and notifying when users are entering or leaving shapes or annotations; gestures such as the triangular shape are recognized by the CALI (Fonseca et al (2002)) shape recognizer; the application widget manager notifies which interface objects or menus are available on the user workspace; and the speech recognizer feeds the knowledge base with recognized speech commands as simple textual tokens.

When an input datum is received, it is automatically processed by our inference system, which tries to fulfill the preconditions of existing rules using the status of the multimodal system. If, at a given time, all the preconditions of a rule are available, the rule is applied, executing its actions. If an input datum is not used by any rule before the end of its validity period, it is discarded by the system. This solution allows us to define interaction using an abstract definition independent of the type of modality, and avoids having to deal with the ordering of preconditions within a rule. By applying this set of rules, we are able to define the behavior of the system, combining several modalities in a flexible and extensible way.
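A toy version of such a rule engine is sketched below: tokens carry attributes and a temporal validity, and a rule fires when all of its required tokens are present, consuming them. The token names, the object identifier and the time-to-live are illustrative, not IMMIView's actual grammar.

```python
import time

class Token:
    def __init__(self, name, attributes=None, ttl=2.0):
        self.name, self.attributes = name, attributes or []
        self.expires = time.time() + ttl           # temporal validity

class MultiModalBox:
    """Sketch of the rule-based fusion described above: rules pair a list of
    required token names with an action callback."""

    def __init__(self):
        self.tokens, self.rules = [], []

    def add_rule(self, required, action):
        self.rules.append((required, action))

    def add_token(self, token):
        now = time.time()
        self.tokens = [t for t in self.tokens if t.expires > now]  # drop stale data
        self.tokens.append(token)
        self._infer()

    def _infer(self):
        for required, action in self.rules:
            matched = []
            for name in required:
                found = next((t for t in self.tokens
                              if t.name == name and t not in matched), None)
                if found is None:
                    break
                matched.append(found)
            else:
                action(matched)                     # all preconditions met
                for t in matched:                   # consume the tokens
                    self.tokens.remove(t)

# Rule 3 from the list above, as a sketch: pointing + "select this" -> select.
box = MultiModalBox()
box.add_rule(["selectThis", "pointingOverObject"],
             lambda toks: print("select object", toks[1].attributes[0]))
box.add_token(Token("pointingOverObject", attributes=["shape42"]))  # hypothetical id
box.add_token(Token("selectThis"))       # prints: select object shape42
```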
5 User Tests

IMMIView was tested by twenty-two users during three rounds of user tests within the European IMPROVE project. The three rounds were performed to evaluate both the interaction techniques and the mix of modalities proposed by our system. During each round, users were exposed to multimodal interaction using speech recognition, laser input and body tracking. Each user test session included three single-user tasks and one multi-user (collaborative) task. Single user tasks were designed with different degrees of difficulty: the first one comprised three easy steps related to navigation and creating and editing annotations. The second one included nine medium difficulty subtasks covering navigation and creating, selecting and manipulating notes and 3D objects. Finally, the third task added more specific steps such as geometric transformations of 3D objects, including scaling, rotation and translation.

For the collaborative task, we asked two users to execute the first task at the same time. The tests were conducted using a 4 x 2.25 meter tiled display with 4096x2304 pixel resolution. The input modalities included voice recognition with wireless microphones, stroke interaction using laser pointers and body gesture tracking using a marker-based optical infrared system. The data sources of the user tests were a usability questionnaire given to each user, based on the standardized ISONORM 9241 Part 10, as well as data from user comments, observation notes and video analysis.

5.1 Evaluation based on ISONORM 9241 (part 10)

The standardized ISONORM 9241 Part 10 (Ergonomic requirements for office work with visual display terminals) provides requirements and recommendations related to the hardware, software and environment attributes that contribute to usability, and the ergonomic principles underlying them. Through the use of questionnaires, the goal was to obtain feedback on the users' experience related to the following seven principles of this standard: (1) Suitability for the Task, (2) Self Descriptiveness, (3) Controllability, (4) Conformity with User Expectations, (5) Error Tolerance, (6) Suitability for Individualization and (7) Suitability for Learning. Users were asked to rate each question from 1 (the least favorable case) to 6 (the most favorable case). The results are presented in Table 1. Globally, the average results of all seven principles are above the mean value (3.0). So, in general, the system seemed suited to the needs of users in the three main tasks (navigation, annotation and 3D editing). The Suitability for the Task results (average 4.49) show that users found the system, in general, easy, fast and natural with respect to the use of widgets, the different modalities (strokes, gestures and speech) and the different input devices (laser pointer and tracker).

Table 1 Results of the ISONORM questionnaire on a scale of 1 to 6.

                                      Average   Std. Deviation
Suitability for the Task              4.49      -
Self Descriptiveness                  4.56      -
Controllability                       4.26      -
Conformity with User Expectations     4.27      -
Error Tolerance                       3.22      -
Suitability for Individualization     4.50      -
Suitability for Learning              4.62      -

The Self Descriptiveness results (average 4.56) allow us to conclude a moderate level of user satisfaction; users appear to be able to differentiate and understand system functionalities through the employed codes and metaphors. The Controllability results (average 4.26) revealed that users could control the input devices, modalities, widgets and other objects relatively well. However, some improvements could be made to the accuracy of the speech recognizer, tracker and laser pointer, and to some interaction metaphors. The Conformity with User Expectations results (average 4.27) are positive, but some issues could be improved. Users found the correct use of speech commands very important to invoke commands, but in their opinion this method was too error prone. Users also found that they were unable to express themselves in the notes given the space provided. The Error Tolerance results (average 3.22), although positive, presented the lowest scores when compared with the other ISONORM principles. Some actions performed by users are somewhat difficult to revert. Regarding the navigation task, it is not easy for the user to correct some actions in flying mode using gestures. Using gestures, geometric transformations and the creation and placement of notes at the correct position are difficult to achieve. The Suitability for Individualization results (average 4.50) revealed that the system is relatively easy to adapt to users' needs, due to the availability of different modes (strokes, speech and gestures) to perform the same tasks; for example, users found it easy to customize their workspace with widgets. The Suitability for Learning results (average 4.62) are positive, but some problems were found: users had some difficulty remembering new speech commands or which arm gestures had to be executed for a given geometric transformation or navigation action.

5.2 Users Tasks Performance

Regarding the functionality proposed by our system, quantitative data was collected from the tests related to the navigation and annotation tasks to identify the level of performance of each multi-modal metaphor used. To perform the navigation task, the user could use the following modes: Menus and Strokes/Laser, Menus and Speech, and Gestures and Speech. Table 2 provides the rate of errors and the time per usage for the three different combinations of multi-modal modes.

Table 2 Navigation performance data by modes.

                    Error/Usage   Time/Usage
Menus/Laser         -             :37
Menus/Speech        -             :59
Gestures/Speech     -             :50

With these results, we can conclude that users made more errors on the navigation task when using the Menus and Speech mode and spent more time using the Gestures and Speech mode. The first conclusion is due to failures of the speech recognition system in recognizing certain commands: some of the users had Scottish regional accents that are not completely compatible with the configured American-English grammar. The second conclusion is due to the fact that users spent more time and cognitive effort adjusting their position and orientation when using gestures in flying mode. The results revealed that users made fewer errors using the Menus and Strokes/Laser mode, which is the interaction approach most similar to traditional desktop applications. On the other hand, the use of speech commands over the opened menus reduced the time needed to perform the tasks because, when the speech recognizer works well, the time to activate a command is very short. To perform the annotation manipulation tasks over the note objects, users could invoke multi-modal commands like "select this", "delete this" or "hide this". This could be done using the following modes: Menus and Strokes/Laser or Pointing and Speech.
Table 3 provides the rate of errors and the time spent by users for each multimodal combination.

Table 3 Annotation performance data by modes.

                    Error/Usage   Time/Usage
Menus/Laser         -             :30
Pointing/Speech     -             :51

Although the Pointing and Speech mode was chosen by the great majority of users, the error rate and time spent are higher than with the more traditional Menus and Strokes interface. The failures of the speech recognition system and the lack of accuracy when picking note objects that are far away (or small) are the reasons for the lower performance of the Pointing and Speech mode. It is important to highlight that users did not make any errors using the Menus and Strokes mode.

5.3 Multi-modal Preferences for User Tasks

IMMIView allows users to interact with the system using different modalities, which allows each user to perform the same task using the combination of modalities best fitted to their preferences or skills.

5.3 Multi-modal Preferences for User Tasks

IMMIView allows users to interact with the system through different modalities, so each user can perform the same task with the combination of modalities best fitted to their preferences or skills. Moreover, if a user had difficulties with a particular action, he could promptly switch to another modality with which he felt more comfortable. The results presented below were gathered from three different tasks: navigation, note manipulation and geometric transformations of 3D objects. The combinations of modalities used were the following: laser stroke interaction plus menus, speech interaction plus menus, and both arm gestures and laser pointing combined with speech commands.

User multi-modal preferences for navigation

Regarding navigation, we collected data about the first, second and third preferred modality combinations used to perform seven tasks of varying degrees of difficulty. These included (a) Laser and Menus (using laser strokes to open menus and to activate menu options), (b) Speech and Menus (invoking commands using speech over context menus) and (c) Arm Gestures and Speech (performing the navigation fly task using arm gestures, complemented with speech commands to change navigation parameters such as velocity). The results are presented in Table 4.

                   1st Choice   2nd Choice   3rd Choice
Menus/Laser        32.73%       54.55%
Speech/Menus       27.27%       27.27%       0.00%
Gestures/Speech    40.00%       18.18%       0.00%
% from previous                              4.55%
Table 4 Percentage of user navigation choices by modality combination.

With these results, we conclude that users employed different combinations of modalities to perform the same kind of tasks. The first choice is balanced among the three combinations, which illustrates the system's flexibility in accommodating user preferences. When users chose a second combination of modalities (40% of the number of first choices), a slight majority (54.55%) opted for the laser pointer plus menus combination. This solution is the most similar to the desktop environment, so users might feel more comfortable interacting with it, falling back to it when experiencing difficulties. Only one user elected to combine three modalities to perform a particular task.

User multi-modal preferences for annotation

Regarding annotations, we collected data on the first and second choices of modality combination used to perform five tasks of different degrees of difficulty. The combinations of modalities were: Laser and Menus (using laser strokes to open note menus and to activate their options) and Pointing and Speech (pointing at a note object and invoking speech commands to change its state; "select this", "delete this" and "hide this" are examples of such commands). The results are presented in Table 5.

                   1st Choice   2nd Choice
Laser/Menus        0.00%
Pointing/Speech                 0.00%
% from previous
Table 5 Percentage of user note manipulation choices by modality combination.

The combination Pointing and Speech was picked by all users as the first choice to perform annotation tasks. This interaction modality seemed very natural to users and mirrors a similar usage in the real world. The second choice (24.49% of first choices) was Laser plus Menus; this fallback was related to the difficulties of the speech recognizer in handling the strong accents of some users, which caused them to resort to more familiar, or less troublesome, modalities.

User multi-modal preferences for geometry manipulation

For geometric transformations of 3D objects (translation, rotation and scale over the three axes), we collected data on the first and second choices of modalities used in combination to perform the proposed three tasks.
These included (a) Laser plus Menus (using laser strokes to open the geometric transformation menus and to activate their options/operators) and (b) Speech plus Arm Gestures, whereby a geometric transformation command is composed of two parts: first, the kind of transformation (rotation or scale) is invoked using a voice command; second, the user quantifies and controls the transformation using arm gestures. The results are presented in Table 6.

                       1st Choice   2nd Choice
Laser/Menus            11.54%
Speech/Arm Gestures    88.46%       0.00%
% from previous
Table 6 Percentage of user geometric transformation choices by modality combination.

The combination Speech/Arm Gestures was picked first by the majority of users (88.46%) to perform geometric transformations on 3D objects. Users described this kind of interaction as natural and direct; one reason might be that they could tune the transformation more finely using gestures than using stroke interaction and menus.
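As a minimal illustration of this two-part command, the sketch below shows a voice command arming a transformation mode that subsequent arm-gesture samples then quantify. The gesture features (hand distance and hand angle) and the scene object interface are assumptions made for illustration, not IMMIView's actual implementation.

from dataclasses import dataclass

@dataclass
class GestureSample:
    hand_distance: float    # distance between the user's hands, in metres (assumed feature)
    hand_angle: float       # angle of the line between the hands, in radians (assumed feature)

@dataclass
class Node:
    scale: float = 1.0      # stand-in for a scene object; the real scene graph differs
    rotation_z: float = 0.0

class TransformController:
    def __init__(self, target: Node):
        self.target = target
        self.mode = None        # set by the speech part of the command ("scale" or "rotate")
        self.reference = None   # gesture sample captured when the mode is armed

    def on_speech(self, command: str) -> None:
        if command in ("scale", "rotate"):
            self.mode, self.reference = command, None

    def on_gesture(self, sample: GestureSample) -> None:
        if self.mode is None:
            return                              # no transformation armed by speech yet
        if self.reference is None:
            self.reference = sample             # first sample after the voice command
            return
        if self.mode == "scale":                # moving the hands apart scales the object up
            self.target.scale *= sample.hand_distance / self.reference.hand_distance
        elif self.mode == "rotate":             # tilting the line between the hands rotates it
            self.target.rotation_z += sample.hand_angle - self.reference.hand_angle
        self.reference = sample

# Example: saying "scale" and then moving the hands from 0.4 m to 0.6 m apart scales by 1.5.
ctrl = TransformController(Node())
ctrl.on_speech("scale")
ctrl.on_gesture(GestureSample(0.4, 0.0))
ctrl.on_gesture(GestureSample(0.6, 0.0))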

To conclude, IMMIView is a system with strong multi-modal and multi-user components that integrates scalable and redundant technologies. It is clear that having different modalities and interaction techniques benefits the users. The redundancy of modalities for accessing the same functionality is important to better fit and adapt the interaction to users' wishes. The use of speech commands while pointing at a particular object was considered a powerful interactive resource. In the majority of cases, laser plus menus was the second choice. Navigation using arm gestures and speech commands was also appreciated by many users, who noted the flexibility and versatility of the available multi-modal combinations.

5.4 Deployments

Along this research project, other setups were also deployed, but no user tests were executed on them; they were deployed as technical achievements prior to the final user tests. Early on, the communication backbone was tested with two IMPROVE system instances, a large scale display in Portugal and another in Germany. This first test allowed us to verify that the system could support distributed collaborative actions and that annotations passed from one instance to the other without noticeable delay. Tests with mixed setups, involving different artifacts, were also deployed: a Tablet PC combined with head mounted displays allowed users to navigate using the head mounted display and interact using the Tablet PC. Moreover, co-located collaboration was tested with one user running the system on a Tablet PC and another user interacting with the large scale display (Figure 13). Finally, to test the mobility of our system, one of the user requirements, the system was deployed on an architectural site in Glasgow using a simple large scale display (one projector and a rented projection screen). Here, the speech and laser modalities were also available for interaction, illustrating the versatility of our proposed design environment.

Fig. 13 Deployment setups. Top: single projector display with multiple users and mobile device support. Bottom: large scale display and Tablet PC collaboration.

6 Conclusion

This work presents IMMIView, an interactive system relying on multi-modality and multi-user interaction to support collaborative design review of 3D models. Our task analysis identified the user requirements for architectural design review and the functionalities that should be implemented to enable users to carry out their main tasks: navigation, annotation, 3D editing and collaborative review. In order to integrate these functionalities, we proposed a system architecture based on a set of modules coordinated around an event bus where functional components, interaction experts and visualization modules coexist. Many experiments and decisions were made in order to implement a flexible and modular system that offers 3D visualization, support for different kinds of input devices, interaction using different modalities (pointing, gestures, speech and strokes) and a GUI. Several issues were solved in the design phase of this system to permit a coherent and consistent data flow between its different components. IMMIView offers several alternative interfaces so that users can optimize their interaction according to their goals and tasks. A novel stroke interface was adapted to work with the limitations of laser pointers on large scale displays, presenting an innovative architectural environment similar to an augmented whiteboard. In addition, both arm gesture based interaction and speech commands were available, letting users control their actions and offering more natural or productive metaphors for their tasks.
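As a rough illustration of the module coordination just described, the sketch below shows a minimal publish/subscribe event bus; the topic names and handler signatures are illustrative assumptions, not the actual IMMIView event bus interface.

from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal publish/subscribe bus: modules register handlers for topics and
    publish events without knowing which other modules consume them."""
    def __init__(self) -> None:
        self._handlers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        for handler in self._handlers[topic]:
            handler(payload)

# Example wiring with assumed topic names: a fusion module listens to raw input
# events, while a visualization module listens to resolved commands.
bus = EventBus()
bus.subscribe("input.speech", lambda e: print("fusion module received:", e))
bus.subscribe("command.resolved", lambda e: print("viewer executes:", e))
bus.publish("input.speech", {"user": "u1", "command": "hide this"})
bus.publish("command.resolved", {"action": "hide", "target": "note"})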
To manage simultaneous multi-modal information, an abstract knowledge representation and an inference mechanism were designed to handle multi-modal action ambiguities, even when several users are interacting at the same time. This mechanism avoids constraining user expressivity when several users interact through different modalities simultaneously. To assess our approach and the tasks supported by our prototype, three different kinds of user tests were performed. Based on the ISONORM part 10, the
