EyeScope: A 3D Interaction Technique for Accurate Object Selection in Immersive Environments


Cleber S. Ughini 1, Fausto R. Blanco 1, Francisco M. Pinto 1, Carla M.D.S. Freitas 1, Luciana P. Nedel 1

1 Instituto de Informática, Universidade Federal do Rio Grande do Sul (UFRGS), Caixa Postal, Porto Alegre, RS, Brazil
{ughini,frblanco,fmpinto,carla,nedel}@inf.ufrgs.br

Abstract. This paper proposes a technique for object selection in immersive environments using a head-mounted display (HMD) and a data glove. The technique extends the well-known gaze-directed selection technique: we propose a zoom control to focus on a specific region of interest, providing more precision in object selection. During zooming, the control of the viewing direction obtained from the head-mounted display orientation becomes finer, avoiding sudden changes in the view caused by the combination of zoom and direction changes. These characteristics make object selection easier, more accurate and more comfortable. Tests performed with 24 subjects and 9 scenarios confirmed our hypotheses.

1. Introduction

Users of immersive virtual environments usually intend to manipulate objects that are part of the scene. Reaching objects may also require navigation through the virtual environment, since they might not be near the user. Manipulation requires a previous selection of the object of interest, and navigation is also frequently based on selecting a target point to indicate the path the user wants to follow.

Figure 1. Four hand positions to control zoom level. From left to right: open, almost open, almost closed, and closed.

Developing simple and comfortable selection and manipulation techniques for 3D environments has been a research issue for many years, and there are several possibilities depending on application-specific tasks, devices and interaction metaphors. At a high level, these techniques can be classified into two categories according to the interaction metaphor used: the exocentric and the egocentric metaphors [Poupyrev et al. 1998]. In the exocentric metaphor, the proportions between the user and the objects are not maintained, assuming that the user interacts with the environment from outside its reference system.

This is known as the god's-eye viewpoint. In the egocentric metaphor, the user is part of the virtual world, and the dimensional coherence between the user and the manipulated objects is maintained. This class is further subdivided into two metaphors: the virtual hand, where the user reaches and grabs the object of interest with a virtual hand, and the virtual pointer, where the user interacts with the target object by pointing at it.

Although there are several techniques described in the literature [Bowman et al. 2005], some of them were designed for desktop environments and others are best suited for immersive environments. In the case of wearable devices, such as head-mounted displays and data gloves, one can take advantage of the degrees of freedom provided by these devices and extend classical, simple techniques to improve user performance.

This paper focuses on object selection, more precisely on a new approach for object selection in 3D immersive environments using a head-mounted display and a data glove: the eyescope. The use of a head-mounted display provides an egocentric immersion experience, i.e., the user is proportional to the virtual world and immersed in it. The data glove is the natural device for implementing the virtual hand metaphor. The proposed technique extends the gaze-directed selection technique, but the user does not need to navigate through the environment. We propose a zoom control using the data glove to focus on a specific region of interest, enlarging it while, at the same time, the viewing direction control given by the head-mounted display orientation becomes finer. This more accurate control is needed because the gaze-directed technique is very sensitive to the low resolution of the display, to noise in the sensor's orientation and to the distribution of target objects in the scene, especially when there are distant, small or semi-occluded objects. The extended technique makes object selection easier, more accurate and more comfortable.

This paper is organized as follows. In Section 2 we discuss some existing techniques according to requirements for object selection. In Section 3 we explain the eyescope technique, detailing all issues involved in the selection operation and how the technique handles the user's reaction. In Section 4 we present the testbed application developed to evaluate the technique and compare it with gaze-directed selection. In Section 5 we describe the evaluation criteria and parameters, while in Section 6 we explain how the experiments were performed. Finally, in Section 7 we present and discuss the results obtained from the experiments and, in Section 8, we conclude this work with some observations.

2. Related Work

Most three-dimensional interaction techniques are based on a few basic metaphors. For example, navigation can be accomplished by gaze-directed steering, which means that the user moves in the direction he/she is looking at. The same looking-at metaphor can be used to select objects: the user can point at the object of interest using 2D, 3D gaze or 3D hand techniques [Bowman et al. 2005]. A selection technique also needs a confirmation of the selection, by means of some implicit or explicit command, and feedback. Through observation of the existing techniques described in the literature, we identified some basic requirements for object selection techniques:

1. Accuracy. The technique must provide the user with an easy and precise way to select the desired object, independent of its size, location or orientation, minimizing wrong or null selections, which slow down user performance or may even cause the user to abandon the task.

2. Reachability. It must be possible to reach any desired object without moving through the environment.

3. Time saving. Users, especially advanced ones, like to minimize the time needed to select an object. This is precision dependent, because incorrect selections waste time.

4. Comfort. The user must feel comfortable during the whole procedure, and self-confident concerning correct selection.

5. Simplicity. Device manipulation must be intuitive and rely on few controls, yielding easy memorization and a short learning time.

6. Immersion sense. Traditional 2D interaction devices, like mouse and keyboard, as well as menus, lists, labels and buttons, should be avoided when adopting wearable devices. These elements can clutter the scene visualization and disrupt the user's attention, and when the visualization device has a poor resolution this becomes an even more serious problem.

7. Error and constraint tolerance. It must be possible to use devices with low-quality interaction or restricted degrees of freedom without losing performance. A useful selection technique must tolerate inaccurate or limited interaction devices, so that it can be employed on various hardware platforms.

Several authors report the implementation of selection and manipulation techniques in virtual environments, like virtual hands and virtual pointers, either with the regular mouse or with data gloves [Poupyrev et al. 1996, Bowman and Wingrave 2001, Nedel et al. 2003]. In the virtual hand technique, the user's hand is explicitly represented in the virtual environment. Its position and orientation in the virtual environment are provided to the system by specific input devices. To select an object, the user should intercept the object with the virtual hand and inform the application of his/her intention. This can be done simply by pressing a button on the input device or, when the user is interacting by means of a data glove, by performing a specific gesture with the fingers or the whole hand [Poupyrev et al. 1996, Bowman and Wingrave 2001, Nedel et al. 2003]. However, as stated by Poupyrev et al. [Poupyrev et al. 1998], selection by pointing provides better performance than virtual hand-based techniques, because the latter require more physical hand movement from the user. So, although fulfilling most of the requirements above, virtual hand-based techniques fail to comply with the comfort requirement, which can be decisive for user performance.

Selection by pointing can be accomplished through many methods, but we restrict ourselves to those that can be implemented with the data glove. Ray-casting is one of the most used techniques to select and manipulate objects [Poupyrev et al. 1998, Bowman et al. 1999]. It consists of the representation of an infinite, semi-transparent ray cast from the user's hand towards the point indicated by the hand orientation. To complete the selection, once the object of interest is pointed at, the user issues a specific command to the application using the input device. This technique is very powerful, but it can fail the accuracy requirement when high-precision selection is required, especially in a dense environment or with faraway objects [Poupyrev et al. 1998, Bowman et al. 1999].
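In practice, selection by ray-casting reduces to an intersection test between the pointing ray and the candidate objects, with the closest hit taken as the selected one. The sketch below is illustrative only and is not code from any of the cited systems; the vector type, the object ids and the example scene are assumptions, and spheres are used because they are also the kind of target employed in the testbed of Section 4.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical helper types for the sketch; none of the cited papers publish code.
struct Vec3 { float x, y, z; };
struct Sphere { int id; Vec3 center; float radius; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Returns the id of the closest sphere hit by the ray origin + t*dir (t >= 0),
// or -1 if the ray misses every object. 'dir' is assumed to be normalized.
int pickByRay(const Vec3& origin, const Vec3& dir, const std::vector<Sphere>& scene) {
    int bestId = -1;
    float bestT = 1e30f;
    for (const Sphere& s : scene) {
        Vec3 oc = sub(origin, s.center);
        // Solve t^2 + 2*b*t + c = 0 for a unit-length direction.
        float b = dot(oc, dir);
        float c = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0.0f) continue;           // the ray misses this sphere
        float t = -b - std::sqrt(disc);      // nearest intersection
        if (t >= 0.0f && t < bestT) { bestT = t; bestId = s.id; }
    }
    return bestId;
}

int main() {
    std::vector<Sphere> scene = {{1, {0.0f, 0.0f, -5.0f}, 0.5f}, {2, {2.0f, 0.0f, -8.0f}, 0.5f}};
    std::printf("selected object: %d\n", pickByRay({0, 0, 0}, {0, 0, -1}, scene)); // prints 1
    return 0;
}
```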

Jacob [Jacob 1990] describes a technique for selecting an object by tracking the user's eye movement and determining where the user is looking. This gaze-directed technique [Bowman et al. 2005] has been used in several works [Bleser and Sibert 1990, Tanaka 1999, Yamato et al. 2000] and seems to fit the requirements of reachability, time saving, comfort, simplicity and immersion well. However, it is not accurate, failing the first requirement.

In our technique, we extend the gaze-directed technique by controlling the location of a cursor on the target object through zooming into it and enlarging its projection. We use the head-mounted display orientation to indicate the target's direction and the data glove openness to control zooming. Depending on the hand-closing gesture, the image is enlarged, facilitating cursor positioning on the target object. During zooming, re-orientation of the HMD is attenuated to avoid sudden changes. A closed hand corresponds to the confirmation of the selection. In this way, eyescope fulfills all the requirements listed above, as we discuss in the following sections.

3. Eyescope

In this section, we describe the eyescope technique in detail, showing the mapping between the input from both the head-mounted display and the data glove and the operations performed in the virtual world. The essential characteristic of our extended gaze-directed selection is the control of the viewing orientation during zooming, when sudden changes of orientation could make the user lose control of the cursor.

3.1. Technique design

When the user is immersed in a virtual world, he/she can look in any direction, as in the real world. At the center of the view there is a small target in the form of a cross, which is used to point to the desired object. Hand-closing gestures control the zooming level, which actually provides a way of enlarging the scene visualization. In order to have more accuracy, the user can point to a cluster of objects and activate the zoom-in command by slowly closing the hand wearing the data glove, causing the scaling of the scene. This allows a more precise indication of the object of interest. To change the target, it is possible to zoom out by opening the hand, returning to the initial state so that a new target can be chosen. The actual selection of the object can be made at any time, just by closing the hand when the cross cursor is on the target's projection.

3.2. Range definition

In the virtual world, users are allowed to rotate 360 degrees around their own body. The up and down head movement is limited to 180 degrees. Combining these two rotations gives the user full angular navigation. A roll movement with the head (usually also within 180 degrees) to the left or to the right is also allowed, but it affects only the scene visualization. The zoom level starts at a user-proportional view of the environment, i.e., a 1:1 view. The maximum zoom is limited to 500 times the normal scene size in the application. Moreover, zooming is executed through the enlargement of the projected scene and not by actually approaching the objects, so passing through objects is avoided.
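To make the range definition concrete, the following sketch (illustrative only; the testbed's source code is not reproduced here, and the normalized-device-coordinate convention is an assumption) clamps the zoom level to the 1:1-to-500x range and applies it as a pure image-space magnification of projected coordinates, so that the camera position never changes.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical names for the sketch; coordinates are assumed to be normalized
// device coordinates in [-1, 1] with the cross cursor at the origin.
struct Point2 { float x, y; };

constexpr float kMinZoom = 1.0f;    // 1:1 view of the environment
constexpr float kMaxZoom = 500.0f;  // upper bound stated in Section 3.2

float clampZoom(float zoom) {
    return std::min(kMaxZoom, std::max(kMinZoom, zoom));
}

// Enlarges the projection about the screen center: the center stays fixed and
// every other projected point moves away from it, i.e. the image is magnified
// without ever moving the camera (so the user cannot pass through objects).
Point2 magnify(Point2 projected, float zoom) {
    zoom = clampZoom(zoom);
    return {projected.x * zoom, projected.y * zoom};
}

int main() {
    Point2 p = {0.10f, -0.05f};   // projected position of some object
    Point2 q = magnify(p, 4.0f);  // the same object after zooming in 4x
    std::printf("(%.2f, %.2f) -> (%.2f, %.2f)\n", p.x, p.y, q.x, q.y);
    return 0;
}
```

In an OpenGL application such as the testbed described in Section 4, an equivalent effect can be obtained by narrowing the viewing frustum by the zoom factor.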

3.3. Mapping hand gestures to operations in the virtual world

Hand gestures are used both to control zooming and to confirm the selection of the object pointed at by the cross target. A first approach to controlling the zooming level was to make it proportional to the hand-closing movement, i.e., the more closed the hand, the greater the zoom level. However, that solution proved inadequate due to device inaccuracy and noise (fast oscillations, and even some oscillations when the hand is static). Both introduced instability in the visualization of objects, keeping their projections changing constantly in the final image. Noise during the capture of the viewing direction makes the objects oscillate vertically and horizontally, and this inconvenience would be worse with a conventional zoom, which amplifies the oscillations. So, with the zooming level proportional to the degree of hand closure, noise would cause the size of the objects to change repeatedly.

The alternative adopted for zooming control was a state-based control, with the closing states of the hand classified into four situations: entirely open hand, almost open hand, almost closed hand, and entirely closed hand. This allows the definition of larger ranges around the boundary values defining the hand's states, so the noise is rarely high enough to cause a sudden change of state. Also, the noise can be smoothed by filtering: we used a simple mean filter considering only the last five or ten measured values, in order to avoid delay in the device's response time while smoothing the transition between zooming levels. Figure 2 shows data collected from the data glove for the different hand situations (Figure 1) over a certain time: fully open, almost open, almost closed and closed hand. Even with the noise peaks in the graph of Figure 2, the collected data remain within the same interval, thus avoiding problems related to low-quality devices.

Figure 2. Zooming intervals with noise

When the user starts closing the data glove and the hand is almost open, zooming-in is activated and the focused image starts to enlarge proportionally. An almost closed hand causes zooming to stop, and the zooming level is kept steady. A closed hand means that the object pointed at by the target should be marked as selected. A fully open hand means no zoom, or terminates zooming-in with a smooth return to the initial state. In terms of implementation, to increase performance, zooming is calculated in image space, the objects being scaled after their projection.
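A minimal sketch of this state-based control is shown below. It is illustrative only: the 0-to-1 closure scale, the ten-sample window and the threshold values are assumptions, since the text only requires the ranges to be wide enough for noise to rarely cross a state boundary.

```cpp
#include <cstddef>
#include <cstdio>
#include <deque>
#include <numeric>

// Possible hand states from Section 3.3 and the zoom action each one triggers.
enum class HandState { Open, AlmostOpen, AlmostClosed, Closed };

class HandClassifier {
public:
    explicit HandClassifier(std::size_t window = 10) : window_(window) {}

    // 'closure' is the averaged finger-flexion reading: 0 = fully open, 1 = fully closed.
    HandState update(float closure) {
        samples_.push_back(closure);
        if (samples_.size() > window_) samples_.pop_front();  // keep only the last N readings
        float mean = std::accumulate(samples_.begin(), samples_.end(), 0.0f) / samples_.size();

        // Wide, non-overlapping ranges (threshold values assumed) so that sensor
        // noise is rarely large enough to push the mean across a state boundary.
        if (mean < 0.25f) return HandState::Open;          // no zoom / smooth return to 1:1
        if (mean < 0.55f) return HandState::AlmostOpen;    // zooming in
        if (mean < 0.85f) return HandState::AlmostClosed;  // hold the current zoom level
        return HandState::Closed;                          // confirm the selection
    }

private:
    std::size_t window_;
    std::deque<float> samples_;
};

int main() {
    HandClassifier classifier;
    // Simulated, slightly noisy readings while the user slowly closes the hand.
    const float readings[] = {0.05f, 0.10f, 0.32f, 0.41f, 0.45f, 0.62f, 0.70f, 0.93f, 0.97f, 0.99f};
    for (float r : readings) {
        std::printf("closure %.2f -> state %d\n", r, static_cast<int>(classifier.update(r)));
    }
    return 0;
}
```

Each returned state then maps directly to an action: AlmostOpen increases the zoom level, AlmostClosed holds it, Closed confirms the selection, and Open smoothly returns the view to the 1:1 state.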

3.4. Mapping head movements to operations in the virtual world

As mentioned before, the head orientation controls the viewing direction in the virtual world. The observer is not allowed to move through the environment: he/she can only rotate the body, thus rotating the head-mounted display. By doing this, an object is brought to the center of the field of view and the selection command can be issued. The observation direction in the virtual environment is defined by the head-mounted display angular position, its coordinates being recorded constantly and transferred to the virtual environment to set the camera parameters according to the way the HMD is oriented in the real world. Such control is simple and intuitive [Jacob 1990], so the user can just move the head as if he/she were physically in the virtual world. Once the object of interest is at the center of his/her vision, with the cross target on it, the selection command can be sent by closing the hand wearing the data glove.

With the introduction of zooming, a finer control of how the HMD orientation affects the viewing direction is needed. In the simple gaze-directed selection technique, the angles obtained from the HMD trackers are used directly in the virtual environment, imposing a viewing direction at a 1:1 scale, as shown in Figure 3(b). Zooming without changing orientation would then be a simple matter of enlarging the image.

Figure 3. Correlating mappings: real-world movement (a), simple mapping (b), eyescope mapping (c).

However, when a zooming command is issued by a hand-closing movement, the relation between the HMD orientation and the viewing direction must be adapted to prevent large changes in the viewing direction. Instead of simply passing the HMD orientation to the virtual world, an angular variation is computed as the difference between the new HMD orientation and the previous one. These relative values are scaled down and then used to change the viewing direction. Thus, the larger the zooming level, the smaller the angular variation in the viewing direction for the same variation in the head-mounted display orientation, as shown in Figure 3(c).

Of course, the use of this non-proportional mapping has a side effect: the scaling causes a difference between the head-mounted display angular position and the virtual-world angular position. During zooming out, the angular positions are adjusted accordingly, so that when the fully open hand situation is reached the orientations are the same again: the zooming level reduction and the reference realignment occur simultaneously, in a gradual but fast way. In this way, we maintain accuracy in the control of the viewing direction while providing a fast way to point at the target by means of the zooming level increase.
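The sketch below illustrates this relative, zoom-attenuated mapping for yaw and pitch. It is not the application's source code: the class layout, the per-frame update interface and the realignment rate used during zooming out are assumptions.

```cpp
#include <cstdio>

struct Orientation { float yaw, pitch; };  // degrees; roll omitted for brevity

class ViewController {
public:
    void setReference(const Orientation& hmd) { lastHmd_ = hmd; view_ = hmd; }

    // Called every frame with the current HMD reading, the zoom level and whether
    // the user is currently zooming out (hand fully open).
    void update(const Orientation& hmd, float zoom, bool zoomingOut) {
        // Relative head motion since the previous frame.
        float dYaw   = hmd.yaw   - lastHmd_.yaw;
        float dPitch = hmd.pitch - lastHmd_.pitch;
        lastHmd_ = hmd;

        // The larger the zoom level, the smaller the change in viewing direction
        // for the same head movement; zoom == 1 reproduces the 1:1 mapping.
        view_.yaw   += dYaw   / zoom;
        view_.pitch += dPitch / zoom;

        // While zooming out, gradually realign the virtual viewing direction with
        // the physical HMD orientation, so that both coincide again by the time
        // the hand is fully open and the zoom level is back to 1:1.
        if (zoomingOut) {
            const float k = 0.2f;  // realignment rate per frame (assumed value)
            view_.yaw   += k * (hmd.yaw   - view_.yaw);
            view_.pitch += k * (hmd.pitch - view_.pitch);
        }
    }

    Orientation viewDirection() const { return view_; }

private:
    Orientation lastHmd_{0.0f, 0.0f};
    Orientation view_{0.0f, 0.0f};
};

int main() {
    ViewController vc;
    vc.setReference({0.0f, 0.0f});
    vc.update({10.0f, 0.0f}, 5.0f, false);  // a 10-degree head turn at 5x zoom
    std::printf("view yaw: %.1f degrees\n", vc.viewDirection().yaw);  // prints 2.0
    return 0;
}
```

With zoom equal to 1 the mapping reduces to the usual 1:1 gaze-directed control, which is exactly the behavior of the baseline technique used in the comparison.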

4. Testbed Application

To evaluate the technique we built a simple three-dimensional environment, containing spherical objects floating over a chessboard-like floor. This 3D environment was built as a GLUT-based OpenGL application written in the C++ programming language.

4.1. Apparatus

We used a VFX3D head-mounted display with yaw, pitch and roll orientation sensors, a maximum screen resolution of 640 by 480 pixels, and a 60° frustum. For capturing hand gestures, we used a 5DT Data Glove from Fifth Dimension Technologies. This data glove has a rotation and orientation sensor as well as five flexion sensors for the fingers.

4.2. Scenario and experiment design

The application takes as input a series of text files. Each file describes a collection of objects in the 3D virtual environment; for each object the file contains an id, its (x, y, z) coordinates, its size and its color. We designed 9 scenarios (Figure 4) using random distributions, regular clusters of objects and partially occluded clusters of objects. A target object is specified in each input file, placed in a different situation in each scenario. In all cases the user has the task of selecting the target object.

5. Evaluation

To evaluate the eyescope technique we chose to compare it with the similar gaze-directed selection technique. The comparison took into account data logged by the application as well as data collected from questionnaires.

5.1. Hypotheses

We considered three hypotheses, as explained below.

H1. Eyescope is more efficient in the cases of far, small or partially occluded objects. With this hypothesis we verified the time spent selecting objects in dense or faraway clusters, since gaze-directed selection was known not to perform well in such conditions.

H2. Eyescope is always more accurate, avoiding wrong selections. With this hypothesis we verified the accuracy and error tolerance of eyescope.

H3. Eyescope will be judged better than gaze-directed selection, and the users will feel more confident with it. This hypothesis was verified through the analysis of post-experiment questionnaires.

5.2. Independent and dependent variables

As independent variables we used: (1) age; (2) gender; (3) previous use of virtual reality devices; (4) previous use of 3D applications, like games, CAD, etc.; (5) previous participation in experimental studies; (6) the technique being used (eyescope = with zoom / gaze-directed = without zoom); and, finally, (7) the scenario.

Figure 4. Scenarios 1 to 9 (from left to right, top to bottom)

We defined as dependent variables (1) task completion time and (2) number of errors until task completion. Task completion time was measured from the moment the scenario was displayed, along with the instruction, until the correct selection of the object of interest. The number of errors was computed from the number of wrong selections (selection of an object different from the one specified in the task) and the number of clicks issued in empty space.

6. Experiment

The experiment consisted of the selection of a target object in each scenario. Data about selection time and number of errors during selection were recorded automatically by the application, while data about the users and their satisfaction were collected through a simple questionnaire.

6.1. Subjects and tasks

We performed the tests with 24 subjects. Our population was heterogeneous, consisting of 9 women and 15 men aged from 19 to 37 years. Each of them tested the two techniques with the 9 different scenarios (Figure 4), resulting in 18 tests per user and a total of 432 tests.

6.2. Procedure

Before the real test, the users were given a training scene (Figure 5) where they were able to practice with the techniques. The duration of this phase was defined by each user: they could skip it and start the real test at any time. The order in which a user tested each of the techniques was interleaved, so that 12 series of tests were performed starting with eyescope and 12 others starting with gaze-directed selection.

Figure 5. User in training phase

A pre-test form answered by the users provided us with the following information about the subjects: 45% of the users had already had experience with virtual reality equipment (such as gloves, HMDs and 3D mice), all of them had access to or worked with computers daily, 59% had never participated in user tests, and 55% were not used to highly immersive interactive applications.

As soon as each scene was loaded for testing, the instruction with the goal was shown at the bottom of the screen, always notifying the user about which technique was enabled at that time. We made use of background music to improve the immersive sensation in our tests [Kyriakakis 2001], and we also played sounds as feedback for some events, for instance to indicate a wrong object selection or task completion. The application was instrumented to collect all desired information through logging. As each user test ended, a log file was generated containing the time the user spent in each scenario and how many empty and wrong selections were made.

7. Results and discussion

For the statistical analysis of the collected data, 9 variables were considered: sex, age, previous use of virtual reality equipment (yes or no), familiarity with immersive environments (yes or no), previous participation in experiments, test scenario, technique (eyescope: zoom = yes; gaze-directed: zoom = no), completion time and number of errors. Before analyzing each hypothesis, we verified a possible correlation of completion time and number of errors with the other 7 variables. Table 1 shows the correlation between each pair of variables. In this table, only the absolute value is relevant. The lower left side of the table shows the isolated correlations, whereas the upper right side shows the correlations under the influence of all the other variables. The analysis of these results shows that the number of errors and the task completion time are highly correlated with the use of zoom (the eyescope technique).

Also, according to these results, it can be observed that the variables Time and Errors are not relevantly influenced by the other variables; they are closely correlated only with each other and with Zoom.

Table 1. Correlations table

7.1. Hypothesis H1

Testing the first hypothesis, "Eyescope is more efficient in the cases of far, small or partially occluded objects", required the analysis of task completion time in all scenarios, recalling that scenarios 2 and 4 corresponded to far objects; scenarios 3, 6 and 9 exemplified occluded objects; and scenarios 1, 5, 7 and 8 represented randomly distributed objects. Time data for all 9 scenarios can be observed in Figure 6(a). Task completion times for scenarios 1, 3, 5 and 6 with the eyescope technique were greater than those for completing the task with the simple gaze-directed selection technique, but in scenarios 2, 4 and 9 users performed much better with the eyescope technique. On average, taking all the scenarios into account, users were 1.72 seconds faster when using eyescope. An ANOVA test applied to the data showed that the differences are significant only in scenarios 2 and 6. In scenario 2, users performed better when using eyescope (F = ; p < ), while in scenario 6 task completion time was significantly lower when using the gaze-directed technique (F = 6.754; p < ). When considering all the scenarios, the ANOVA test showed a significant difference between the eyescope and gaze-directed selection techniques (F = ; p < ), with eyescope showing a mean time of seconds and a standard deviation of 6.47, and gaze-directed selection a mean time of seconds and a standard deviation of .

7.2. Hypothesis H2

The second hypothesis, "Eyescope is always more accurate, avoiding wrong selections", required testing the number of errors during the planned tasks. The overall number of errors was smaller when the users employed the new technique. This was even more evident in the scenes where the objects were closest to each other, partially occluded or very distant from the user. Figure 6(b) shows the number of errors per scenario. The ANOVA test also showed a significant difference between the eyescope and gaze-directed selection techniques (F = 38.291; p < E-09). When observing each scenario separately, all scenarios except 1 and 6 showed significant differences. In scenario 2, where the objects are distant from the user and closest to each other, the average number of errors decreased from 4.17 to 0.29 with eyescope, this difference being significant (F = 9.524; p < ). A large difference was also observed for scenario 4, where the objects are rather clustered (F = 6.872; p < ), and for scenario 9, with partially occluded objects (F = 9.632; p < ).

Figure 6. Task completion time for each scenario (a), and number of errors (wrong selections) in each scenario (b).

7.3. Hypothesis H3

From the questionnaires, it was observed that the proposed selection technique showed a good acceptance rate among the users. Only 2 of the 24 users preferred the gaze-directed selection technique in the majority of the scenes. While only 29.71% said the zoom sometimes confused them, most of them (92%) said the proposed technique could be used instead of the other one in all the presented cases.

8. Conclusions

We proposed eyescope, an extension of the well-known gaze-directed technique to select objects in an immersive environment using a head-mounted display and a data glove. Eyescope provides a zoom control to enlarge the area where the object of interest is located. A key characteristic is that it controls the variation of the viewing orientation during zooming, to avoid sudden changes of orientation that could make the user lose control of the cursor used to point at the target object.

Tests were performed with 24 users selecting objects in 9 different scenarios. From the results we concluded that the envisioned acceptance and efficiency of eyescope were confirmed, both for task completion time and for the number of wrong selections. Taking all the scenes into account, the total time to accomplish the tasks dropped from seconds with gaze-directed selection to seconds with eyescope. In the same way, the mean number of errors until the user had successfully completed all the tasks dropped from to . Therefore, the technique has shown itself to be effective in all the situations where fine control was necessary. Zoom helped the users to reduce the number of erroneous selections as well as the time needed to select the correct object.

Regarding future work, we intend to use the other degrees of freedom of the data glove to encode different commands, allowing either navigation through the environment or manipulation of the selected object.

9. Acknowledgments

This work has been supported by CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and CAPES (Comissão de Aperfeiçoamento de Pessoal de Nível Superior), both Brazilian national science funding agencies. We also thank the colleagues who participated as subjects for their help and suggestions.

References

Bleser, T. W. and Sibert, J. (1990). Toto: a tool for selecting interaction techniques. In UIST '90: Proceedings of the 3rd Annual ACM SIGGRAPH Symposium on User Interface Software and Technology, New York, NY, USA. ACM Press.

Bowman, D., Kruijff, E., LaViola, J., and Poupyrev, I. (2005). 3D User Interfaces: Theory and Practice. Addison-Wesley.

Bowman, D. A., Johnson, D., and Hodges, L. F. (1999). Testbed evaluation of VE interaction techniques. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST '99).

Bowman, D. A. and Wingrave, C. A. (2001). Design and evaluation of menu systems for immersive virtual environments. In Proceedings of the IEEE Virtual Reality Conference 2001 (VR '01).

Jacob, R. J. K. (1990). What you look at is what you get: eye movement-based interaction techniques. In CHI '90: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 11-18, New York, NY, USA. ACM Press.

Kyriakakis, C. (2001). Fundamental and technological limitations of immersive audio systems. In Jeffay, K. and Zhang, H., editors, Readings in Multimedia Computing and Networking. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.

Nedel, L. P., Freitas, C. M. D. S., Jacob, L. J., and Pimenta, M. (2003). Testing the use of egocentric interactive techniques in immersive virtual environments. In Interact 2003. IFIP.

Poupyrev, I., Billinghurst, M., Weghorst, S., and Ichikawa, T. (1996). The go-go interaction technique: non-linear mapping for direct manipulation in VR. In UIST '96: Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology, pages 79-80, New York, NY, USA. ACM Press.

Poupyrev, I., Weghorst, S., Billinghurst, M., and Ichikawa, T. (1998). Egocentric object manipulation in virtual environments: empirical evaluation of interaction techniques. In Computer Graphics Forum (EUROGRAPHICS '98 issue).

Tanaka, K. (1999). A robust selection system using real-time multi-modal user-agent interactions. In IUI '99: Proceedings of the 4th International Conference on Intelligent User Interfaces, New York, NY, USA. ACM Press.

Yamato, M., Inoue, K., Monden, A., Torii, K., and Matsumoto, K. (2000). Button selection for general GUIs using eye and hand together. In AVI '00: Proceedings of the Working Conference on Advanced Visual Interfaces, New York, NY, USA. ACM Press.

Touch Feedback in a Head-Mounted Display Virtual Reality through a Kinesthetic Haptic Device Touch Feedback in a Head-Mounted Display Virtual Reality through a Kinesthetic Haptic Device Andrew A. Stanley Stanford University Department of Mechanical Engineering astan@stanford.edu Alice X. Wu Stanford

More information

Interaction: Full and Partial Immersive Virtual Reality Displays

Interaction: Full and Partial Immersive Virtual Reality Displays Interaction: Full and Partial Immersive Virtual Reality Displays Jesper Kjeldskov Aalborg University Department of Computer Science Fredrik Bajers Vej 7 DK-9220 Aalborg East Denmark jesper@cs.auc.dk Abstract.

More information

AN APPROACH TO 3D CONCEPTUAL MODELING

AN APPROACH TO 3D CONCEPTUAL MODELING AN APPROACH TO 3D CONCEPTUAL MODELING Using Spatial Input Device CHIE-CHIEH HUANG Graduate Institute of Architecture, National Chiao Tung University, Hsinchu, Taiwan scottie@arch.nctu.edu.tw Abstract.

More information

Welcome. My name is Jason Jerald, Co-Founder & Principal Consultant at Next Gen Interactions I m here today to talk about the human side of VR

Welcome. My name is Jason Jerald, Co-Founder & Principal Consultant at Next Gen Interactions I m here today to talk about the human side of VR Welcome. My name is Jason Jerald, Co-Founder & Principal Consultant at Next Gen Interactions I m here today to talk about the human side of VR Interactions. For the technology is only part of the equationwith

More information

Integration of Hand Gesture and Multi Touch Gesture with Glove Type Device

Integration of Hand Gesture and Multi Touch Gesture with Glove Type Device 2016 4th Intl Conf on Applied Computing and Information Technology/3rd Intl Conf on Computational Science/Intelligence and Applied Informatics/1st Intl Conf on Big Data, Cloud Computing, Data Science &

More information

Toward an Augmented Reality System for Violin Learning Support

Toward an Augmented Reality System for Violin Learning Support Toward an Augmented Reality System for Violin Learning Support Hiroyuki Shiino, François de Sorbier, and Hideo Saito Graduate School of Science and Technology, Keio University, Yokohama, Japan {shiino,fdesorbi,saito}@hvrl.ics.keio.ac.jp

More information

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation

Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Hiroshi Ishiguro Department of Information Science, Kyoto University Sakyo-ku, Kyoto 606-01, Japan E-mail: ishiguro@kuis.kyoto-u.ac.jp

More information

Out-of-Reach Interactions in VR

Out-of-Reach Interactions in VR Out-of-Reach Interactions in VR Eduardo Augusto de Librio Cordeiro eduardo.augusto.cordeiro@ist.utl.pt Instituto Superior Técnico, Lisboa, Portugal October 2016 Abstract Object selection is a fundamental

More information

ithrow : A NEW GESTURE-BASED WEARABLE INPUT DEVICE WITH TARGET SELECTION ALGORITHM

ithrow : A NEW GESTURE-BASED WEARABLE INPUT DEVICE WITH TARGET SELECTION ALGORITHM ithrow : A NEW GESTURE-BASED WEARABLE INPUT DEVICE WITH TARGET SELECTION ALGORITHM JONG-WOON YOO, YO-WON JEONG, YONG SONG, JUPYUNG LEE, SEUNG-HO LIM, KI-WOONG PARK, AND KYU HO PARK Computer Engineering

More information

MOBAJES: Multi-user Gesture Interaction System with Wearable Mobile Device

MOBAJES: Multi-user Gesture Interaction System with Wearable Mobile Device MOBAJES: Multi-user Gesture Interaction System with Wearable Mobile Device Enkhbat Davaasuren and Jiro Tanaka 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8577 Japan {enkhee,jiro}@iplab.cs.tsukuba.ac.jp Abstract.

More information

Ubiquitous Computing Summer Episode 16: HCI. Hannes Frey and Peter Sturm University of Trier. Hannes Frey and Peter Sturm, University of Trier 1

Ubiquitous Computing Summer Episode 16: HCI. Hannes Frey and Peter Sturm University of Trier. Hannes Frey and Peter Sturm, University of Trier 1 Episode 16: HCI Hannes Frey and Peter Sturm University of Trier University of Trier 1 Shrinking User Interface Small devices Narrow user interface Only few pixels graphical output No keyboard Mobility

More information

Virtual Object Manipulation using a Mobile Phone

Virtual Object Manipulation using a Mobile Phone Virtual Object Manipulation using a Mobile Phone Anders Henrysson 1, Mark Billinghurst 2 and Mark Ollila 1 1 NVIS, Linköping University, Sweden {andhe,marol}@itn.liu.se 2 HIT Lab NZ, University of Canterbury,

More information

UMI3D Unified Model for Interaction in 3D. White Paper

UMI3D Unified Model for Interaction in 3D. White Paper UMI3D Unified Model for Interaction in 3D White Paper 30/04/2018 Introduction 2 The objectives of the UMI3D project are to simplify the collaboration between multiple and potentially asymmetrical devices

More information

The use of gestures in computer aided design

The use of gestures in computer aided design Loughborough University Institutional Repository The use of gestures in computer aided design This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation: CASE,

More information

tracker hardware data in tracker CAVE library coordinate system calibration table corrected data in tracker coordinate system

tracker hardware data in tracker CAVE library coordinate system calibration table corrected data in tracker coordinate system Line of Sight Method for Tracker Calibration in Projection-Based VR Systems Marek Czernuszenko, Daniel Sandin, Thomas DeFanti fmarek j dan j tomg @evl.uic.edu Electronic Visualization Laboratory (EVL)

More information

Assessing the Effects of Orientation and Device on (Constrained) 3D Movement Techniques

Assessing the Effects of Orientation and Device on (Constrained) 3D Movement Techniques Assessing the Effects of Orientation and Device on (Constrained) 3D Movement Techniques Robert J. Teather * Wolfgang Stuerzlinger Department of Computer Science & Engineering, York University, Toronto

More information

Tactile Interface for Navigation in Underground Mines

Tactile Interface for Navigation in Underground Mines XVI Symposium on Virtual and Augmented Reality SVR 2014 Tactile Interface for Navigation in Underground Mines Victor Adriel de J. Oliveira, Eduardo Marques, Rodrigo Peroni and Anderson Maciel Universidade

More information

Application of 3D Terrain Representation System for Highway Landscape Design

Application of 3D Terrain Representation System for Highway Landscape Design Application of 3D Terrain Representation System for Highway Landscape Design Koji Makanae Miyagi University, Japan Nashwan Dawood Teesside University, UK Abstract In recent years, mixed or/and augmented

More information

Integrating PhysX and OpenHaptics: Efficient Force Feedback Generation Using Physics Engine and Haptic Devices

Integrating PhysX and OpenHaptics: Efficient Force Feedback Generation Using Physics Engine and Haptic Devices This is the Pre-Published Version. Integrating PhysX and Opens: Efficient Force Feedback Generation Using Physics Engine and Devices 1 Leon Sze-Ho Chan 1, Kup-Sze Choi 1 School of Nursing, Hong Kong Polytechnic

More information

HUMAN COMPUTER INTERFACE

HUMAN COMPUTER INTERFACE HUMAN COMPUTER INTERFACE TARUNIM SHARMA Department of Computer Science Maharaja Surajmal Institute C-4, Janakpuri, New Delhi, India ABSTRACT-- The intention of this paper is to provide an overview on the

More information

An Implementation Review of Occlusion-Based Interaction in Augmented Reality Environment

An Implementation Review of Occlusion-Based Interaction in Augmented Reality Environment An Implementation Review of Occlusion-Based Interaction in Augmented Reality Environment Mohamad Shahrul Shahidan, Nazrita Ibrahim, Mohd Hazli Mohamed Zabil, Azlan Yusof College of Information Technology,

More information

COLLABORATIVE VIRTUAL ENVIRONMENT TO SIMULATE ON- THE-JOB AIRCRAFT INSPECTION TRAINING AIDED BY HAND POINTING.

COLLABORATIVE VIRTUAL ENVIRONMENT TO SIMULATE ON- THE-JOB AIRCRAFT INSPECTION TRAINING AIDED BY HAND POINTING. COLLABORATIVE VIRTUAL ENVIRONMENT TO SIMULATE ON- THE-JOB AIRCRAFT INSPECTION TRAINING AIDED BY HAND POINTING. S. Sadasivan, R. Rele, J. S. Greenstein, and A. K. Gramopadhye Department of Industrial Engineering

More information