A Human Subjects Study on the Relative Benefit of Immersive Visualization Technologies
Derrick Turner


A Human Subjects Study on the Relative Benefit of Immersive Visualization Technologies

Derrick Turner

A project submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for the degree of Master of Science

Daniel Ames, Chair
Kevin Franke
Gus Williams

Department of Civil and Environmental Engineering
Brigham Young University
June 2014

Copyright 2014 Derrick Turner
All Rights Reserved


ABSTRACT

A Human Subjects Study on the Relative Benefit of Immersive Visualization Technologies

Derrick Turner
Department of Civil and Environmental Engineering, BYU
Master of Science

Large-scale stereoscopic immersive visualization environments typically include 3D stereo displays and head/hand tracking to create an immersive user experience. These components add cost and complication to such a system that may not be warranted if they do not significantly improve data navigation and interpretation. This paper presents a two-part human subjects study investigating the relative value of head tracking and stereoscopic technologies in terms of improved data navigation and interpretation. Ninety-six individuals performed specified tasks using several datasets in one of four system configurations: motion tracking with stereoscopic 3D, motion tracking without stereoscopic 3D, no motion tracking with stereoscopic 3D, and no motion tracking without stereoscopic 3D. Subjects were not informed of their specific configuration; each task was timed, and a score was assigned based on task performance accuracy. Results from Part A of the study (simple navigation, measuring distance, and image interpretation) indicated a lack of statistically significant difference between the performance metrics of the test groups, whereas performance metrics in Part B of the study (complex navigation) were greatest in the case of head tracking without stereoscopic 3D. Some possible explanations of these results and their potential implications are provided.

Keywords: Stereoscopic 3D, Head tracking, 3D immersive visualization


ACKNOWLEDGEMENTS

I would like to thank my wife, Anika, for always supporting me and dealing with the long hours spent in the Clyde Building. I would also like to thank Dr. Ames for giving me the opportunity to further my studies.


TABLE OF CONTENTS

LIST OF FIGURES
1 Introduction
1.1 Virtual Environments
1.2 Stereoscopic 3D and Head Tracking
1.3 Research Goals
2 Methods
2.1 Hardware and Software
2.2 Human Subjects and Training
2.3 System Configurations
2.4 Datasets
2.5 Tasks
2.6 Assessing Tasks
3 Results
3.1 Task A1: Human Foot Horizontal Orientation
3.2 Task A2: Human Foot Vertical Orientation
3.3 Task A3: Highway Overpass
3.4 Task A4: Change Detection of the Crowne Plaza Hotel, San Diego
3.5 Task B: Location A
3.6 Task B: Location B
3.7 Task B: Location D
3.8 Task B: End Location
3.9 Task B: Total Score
4 Discussion

5 Conclusion
References

LIST OF TABLES

Table 1: Student's t-test Calculated t-values from the Human Foot Horizontal Orientation Task
Table 2: Student's t-test Calculated t-values from the Human Foot Vertical Orientation Task
Table 3: Student's t-test Calculated t-values from the Change Detection of the Crowne Plaza Hotel Task
Table 4: Student's t-test Calculated t-values from Location A
Table 5: Student's t-test Calculated t-values from Location B
Table 6: Student's t-test Calculated t-values from Location D
Table 7: Student's t-test Calculated t-values from the End Location
Table 8: Student's t-test Calculated t-values from the Total Score


LIST OF FIGURES

Figure 1: The VuePod large-scale stereoscopic visualization system used for this study
Figure 2: Scan of the human foot horizontal orientation starting position
Figure 3: LiDAR scan of a bridge overpass
Figure 4: Artist's rendition of the Crowne Plaza Hotel property when built in 1966
Figure 5: 2005 LiDAR scan of the Crowne Plaza Hotel property
Figure 6: Plan view of the cavern
Figure 7: Required view at location A
Figure 8: Image of the end location
Figure 9: Human foot horizontal orientation scaled score results
Figure 10: Human foot vertical orientation scaled score results plot
Figure 11: Change detection Crowne Plaza Hotel scaled score results plot
Figure 12: Location A scaled score results
Figure 13: Location B scaled score results
Figure 14: Location D scaled score results
Figure 15: End location scaled score results
Figure 16: Total score scaled score results


1 INTRODUCTION

Large-scale stereoscopic immersive visualization has been demonstrated as a useful tool for viewing, interpreting, and analyzing scientific data (Koller, Lindstrom et al. 1995, Sulbaran and Baker 2000, Lin, Chen et al. 2013). Two key components of large-scale stereoscopic 3D immersive environments are stereoscopic displays and head tracking devices (Cruz-Neira, Sandin et al. 1992, Dodgson 2005). The approach assumes that stereoscopic images paired with head tracking create a more visually immersive experience, leading to better manipulation and understanding of data and thus more effective use of time and resources (Ware, Arthur et al. 1993, Dodgson 2005, Bowman and McMahan 2007). This paper presents a human subjects study of the relative value of stereoscopic 3D displays and head tracking devices with respect to simple data analysis and interpretation tasks.

1.1 Virtual Environments

A Cave Automatic Virtual Environment, or CAVE, is an immersive virtual reality environment that uses projectors or a large number of LCD video screens to show images on three, four, or six walls, typically in a cube-shaped room. CAVEs have been constructed in several different configurations for specific applications, but all are generally built with the goal of aiding scientists, engineers, technicians, teachers, and others by providing users with a viewer-centered perspective of complex data (Cruz-Neira, Sandin et al. 1992). Advances in computer science and technology have allowed CAVE systems to improve rapidly in recent years (Peterka, Kooima et al. 2008, DeFanti, Acevedo et al. 2011).

A common CAVE configuration uses four to six projection screens forming walls, a floor, and/or a ceiling, with a rear projector projecting onto each screen (Cruz-Neira, Sandin et al. 1992, Browning, Cruz-Neira et al. 1994, Sherman, O'Leary et al. 2010, DeFanti, Acevedo et al. 2011). In this environment, the user wears stereoscopic glasses that allow them to interact with the screens (DeFanti, Acevedo et al. 2011). Motion tracking with six degrees of freedom is provided by a sensor worn by the user that communicates the location of the user's head or hands to the software (DeFanti, Acevedo et al. 2011, Kreylos 2013). A similar configuration can be built with 3D LCD televisions and a motion tracking system (Hayden, Ames et al. 2014).

CAVE and related virtual environments are usually intended for one or more of three common uses: 1) as a tool for multi-dimensional spatial analysis and interaction, 2) as a platform for process-based simulation of dynamic geographic phenomena, and/or 3) as a workspace for multi-participant collaborative geographic experiments (Waly and Thabet 2002, Lin, Chen et al. 2013). In all cases, a virtual environment can serve as a means for users to better utilize and/or gain greater information from their data (Fisher, McGreevy et al. 1986). It has been argued that when a user is in control of a virtual environment, a feeling of immersion is created in which the user becomes a critical part of the displayed data; this allows the user to be visually stimulated in ways that do not happen when using a simple single 2D display computer system (Kruger, Bohn et al. 1995, Slater 2003).

1.2 Stereoscopic 3D and Head Tracking

Stereoscopic 3D displays that require the use of special glasses to create 3D images are generally considered a fundamental component of any CAVE or virtual environment.
Several hardware and software solutions for 3D glasses have been developed, including the common theater-style glasses, which use two polarized lenses to selectively pass to the eyes two distinct images emanating simultaneously from a single display surface. This passive approach is appealing due to the low cost and light weight of the glasses required. Other, active systems use a rapid shuttering system on each lens, synchronized with alternating left and right images on the display. In both cases, the viewer effectively sees two separate images on a single display surface (Dodgson 2005), creating the illusion of depth and volume within the image (Chiang 2013).

Stereoscopic 3D displays can be used to extract visual information on the depth, size, and position of objects to facilitate spatial analysis tasks. The creation of shadows in 3D space can assist in making the 3D images appear to have real volume (Hubona, Wheeler et al. 1999). These realistic aspects of stereoscopic 3D work by triggering the visual, auditory, and other sensory cues that users have experienced in the real world (Bowman and McMahan 2007).

Head tracking refers to the use of a device worn by a user to identify the exact position and orientation of the head, thereby providing an estimate of the user's view direction. Several highly reflective balls or markers are usually attached to the head-mounted device, and a wall- or ceiling-mounted instrument continuously measures the position of these markers to determine the exact position and orientation of the user in space relative to the tracking instrument. The tracking instrument sends this information to a computer, which modifies the current view with the correct size, shape, and location based on the position of the user in space (Chance, Gaunet et al. 1998).
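As a concrete illustration of the view update just described, the sketch below translates a virtual camera by the tracked head offset. The function name and all coordinates are hypothetical; a real CAVE renderer (such as Vrui) computes a full off-axis projection per screen rather than a simple translation.

```python
# Hypothetical sketch: shift the rendering camera to the tracked eye point.
# Coordinates are in meters, in an arbitrary screen-centered frame.

def view_translation(head_pos, screen_center):
    """Offset of the tracked head from the screen center, per axis."""
    return tuple(h - c for h, c in zip(head_pos, screen_center))

# Head tracked 0.3 m right of and 0.5 m above the screen center, 2 m back.
offset = view_translation((0.3, 1.7, 2.0), (0.0, 1.2, 0.0))
# The renderer applies this offset every frame; keeping latency low is what
# preserves the movement-parallax illusion described in the text.
```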
When head tracking is used and head position is properly measured, movement parallax is achieved (Dodgson 2005), a condition that prevents the user from noticing that the objects are changing position based on their movement, allowing users to move their head and body naturally (Gibson, Gibson et al. 1959, Billen, Kreylos et al. 2008). To achieve this result, it is important to have a tracking system that adds minimal latency when rendering the image based on the movements of the user, so that the user does not notice the objects and scenes changing.

Head tracking systems can also estimate position, motion, and orientation through the use of internal motion detection electronics. This approach is particularly useful as adapted to head-mounted displays (HMDs), which provide a virtual reality interface by generating an image on a small screen immediately in front of each eye, in accordance with the user's viewpoint (Koller, Lindstrom et al. 1995). The Oculus Rift and Google Glass are examples of current leading HMDs. HMDs can provide an extremely intimate and immersive single-user experience. The head-mounted marker/tracker system, in contrast, has the advantage of allowing multiple people to view a single scene together. However, because current CAVE systems lack support for tracking multiple users, other viewers see the viewpoint of the user being tracked and therefore a slightly distorted object, which can be visually unsettling (Dodgson 2005).

1.3 Research Goals

The remainder of this paper presents a two-part human subjects study to investigate the relative value of both stereoscopic displays and head tracking technology in terms of creating an immersive environment for data analysis and interpretation. A recent, unrelated study regarding the effects of stereoscopic 3D, head tracking, and field of regard concluded that stereoscopic 3D and head tracking enable users to perform better in immersive visualization environments (Ragan, Kopper et al. 2013). Because our study was designed and executed without any prior knowledge of this parallel research effort, our results can be viewed as a potential validation or qualification of those results. Our effort is also unique in that it is the first to focus heavily on a complex LiDAR-dataset-based navigation and interpretation problem. The following sections describe the research methods employed, detailed results, and some interpretation thereof.


2 METHODS

2.1 Hardware and Software

The human subjects study presented here was conducted using a low-cost stereoscopic immersive visualization system called the VuePod (Figure 1). The VuePod comprises twelve 55-inch 3D LCD televisions paired with a custom-built, high-end gaming computer containing three video cards, each of which sends simultaneous stereoscopic video output to four of the monitors (Hayden, Ames et al. 2014). The VuePod includes a motion tracking system produced by ARTtrack that uses two cameras to track the position of reflective balls or markers attached to glasses (for head tracking) and to a Nintendo Wii video game remote controller (for hand tracking) within a volume in front of the televisions. The VuePod computer supports software for both the Linux and Windows operating systems. For the current study, an open source software application, Vrui, was used. The Vrui VR Toolkit is a general-purpose virtual reality framework capable of outputting stereoscopic 3D images to multiple screens (Kreylos 2008).

Figure 1: The VuePod large-scale stereoscopic visualization system used for this study.

2.2 Human Subjects and Training

Ninety-six study participants were recruited by e-mail and verbal announcement in two groups of 48 each (one group for each part or phase of the study, hereafter termed Part A and Part B). The participants included both male and female individuals ages 18 to 30 years old. They were primarily undergraduate civil engineering students with moderate-to-high technical and computer skills. User suitability was assessed by a pre-participation survey completed by each user. The survey determined whether a potential user had any problems regarding vision, depth perception, balance, fine motor skills, or mobility. These challenges would potentially put users at a disadvantage, so they were addressed before the study. Once a user was determined suitable for the study, each signed a consent form covering the logistics of the study, confidentiality, risks, and compensation, as per the human subjects research policies established by the Brigham Young University Institutional Review Board.

A short training video was shown to each subject to instruct them in the use of the VuePod. The tutorial video shows how to move and rotate objects, orient a scene, fly through a scene, measure distances, and zoom in and out of scenes. These controls were taught by visually demonstrating how they were performed, with instructions explaining their purpose. The video had periodic breaks to allow the subject to practice using the VuePod until they felt comfortable with the controls and tools taught in the video. The configuration used during training was head tracking with stereoscopic 3D.

2.3 System Configurations

We created four different system configurations, to which each subject was randomly assigned: head tracking with stereoscopic 3D (TS), no head tracking with stereoscopic 3D (NTS), head tracking with no stereoscopic 3D (TNS), and no head tracking with no stereoscopic 3D (NTNS). After the video training, each subject was informed that the level of 3D immersion had been adjusted; however, they were not informed of exactly what changes had been made, which specific system configurations were being tested, or to which configuration they had been assigned. For example, for the NTNS group, all head tracking and stereoscopic 3D was disabled, whereas for the TS group, stereoscopic 3D and head tracking were both enabled. All subjects wore head tracking 3D glasses while performing tasks, regardless of their assigned system configuration.

2.4 Datasets

Part A of the study used three different datasets for specific visualization and interpretation tasks: 1) a magnetic resonance imaging (MRI) scan of a human foot retrieved from the Visible Human Project at the University of California, Davis (Kreylos 2000), 2) a ground-based LiDAR scan of a highway bridge overpass retrieved from the EarthScope Intermountain Seismic Belt LiDAR Project (NSF, USGS et al. 2008), and 3) an aerial LiDAR scan of a region of San Diego, California (San Diego 2005). After assessing results from Part A, we devised a Part B study using a single complex ground-based terrestrial laser scanner (TLS) scan of Crystal Cave, located in Sequoia National Park, California. The TLS scan depicts a 3D point cloud of the contours and characteristics of the inside of the cavern. The dataset was provided by the U.S. Department of Energy Idaho National Laboratory.

2.5 Tasks

For Part A of the study, each subject was asked to perform four different tasks (Tasks A1-A4) using the three datasets noted above. By random selection, each subject was assigned a system configuration and performed all four tasks using that same configuration. A total of 48 subjects participated in Part A of the study, with 12 subjects performing tasks in each of the four configuration groups (TS, NTS, TNS, and NTNS).

Tasks A1 and A2 tested the subjects' ability to perform simple orientation of a common 3D object, a human foot depicted via MRI scan, using the six-degrees-of-freedom controller and the large-scale visualization environment. First, subjects were shown the image oriented in a vertical position that revealed the exterior of the foot (Figure 2) and were instructed to rotate the foot 180° horizontally to show the inside of the foot in a vertical position. Each user was timed while performing this task, and the proctor assigned a score from 0 to 5 based on the accuracy with which the task was performed (5 indicated the greatest accuracy). Rotation was conducted using the Wii remote controller and its attached tracking system reflective balls.
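The random assignment described above (48 subjects split evenly across the four configurations) could be sketched as follows; the subject IDs, function name, and seed are hypothetical, not taken from the study.

```python
import random

CONFIGS = ["TS", "NTS", "TNS", "NTNS"]

def assign(subjects, seed=None):
    """Shuffle subjects and split them into equal-sized configuration groups."""
    rng = random.Random(seed)
    order = list(subjects)
    rng.shuffle(order)
    size = len(order) // len(CONFIGS)
    return {cfg: order[i * size:(i + 1) * size] for i, cfg in enumerate(CONFIGS)}

# 48 hypothetical subject IDs -> four groups of 12
groups = assign([f"S{n:02d}" for n in range(1, 49)], seed=1)
```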

Figure 2: Scan of the human foot horizontal orientation starting position

For Task A2, users were instructed to rotate the image from an upside-down vertical position showing the exterior of the foot to an upright vertical position. Each subject was timed while performing this task and was given a score from 0 to 5 based on the accuracy with which the task was performed, with 5 being the most accurate.

For Task A3, each subject was instructed to measure the vertical height from the bottom of the bridge overpass to the top of the road located below the overpass in the LiDAR scan shown in Figure 3. The vertical height measured was recorded, as well as the time required to complete the task. Measurement accuracy was assessed based on the location of the measurement in 3D space. For example, it was noted whether or not the measured line was vertical from all angles and whether the endpoints of the line were located on the LiDAR points or in front of or behind them. The purpose of this task was to determine the ease and accuracy of a simple analytical tool (distance measurement).
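The Task A3 accuracy criteria can be stated concretely: a correct measurement has both endpoints on LiDAR points and negligible horizontal displacement between them. The sketch below checks these conditions; the coordinates, tolerance, and function names are hypothetical.

```python
import math

def is_vertical(p1, p2, tol=0.05):
    """True if segment p1-p2 is vertical: horizontal offset below tol (m)."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) < tol

def measured_length(p1, p2):
    """Straight-line distance between the two measurement endpoints."""
    return math.dist(p1, p2)

road = (10.00, 5.00, 0.0)     # hypothetical point on the road surface
girder = (10.01, 5.02, 10.0)  # hypothetical point under the overpass
ok = is_vertical(road, girder)          # line reads as vertical
length = measured_length(road, girder)  # close to the true 10 m height
```

A tilted line inflates the measured distance, which matches the proctor observations that non-vertical lines produced heights longer than expected.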

Figure 3: LiDAR scan of a bridge overpass

For Task A4, each subject was shown an artist's rendition of the Crowne Plaza Hotel in San Diego, California, as it appeared when built in 1966 (Figure 4), as well as a LiDAR scan of the same property collected in 2005 (Figure 5). Each subject was given 3 minutes to analyze the LiDAR scan and the photo and to identify any notable differences between the two images. The number of differences and the specific differences themselves were recorded. There were approximately 10 notable differences between the original image and the LiDAR scan.

Figure 4: Artist's rendition of the Crowne Plaza Hotel property when built in 1966

Figure 5: 2005 LiDAR scan of the Crowne Plaza Hotel property

Part B of the study involved one complex dataset and its associated task (Task B). This task used the cavern LiDAR scan and combined navigation and orientation tools to provide a difficult task that would measure how well subjects were able to perform in the different head tracking and stereoscopic 3D configurations. We recruited 48 new participants who were each randomly assigned to perform the task in one of the four previously noted configurations, making four groups of 12 participants each. To begin the task, each user was provided with an iPad containing instructions, a map, and images of different locations within the cavern, as shown in Figure 6. Each user was instructed to use the map to navigate from the beginning of the cavern to the end of the cavern, following the path specified on the map. Subjects were instructed to stay within the cavern and not pass through the cavern walls.

Figure 6: Plan view of the cavern

Four locations were marked on the map as A, B, C, and D. Subjects were instructed to navigate to each specific location. Upon finding a location, they were instructed to orient the VuePod view such that it matched the associated static image shown on the iPad; for example, the required view for location A is shown in Figure 7. Upon completing this activity, the accuracy with which they oriented the view was measured by the study proctor. The subject was then instructed to start from a previously saved correct view of location A and navigate to location B, and so on. After location D was found and matched, the subject was instructed to find the end of the cavern and match an associated final image (Figure 8).

Figure 7: Required view at location A

Figure 8: Image of the end location

2.6 Assessing Tasks

Tasks A1 and A2 both measured the accuracy with which subjects were able to follow instructions and complete the task. After each task, the proctor observed the final position of the image once the subject stated that the task was complete and gave a score from 0 to 5, with 5 being the best possible score. Scores were assigned based on predetermined accuracy criteria, and the time required to complete the task was recorded.

Task A3 was assessed by determining how well the user reproduced the actual height of the overpass (10 m). The correct answer is achieved when the two endpoints of the line are located on specific LiDAR points and the line is vertical from all angles. The measurement length determined by each user was recorded, along with notes regarding accuracy and the time to complete the task.

Task A4 was assessed based on the number of differences identified. Each identified difference was recorded by the proctor and later assessed for correctness. We expected subjects to identify 10 specific and noticeable differences between the artist's rendition and the LiDAR scan. The number of changes and the time spent finding them were used to determine how well each person performed the task in the different system configurations.

Three of these four tasks were assessed based on the time to complete the task and a score for how well the task was completed. Scores were scaled against completion time by dividing the score by the time and multiplying by 100. The resulting scaled scores for each user were sorted from lowest to highest within each group. Creating ranked scaled scores allowed the time and raw score to be factored together to better characterize the differences in working in each configuration.
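The scaling and ranking just described amounts to a few lines of arithmetic. The sketch below uses hypothetical (score, time) pairs for one configuration group; the variable names are ours, not from the study.

```python
def scaled_score(score, time_s):
    """Accuracy score (0-5) scaled against completion time in seconds."""
    return score / time_s * 100

# Hypothetical (accuracy score, completion time) pairs for one group of users
group_ts = [(5, 42.0), (4, 55.5), (3, 61.0), (5, 90.0)]

# Ranked scaled scores, sorted lowest to highest as in the study
ranked = sorted(scaled_score(s, t) for s, t in group_ts)
```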

Task B was a timed activity that also accounted for how well subjects performed the task. The instructions stated that subjects needed to stay within the cavern and not pass through the cavern walls. To account for this, each subject was given an initial 30 points, and 1 point was subtracted each time the subject passed through a cavern wall, floor, or ceiling. The subject was notified each time they exited the cavern so that they could correct themselves. The time to find and match the image at each of the locations was recorded, and up to 5 points were given for accuracy in matching the image at each location. The accuracy with which subjects matched the VuePod view to the provided image at each location was judged based on zoom and angle. The same proctor assessed every subject in this study to limit subjective bias in the assignment of accuracy points. Fifty total points were possible for this task. The total score was divided by the total time to navigate the cavern and multiplied by 100 to create a scaled score. A scaled score was also computed for each individual location based on the time taken to find and match the image together with the associated score. The scaled scores were sorted from smallest to largest to create a ranked order of scaled scores for each configuration.
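The Task B scoring rules reduce to a single formula: 30 starting points, minus one per wall exit, plus up to 5 accuracy points at each scored view, scaled by total time. The function name and example values below are hypothetical, and the assumption that exactly four views are scored (locations A, B, and D plus the end location, giving the 50-point maximum) is ours.

```python
def task_b_score(wall_exits, location_accuracy, total_time_s):
    """Scaled Task B score: (30 - penalties + accuracy points) / time * 100.

    location_accuracy holds the 0-5 accuracy points awarded at each of the
    four scored views; with no penalties and perfect matches the raw total
    reaches the 50-point maximum.
    """
    points = 30 - wall_exits + sum(location_accuracy)
    return points / total_time_s * 100

# Hypothetical subject: 3 wall exits, mixed accuracy, 400 s traversal
s = task_b_score(3, [5, 4, 3, 5], 400.0)
```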


3 RESULTS

A visual assessment of the ranked and scaled scores was performed, and a Student's t-test was used to determine the statistical significance of the results. We used a probability level of 95% with 22 degrees of freedom to determine the tabulated t-value of 2.07, which was then compared with each calculated t-value. Specific results are discussed below.

3.1 Task A1: Human Foot Horizontal Orientation

The results from the human foot horizontal orientation task are shown in Figure 9.

Figure 9: Human foot horizontal orientation scaled score results
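The significance test described above is a standard two-sample Student's t-test with pooled variance: two groups of 12 subjects give 12 + 12 - 2 = 22 degrees of freedom, and |t| is compared against the tabulated value of 2.07. The sketch below uses hypothetical scaled-score lists, not the study's data.

```python
import math

def two_sample_t(a, b):
    """Student's t statistic for two independent samples (pooled variance)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical scaled scores for two configuration groups of 12 subjects each
ts_scores = [14, 18, 22, 25, 27, 30, 33, 35, 38, 41, 45, 52]
nts_scores = [12, 15, 19, 21, 24, 26, 29, 31, 34, 37, 40, 44]

t = two_sample_t(ts_scores, nts_scores)
significant = abs(t) > 2.07  # tabulated t at 95%, 22 degrees of freedom
```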

The TS group had the highest overall score of any of the four groups. Figure 9 shows that among the top four users in each group (the third tercile), TS scored the highest, followed by the TNS group. Interestingly, results from the lowest four subjects in each group (the first tercile) show the opposite, with the TS group scoring lowest. One interpretation of these results is that power users, or people who are comfortable with technology, are more likely to benefit from the addition of head tracking and stereoscopic 3D, whereas novice technology users are more likely to find the additional technologies cumbersome or otherwise limiting. Regardless, the visual assessment of these data is inconclusive regarding the relative benefit of the two technologies.

Table 1 presents the calculated t-values from the human foot horizontal orientation task. None of the calculated t-values is greater than the tabulated t-value, meaning that, at the 95% confidence level, no group's performance differed significantly from that of any other group.

Table 1: Student's t-test Calculated t-values from the Human Foot Horizontal Orientation Task

3.2 Task A2: Human Foot Vertical Orientation

The results from the vertical rotation of the human foot are shown in Figure 10.

Figure 10: Human foot vertical orientation scaled score results plot

The NTS group included the user with the overall highest score. However, only two of the four NTS users in the third tercile scored above the other three configuration groups. Within the third tercile, TS has the highest average when the top scorer from NTS is disregarded. In the first tercile, TS is the top performer, with the other three configurations having similar scores. The second tercile has a wide spread, with NTNS having the best score, followed by TS, NTS, and TNS. When considering all three terciles, TS scored best.

Table 2 presents the calculated t-values from Task A2 and shows that none of the calculated t-values is greater than the tabulated t-value. This means that, at the 95% confidence level, no group's performance differed significantly from that of any other group.

Table 2: Student's t-test Calculated t-values from the Human Foot Vertical Orientation Task

3.3 Task A3: Highway Overpass

Assessment of the numerical results and associated user comments for the bridge overpass measurement task indicated a high level of difficulty in performing the task. Most subjects were unable to correctly measure the height of the bridge overpass. The comments and proctor observations indicate that subjects were unable to place the cursor directly on the required LiDAR points, causing many of the measurements to be taken in front of the points instead of on them. This means that the measurements were essentially taken in random space. Another issue with this task was that users were unable to draw a vertical line; when subjects rotated the image, it was clear that the measurement lines were not exactly vertical, thereby causing measurements to be longer than expected. Only three people were able to obtain a measurement on the points with a distance close to the actual answer. These three subjects were members of groups TS, NTS, and TNS.

3.4 Task A4: Change Detection of the Crowne Plaza Hotel, San Diego

The average times to identify all changes in the Crowne Plaza Hotel scene are shown in Figure 11.

Figure 11: Change detection Crowne Plaza Hotel scaled score results plot

Change detection results for this task were scaled against time as with the other tasks, but with little visible difference in scores, since most users used all three minutes to complete the task. Figure 11 shows that the third tercile in each group had the same high score, but the NTS group had the highest scores of the four. The other three groups each differ in one of the four third-tercile values but are otherwise the same. The second tercile is very similar to the third, with NTS having an obvious performance advantage and the other three groups ending with similar scores.

Table 3 shows the results from the statistical analysis of the scaled scores from the Crowne Plaza Hotel change detection task. None of the calculated t-values from this task exceeds the tabulated t-value of 2.07. Therefore, we can conclude that there is not a significant difference between the groups' results.

Table 3: Student's t-test Calculated t-values from the Change Detection of the Crowne Plaza Hotel Task

3.5 Task B: Location A

The results from finding and matching location A are shown in Figure 12.

Figure 12: Location A scaled score results

The scaled score results show that TNS (tracking with no stereo) consistently performed better than the other configurations. The other three configurations were grouped together except in the third tercile, where TS (tracking with stereoscopic 3D) performed better than NTS or NTNS.

Table 4 presents the calculated t-values from location A. From the results, it can be determined that there is a statistically significant difference between TNS and NTNS, as well as between TNS and NTS. This shows that, when trying to find location A and match the accompanying image, users were able to perform better when head tracking was active and stereoscopic 3D was not than when only stereoscopic 3D was active or when both head tracking and stereoscopic 3D were turned off. It can be said with 95% confidence that the TNS configuration is better than these configurations for this task.

Table 4: Student's t-test Calculated t-values from Location A

3.6 Task B: Location B

Figure 13 shows the results from finding and matching location B.

Figure 13: Location B scaled score results (scaled score vs. user rank; series: TS, NTNS, NTS, TNS)

Figure 13 clearly shows that NTS performed better than the other configurations at this location. Table 5 presents the calculated t-values from this task. None of the calculated t-values are greater than the tabulated t-values, signifying that there is no statistical difference in user performance when finding and matching location B.

Table 5: Student's t-test Calculated t-values from Location B

While observing subjects navigate through the cavern, a problem was noticed that skews the results for this task. In many cases, while users were searching for location A, they would pass the correct location and not realize they had passed it until they stumbled upon location B. At this point they would ask to start back at the beginning to try to find location A once again. This helped to show how the different configurations helped and hindered users while trying to find and match location A. However, after subjects found location A and then set out from it to find location B, those who had missed location A and not realized it until they found location B were easily able to find location B in the second task. Figure 13 shows that in many cases NTS subjects who had difficulty finding location A were able to find location B more quickly and accurately because they already knew where to find it.

3.7 Task B: Location D

Figure 14 shows the results from finding and matching location D.

Figure 14: Location D scaled score results (scaled score vs. user rank; series: TS, NTNS, NTS, TNS)

TNS performed better across all users, with the exception that the top subjects from TS and NTS performed better than the top TNS subject. The first tercile shows that among TS, NTNS, and NTS, no one configuration outperformed the other two to claim the second-best score behind TNS. In the second tercile, NTS starts to outperform the other two and continues this trend into the third tercile. Table 6 presents the calculated t-values from this task. The calculated t-value for TNS versus NTNS is higher than the tabulated t-value, showing that there is a statistically significant difference between the two.

Table 6: Student's t-test Calculated t-values from Location D

3.8 Task B: End Location

Figure 15 shows the results from the users finding and matching the end location.

Figure 15: End location scaled score results (scaled score vs. user rank; series: TS, NTNS, NTS, TNS)

TS, NTNS, and TNS had similar scores across all 12 subjects. NTS was grouped with them except in the third tercile, where its top four users performed better than any of the other configurations. Table 7 presents the calculated t-values from finding and matching the end point. None of the calculated values is higher than the tabulated values, indicating that there was no statistically significant difference in user performance among the configurations.

Table 7: Student's t-test Calculated t-values from the End Location

Similar to finding location A, the same problem occurred when users were trying to find location D. Subjects were unsure of themselves when trying to identify location D and would proceed forward until they found the endpoint, then use the end location as a reference point to find location D. So once location D was found, many users already knew where to find the end location and would reach it very quickly and accurately. The results from finding and matching the end location therefore do not accurately represent the benefit of using one configuration over the others.
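Throughout these plots, each configuration's 12 subjects are ranked by scaled score and discussed by tercile (top, middle, and bottom groups of four). A minimal sketch of that grouping, using hypothetical scores rather than the study's data:

```python
def terciles(scores):
    """Rank one configuration's scaled scores (best first) and split
    the 12 subjects into top, middle, and bottom terciles of 4."""
    ranked = sorted(scores, reverse=True)
    third = len(ranked) // 3
    return {
        "top": ranked[:third],
        "middle": ranked[third:2 * third],
        "bottom": ranked[2 * third:],
    }

# Hypothetical scaled scores for one configuration's 12 subjects.
nts_scores = [71, 69, 85, 66, 73, 70, 88, 64, 72, 67, 90, 68]
groups = terciles(nts_scores)
print(groups["top"])  # the four best-performing subjects
```

Comparisons such as "NTS outperformed the others in the third tercile" are visual comparisons of these ranked groups across configurations, not separate statistical tests.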

3.9 Task B: Total Score

Figure 16 shows the cumulative scaled score results from navigating through the cavern while finding and matching the associated images.

Figure 16: Total score scaled score results (scaled score vs. user rank; series: TS, NTNS, NTS, TNS)

The cumulative total scores from finding and matching the four locations and images were scaled and plotted, showing that TNS scored better throughout the ranked subjects. Throughout the first tercile, TNS subjects scored significantly better than the other configurations, whose scores were all grouped together. The second tercile shows a similar trend, but the TS and NTS subjects scored better than the NTNS subjects and a division begins between them. In the third tercile, TNS scored the highest, followed by NTS, TS, and NTNS; the only exception is that the overall highest score came from a TS subject.

Table 8 presents the calculated t-values from the total scaled score. When comparing these to the tabulated t-value of 2.07, the conclusion can be made that there is a statistically significant difference between TNS and NTNS. Since the plot shows that subjects performed better using TNS, subjects with head tracking and no stereoscopic 3D will perform better than those with no head tracking and no stereoscopic 3D.

Table 8: Student's t-test Calculated t-values from the Total Score
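The pairwise comparisons in Tables 3 through 8 can be reproduced with a pooled-variance (Student's) two-sample t-test; with 12 subjects per configuration the test has 22 degrees of freedom, which gives the two-tailed 5% critical value of about 2.07 used above. The scores below are illustrative placeholders, not the study's data:

```python
import math

def students_t(a, b):
    """Absolute pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)   # sum of squared deviations, group a
    ssb = sum((x - mb) ** 2 for x in b)   # sum of squared deviations, group b
    sp2 = (ssa + ssb) / (na + nb - 2)     # pooled variance
    return abs(ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

T_CRIT = 2.07  # two-tailed, alpha = 0.05, df = 12 + 12 - 2 = 22

# Illustrative scaled scores (12 subjects per configuration).
tns  = [88, 92, 75, 81, 95, 70, 84, 90, 77, 86, 93, 79]
ntns = [65, 72, 58, 80, 61, 69, 74, 55, 68, 71, 63, 66]

t = students_t(tns, ntns)
print(f"TNS vs NTNS: t = {t:.2f}, significant at 95%: {t > T_CRIT}")
```

A calculated t above the tabulated value rejects the hypothesis of equal group means at the 95% confidence level, which is the criterion applied throughout this chapter.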


4 DISCUSSION

Visual assessments of the ranked, scaled scores and their associated tasks reveal inconclusive results regarding the relative advantages of head tracking and stereoscopic displays. The TS group scored well on both human foot orientation tasks but did not perform well on the change detection task. The NTS group performed well on the change detection task but did not receive very high scores on the human foot orientation tasks compared to the other groups. The statistical analysis using the Student's t-test approach supports the visual analysis of the scaled score plots: there is not a statistically significant difference between the configurations tested in Part A. In short, our results are at best inconclusive as to whether stereoscopic 3D with head tracking improves one's ability to view, interpret, or manipulate data. This result might be partially due to the noticeable bezels on the 3D televisions, which tend to diminish the overall sense of immersion by providing a fixed reality reference point.

For the navigation and orientation task in Part B, there are statistically significant differences between some of the configurations. Part B contained four separate tasks that measured subjects' orientation and navigation skills under the different configurations; however, two of these tasks produced results that were skewed by subjects failing to interpret the inside of the cavern. Taking into account finding locations A and D along with the total score, the conclusion can be made that TNS improves one's ability to view, interpret, and manipulate data compared to the NTNS configuration. There is also a slight difference between TNS and NTS, but it is not significant at the 95% confidence level.

5 CONCLUSION

The purpose of this study was to determine the relative significance of head tracking and stereoscopic 3D as they apply to interpreting and navigating data in immersive visualization environments. Part A of the study shows that when interpreting, navigating, or orienting data individually, there are no statistically significant differences between immersive visualization environments with different configurations of head tracking and stereoscopic 3D. Part B combined interpreting, navigating, and orienting into the same task, and the results showed significant differences between configurations. Considering the combination of activities users would actually perform in immersive visualization environments, the conclusion can be made that an environment with head tracking and no stereoscopic 3D provides the most beneficial platform for users to perform at high levels. The initial hypothesis was that head tracking with stereoscopic 3D would allow users to perform better when interpreting and navigating through data; the results instead showed that stereoscopic 3D hinders users' success.

The VuePod and its differences from other CAVEs need to be considered when analyzing the results of this study. Performing this study in a CAVE with four or more walls could produce different results regarding head tracking and stereoscopic 3D because of the way the 3D is rendered. In a multi-walled CAVE the 3D images are perceived to be projected in open space, which could be more beneficial to users than the one-walled VuePod, where 3D images appear behind the screens instead of in front of them. The VuePod also contains bezels where the 3D televisions meet one another; these bezels make it hard for the 3D images to appear realistic when the user can see a physical object in front of them. Our conclusions show that when using low-cost immersive visualization environments such as the VuePod, it is more beneficial for users to navigate and interpret data in a configuration that includes head tracking with no stereoscopic 3D.



More information

Welcome to this course on «Natural Interactive Walking on Virtual Grounds»!

Welcome to this course on «Natural Interactive Walking on Virtual Grounds»! Welcome to this course on «Natural Interactive Walking on Virtual Grounds»! The speaker is Anatole Lécuyer, senior researcher at Inria, Rennes, France; More information about him at : http://people.rennes.inria.fr/anatole.lecuyer/

More information

Augmented and Virtual Reality

Augmented and Virtual Reality CS-3120 Human-Computer Interaction Augmented and Virtual Reality Mikko Kytö 7.11.2017 From Real to Virtual [1] Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE TRANSACTIONS

More information

A Comparison of Virtual Reality Displays - Suitability, Details, Dimensions and Space

A Comparison of Virtual Reality Displays - Suitability, Details, Dimensions and Space A Comparison of Virtual Reality s - Suitability, Details, Dimensions and Space Mohd Fairuz Shiratuddin School of Construction, The University of Southern Mississippi, Hattiesburg MS 9402, mohd.shiratuddin@usm.edu

More information

MIT CSAIL Advances in Computer Vision Fall Problem Set 6: Anaglyph Camera Obscura

MIT CSAIL Advances in Computer Vision Fall Problem Set 6: Anaglyph Camera Obscura MIT CSAIL 6.869 Advances in Computer Vision Fall 2013 Problem Set 6: Anaglyph Camera Obscura Posted: Tuesday, October 8, 2013 Due: Thursday, October 17, 2013 You should submit a hard copy of your work

More information

OCULUS VR, LLC. Oculus User Guide Runtime Version Rev. 1

OCULUS VR, LLC. Oculus User Guide Runtime Version Rev. 1 OCULUS VR, LLC Oculus User Guide Runtime Version 0.4.0 Rev. 1 Date: July 23, 2014 2014 Oculus VR, LLC All rights reserved. Oculus VR, LLC Irvine, CA Except as otherwise permitted by Oculus VR, LLC, this

More information

TEAM JAKD WIICONTROL

TEAM JAKD WIICONTROL TEAM JAKD WIICONTROL Final Progress Report 4/28/2009 James Garcia, Aaron Bonebright, Kiranbir Sodia, Derek Weitzel 1. ABSTRACT The purpose of this project report is to provide feedback on the progress

More information

Virtual Reality. NBAY 6120 April 4, 2016 Donald P. Greenberg Lecture 9

Virtual Reality. NBAY 6120 April 4, 2016 Donald P. Greenberg Lecture 9 Virtual Reality NBAY 6120 April 4, 2016 Donald P. Greenberg Lecture 9 Virtual Reality A term used to describe a digitally-generated environment which can simulate the perception of PRESENCE. Note that

More information

Team Breaking Bat Architecture Design Specification. Virtual Slugger

Team Breaking Bat Architecture Design Specification. Virtual Slugger Department of Computer Science and Engineering The University of Texas at Arlington Team Breaking Bat Architecture Design Specification Virtual Slugger Team Members: Sean Gibeault Brandon Auwaerter Ehidiamen

More information

Usability Studies in Virtual and Traditional Computer Aided Design Environments for Benchmark 2 (Find and Repair Manipulation)

Usability Studies in Virtual and Traditional Computer Aided Design Environments for Benchmark 2 (Find and Repair Manipulation) Usability Studies in Virtual and Traditional Computer Aided Design Environments for Benchmark 2 (Find and Repair Manipulation) Dr. Syed Adeel Ahmed, Drexel Dr. Xavier University of Louisiana, New Orleans,

More information

CSE 190: Virtual Reality Technologies LECTURE #7: VR DISPLAYS

CSE 190: Virtual Reality Technologies LECTURE #7: VR DISPLAYS CSE 190: Virtual Reality Technologies LECTURE #7: VR DISPLAYS Announcements Homework project 2 Due tomorrow May 5 at 2pm To be demonstrated in VR lab B210 Even hour teams start at 2pm Odd hour teams start

More information

ABSTRACT. A usability study was used to measure user performance and user preferences for

ABSTRACT. A usability study was used to measure user performance and user preferences for Usability Studies In Virtual And Traditional Computer Aided Design Environments For Spatial Awareness Dr. Syed Adeel Ahmed, Xavier University of Louisiana, USA ABSTRACT A usability study was used to measure

More information

Enclosure size and the use of local and global geometric cues for reorientation

Enclosure size and the use of local and global geometric cues for reorientation Psychon Bull Rev (2012) 19:270 276 DOI 10.3758/s13423-011-0195-5 BRIEF REPORT Enclosure size and the use of local and global geometric cues for reorientation Bradley R. Sturz & Martha R. Forloines & Kent

More information

Introduction to Virtual Reality (based on a talk by Bill Mark)

Introduction to Virtual Reality (based on a talk by Bill Mark) Introduction to Virtual Reality (based on a talk by Bill Mark) I will talk about... Why do we want Virtual Reality? What is needed for a VR system? Examples of VR systems Research problems in VR Most Computers

More information

Navigating the Virtual Environment Using Microsoft Kinect

Navigating the Virtual Environment Using Microsoft Kinect CS352 HCI Project Final Report Navigating the Virtual Environment Using Microsoft Kinect Xiaochen Yang Lichuan Pan Honor Code We, Xiaochen Yang and Lichuan Pan, pledge our honor that we have neither given

More information

Exploring Virtual Reality in Construction, Visualization and Building Performance Analysis

Exploring Virtual Reality in Construction, Visualization and Building Performance Analysis Exploring Virtual Reality in Construction, Visualization and Building Performance Analysis M. Al-Adhami a, L. Ma a and S. Wu a a School of Art, Design and Architecture, University of Huddersfield, UK E-mail:

More information

AN ORIENTATION EXPERIMENT USING AUDITORY ARTIFICIAL HORIZON

AN ORIENTATION EXPERIMENT USING AUDITORY ARTIFICIAL HORIZON Proceedings of ICAD -Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July -9, AN ORIENTATION EXPERIMENT USING AUDITORY ARTIFICIAL HORIZON Matti Gröhn CSC - Scientific

More information

Universidade de Aveiro Departamento de Electrónica, Telecomunicações e Informática. Output Devices - I

Universidade de Aveiro Departamento de Electrónica, Telecomunicações e Informática. Output Devices - I Universidade de Aveiro Departamento de Electrónica, Telecomunicações e Informática Output Devices - I Realidade Virtual e Aumentada 2017/2018 Beatriz Sousa Santos What is Virtual Reality? A high-end user

More information

Virtual Reality. Lecture #11 NBA 6120 Donald P. Greenberg September 30, 2015

Virtual Reality. Lecture #11 NBA 6120 Donald P. Greenberg September 30, 2015 Virtual Reality Lecture #11 NBA 6120 Donald P. Greenberg September 30, 2015 Virtual Reality What is Virtual Reality? Virtual Reality A term used to describe a computer generated environment which can simulate

More information

Spatial Judgments from Different Vantage Points: A Different Perspective

Spatial Judgments from Different Vantage Points: A Different Perspective Spatial Judgments from Different Vantage Points: A Different Perspective Erik Prytz, Mark Scerbo and Kennedy Rebecca The self-archived postprint version of this journal article is available at Linköping

More information

CSC 2524, Fall 2018 Graphics, Interaction and Perception in Augmented and Virtual Reality AR/VR

CSC 2524, Fall 2018 Graphics, Interaction and Perception in Augmented and Virtual Reality AR/VR CSC 2524, Fall 2018 Graphics, Interaction and Perception in Augmented and Virtual Reality AR/VR Karan Singh Inspired and adapted from material by Mark Billinghurst What is this course about? Fundamentals

More information

TRAVEL IN SMILE : A STUDY OF TWO IMMERSIVE MOTION CONTROL TECHNIQUES

TRAVEL IN SMILE : A STUDY OF TWO IMMERSIVE MOTION CONTROL TECHNIQUES IADIS International Conference Computer Graphics and Visualization 27 TRAVEL IN SMILE : A STUDY OF TWO IMMERSIVE MOTION CONTROL TECHNIQUES Nicoletta Adamo-Villani Purdue University, Department of Computer

More information

Exploring the Benefits of Immersion in Abstract Information Visualization

Exploring the Benefits of Immersion in Abstract Information Visualization Exploring the Benefits of Immersion in Abstract Information Visualization Dheva Raja, Doug A. Bowman, John Lucas, Chris North Virginia Tech Department of Computer Science Blacksburg, VA 24061 {draja, bowman,

More information

Physical Presence in Virtual Worlds using PhysX

Physical Presence in Virtual Worlds using PhysX Physical Presence in Virtual Worlds using PhysX One of the biggest problems with interactive applications is how to suck the user into the experience, suspending their sense of disbelief so that they are

More information

A Kinect-based 3D hand-gesture interface for 3D databases

A Kinect-based 3D hand-gesture interface for 3D databases A Kinect-based 3D hand-gesture interface for 3D databases Abstract. The use of natural interfaces improves significantly aspects related to human-computer interaction and consequently the productivity

More information

Intro to Virtual Reality (Cont)

Intro to Virtual Reality (Cont) Lecture 37: Intro to Virtual Reality (Cont) Computer Graphics and Imaging UC Berkeley CS184/284A Overview of VR Topics Areas we will discuss over next few lectures VR Displays VR Rendering VR Imaging CS184/284A

More information

REPORT ON THE CURRENT STATE OF FOR DESIGN. XL: Experiments in Landscape and Urbanism

REPORT ON THE CURRENT STATE OF FOR DESIGN. XL: Experiments in Landscape and Urbanism REPORT ON THE CURRENT STATE OF FOR DESIGN XL: Experiments in Landscape and Urbanism This report was produced by XL: Experiments in Landscape and Urbanism, SWA Group s innovation lab. It began as an internal

More information

Tele-Nursing System with Realistic Sensations using Virtual Locomotion Interface

Tele-Nursing System with Realistic Sensations using Virtual Locomotion Interface 6th ERCIM Workshop "User Interfaces for All" Tele-Nursing System with Realistic Sensations using Virtual Locomotion Interface Tsutomu MIYASATO ATR Media Integration & Communications 2-2-2 Hikaridai, Seika-cho,

More information

VR based HCI Techniques & Application. November 29, 2002

VR based HCI Techniques & Application. November 29, 2002 VR based HCI Techniques & Application November 29, 2002 stefan.seipel@hci.uu.se What is Virtual Reality? Coates (1992): Virtual Reality is electronic simulations of environments experienced via head mounted

More information

Audio Output Devices for Head Mounted Display Devices

Audio Output Devices for Head Mounted Display Devices Technical Disclosure Commons Defensive Publications Series February 16, 2018 Audio Output Devices for Head Mounted Display Devices Leonardo Kusumo Andrew Nartker Stephen Schooley Follow this and additional

More information

A Study on Interaction of Gaze Pointer-Based User Interface in Mobile Virtual Reality Environment

A Study on Interaction of Gaze Pointer-Based User Interface in Mobile Virtual Reality Environment S S symmetry Article A Study on Interaction of Gaze Pointer-Based User Interface in Mobile Virtual Reality Environment Mingyu Kim, Jiwon Lee ID, Changyu Jeon and Jinmo Kim * ID Department of Software,

More information

Chapter 9. Conclusions. 9.1 Summary Perceived distances derived from optic ow

Chapter 9. Conclusions. 9.1 Summary Perceived distances derived from optic ow Chapter 9 Conclusions 9.1 Summary For successful navigation it is essential to be aware of one's own movement direction as well as of the distance travelled. When we walk around in our daily life, we get

More information

A Study of the Effects of Immersion on Short-term Spatial Memory

A Study of the Effects of Immersion on Short-term Spatial Memory Purdue University Purdue e-pubs College of Technology Masters Theses College of Technology Theses and Projects 8-6-2010 A Study of the Effects of Immersion on Short-term Spatial Memory Eric A. Johnson

More information

CSC 170 Introduction to Computers and Their Applications. Lecture #3 Digital Graphics and Video Basics. Bitmap Basics

CSC 170 Introduction to Computers and Their Applications. Lecture #3 Digital Graphics and Video Basics. Bitmap Basics CSC 170 Introduction to Computers and Their Applications Lecture #3 Digital Graphics and Video Basics Bitmap Basics As digital devices gained the ability to display images, two types of computer graphics

More information

Air-filled type Immersive Projection Display

Air-filled type Immersive Projection Display Air-filled type Immersive Projection Display Wataru HASHIMOTO Faculty of Information Science and Technology, Osaka Institute of Technology, 1-79-1, Kitayama, Hirakata, Osaka 573-0196, Japan whashimo@is.oit.ac.jp

More information

Session T3G A Comparative Study of Virtual Reality Displays for Construction Education

Session T3G A Comparative Study of Virtual Reality Displays for Construction Education Session TG A Comparative Study of Virtual Reality Displays for Construction Education Abstract - In many construction building systems courses, two-dimensional (D) diagrams are used in text books and by

More information

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)

Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception

More information

HUMAN MOVEMENT INSTRUCTION SYSTEM THAT UTILIZES AVATAR OVERLAYS USING STEREOSCOPIC IMAGES

HUMAN MOVEMENT INSTRUCTION SYSTEM THAT UTILIZES AVATAR OVERLAYS USING STEREOSCOPIC IMAGES HUMAN MOVEMENT INSTRUCTION SYSTEM THAT UTILIZES AVATAR OVERLAYS USING STEREOSCOPIC IMAGES Masayuki Ihara Yoshihiro Shimada Kenichi Kida Shinichi Shiwa Satoshi Ishibashi Takeshi Mizumori NTT Cyber Space

More information

Govt. Engineering College Jhalawar Model Question Paper Subject- Remote Sensing & GIS

Govt. Engineering College Jhalawar Model Question Paper Subject- Remote Sensing & GIS Govt. Engineering College Jhalawar Model Question Paper Subject- Remote Sensing & GIS Time: Max. Marks: Q1. What is remote Sensing? Explain the basic components of a Remote Sensing system. Q2. What is

More information

Proposal for the Object Oriented Display : The Design and Implementation of the MEDIA 3

Proposal for the Object Oriented Display : The Design and Implementation of the MEDIA 3 Proposal for the Object Oriented Display : The Design and Implementation of the MEDIA 3 Naoki KAWAKAMI, Masahiko INAMI, Taro MAEDA, and Susumu TACHI Faculty of Engineering, University of Tokyo 7-3- Hongo,

More information

6.869 Advances in Computer Vision Spring 2010, A. Torralba

6.869 Advances in Computer Vision Spring 2010, A. Torralba 6.869 Advances in Computer Vision Spring 2010, A. Torralba Due date: Wednesday, Feb 17, 2010 Problem set 1 You need to submit a report with brief descriptions of what you did. The most important part is

More information

UMI3D Unified Model for Interaction in 3D. White Paper

UMI3D Unified Model for Interaction in 3D. White Paper UMI3D Unified Model for Interaction in 3D White Paper 30/04/2018 Introduction 2 The objectives of the UMI3D project are to simplify the collaboration between multiple and potentially asymmetrical devices

More information

Virtual Reality Technology and Convergence. NBA 6120 February 14, 2018 Donald P. Greenberg Lecture 7

Virtual Reality Technology and Convergence. NBA 6120 February 14, 2018 Donald P. Greenberg Lecture 7 Virtual Reality Technology and Convergence NBA 6120 February 14, 2018 Donald P. Greenberg Lecture 7 Virtual Reality A term used to describe a digitally-generated environment which can simulate the perception

More information