Overview of Virtual Reality in Science and Engineering


M. Kasim A. Jalil
Department of Mechanical & Aerospace Engineering
University at Buffalo

Virtual Reality (VR) involves the development of a computer-generated virtual environment intended to simulate the real world. It is an emerging computer visualization technology that allows users to experience a strong sense of reality in a computer-generated environment. Engineers have begun to realize the usefulness of VR as an innovative tool to visualize, manipulate, and interact with complex three-dimensional (3-D) graphical data that are difficult or even impossible to adequately understand in traditional two-dimensional (2-D) drawings or even 3-D solid models. This chapter highlights recent developments and applications of VR in engineering and the sciences.

2.1 Definition of Virtual Reality (VR)

The term Virtual Reality (VR) is used by many different people with as many different meanings. To some, VR is a specific collection of technologies (i.e. a head-mounted display, a glove input device and an audio device). Others stretch the term to include movies, games, entertainment and imagination. Virtual Reality is a way for humans to visualize, manipulate and interact with extremely complex data in a variety of immersive environments. A computer is used to generate visual, auditory or other sensory outputs to the user. The data may encompass a CAD model, a scientific simulation, or a view into a database. The user can interact with the virtual world and directly manipulate objects within it. Some worlds are animated by other processes such as physical simulations or simple animation scripts. Interaction in an immersive environment is perhaps the most intriguing part of virtual reality. In conventional human-computer interaction, humans remain "separated" from the computer environment. In VR, humans are totally immersed in the visualization-based world.
They have the ability to manipulate and interact with the objects analyzed just as they do in the real world. Virtual Reality is often referred to by other terms, such as Augmented Reality, Synthetic Environments, Cyberspace, Artificial Reality, Simulator Technology and Immersive Environments. All of these terms actually refer to the same thing - Virtual Reality (VR). VR remains the term most used by the media.

2.2 History of VR

In order to understand where today's technology stands in the field of high-end visualization, it is helpful to look at the history of Virtual Reality in both fiction and reality. Surprisingly, VR is closely linked to the development of calculating machines as well as the development of mechanical devices (such as automata). The concept of VR can be traced back to the automata of the ancient Greeks [1]. Archytas of Tarentum (circa 400 BC) was reported to have developed a pigeon whose movements he controlled remotely using a jet of steam or compressed air. In China, at about the same time, inventors had created an entire mechanical orchestra that could be controlled by operators sitting yards from the instruments [2]. Calculating machines such as Charles Babbage's Analytical Engine were attempts to simulate reality in numeric form and then manipulate that reality to learn the results of different forces [2]. During World War II, the first computers were developed to decipher intelligence as well as to assist in missile research. Rocket trajectories, airflow patterns, and other characteristics of rocket engines were simulated on computers before prototypes were actually built. As with most technologies, fiction preceded fact in VR. In his 1932 novel Brave New World [3], Aldous Huxley described "feelies", movies which allowed viewers to feel the action taking place. Isaac Asimov [4] explored the subject of virtual environments in his Robot series, which featured positronic brains that operated in virtual worlds. Arthur C. Clarke [5], in many of his books, described a cyberspace created by orbiting satellites. The first fictional description of a true VR concept may have come from William Gibson, an American who moved to Canada during the late 1960s. VR (or "the matrix" or "cyberspace") plays an important role in Gibson's trilogy of 1980s novels: Neuromancer [6], Count Zero [7], and Mona Lisa Overdrive [8]. Possibly the best-known fictional VR system today is the Holodeck from the TV series Star Trek: The Next Generation [9]. In this show, the Holodeck is controlled by a computer that translates voiced commands into various scenarios.
These scenarios can be peopled with lifelike characters that seem to have volition. In fact, a computer bug occasionally causes the characters to go awry, threatening the Holodeck user. The Holodeck requires no special gear such as goggles, earphones, or tracking devices connected to the user's body. Rather, all the mechanisms are hidden within the room, providing what may be called "unencumbered VR" - a system not yet available in reality. The roots of non-fictional VR, in a form that might be recognized as VR today, may be traced back to the early 1940s. An entrepreneur by the name of Edwin Link [10] joined forces with Admiral Luis de Florez to develop flight simulators in order to reduce pilot training time and costs. The early simulators were complex mechanical contraptions, and the illusion of flight was relatively poor in the early models. However, the increasing power of computers and imaging technology has now made these simulators very realistic. Today, the mocked-up cockpit turns and rolls on a moving platform, almost exactly simulating what would occur in an actual plane. The original simulators required that the user sit in front of a computer or TV screen, which normally represented either a window or a set of gauges [11]. The room was built to look like the equipment or vehicle the user was being trained on - a cockpit, bridge or power plant control room.

When a VR system requires that the user view the virtual environment through a screen, it is called Desktop VR or a Window on a World (WoW). Its origin can be traced back to 1965, when Ivan Sutherland published a paper called "The Ultimate Display" [12,13], in which he described the computer display screen as "a window through which one beholds a virtual world". He challenged scientists to create images that would justify the window analogy. While WoW systems may represent an earlier form of VR, they are still considered an important part of the VR family. In newer VR systems, more of the environment exists as a function of software. This environment is displayed via goggles and complemented by force-feedback joysticks or other sensing devices. An advantage of these systems is that, without the requirement to build large rooms or mock cockpits, they are less expensive to build and maintain. Another level of representation is the video mapping approach, which merges an image of the user's silhouette with an on-screen two-dimensional computer graphic. In order to accomplish this, the user has to wear stereographic shutter glasses that provide input to the computer as to his or her physical orientation. Artificial Reality and Artificial Reality II, both by Myron Krueger [14], described such systems. A few TV game shows have used variations of image mapping techniques. For example, Nick Arcade (on cable channel Nickelodeon) places young contestants into video games. Immersive VR systems, when and if they can be perfected, represent the final step of the VR technology ladder as we know it today. In theory, these systems should, from the user's perspective, replicate reality exactly. In other words, the user should not be able to discern whether the world he or she is interacting with is real or virtual.

2.3 Why Virtual Reality?
There is a growing body of research that provides a strong rationale for VR as the next human-computer interface. As an interface metaphor, VR clearly has tremendous potential. This has already been demonstrated in industry, commerce, and the leisure communities. A continuing issue, however, is how VR will gain acceptance. To answer this, one must consider the advantages that VR can offer, specifically in 3-D perception and communication. Consider the following points.

3-D Perception: The shape of objects and their interrelationships remain ambiguous without true three-dimensional representation. The perspective projection onto the flat surface of a normal computer screen can be unclear. VR removes this ambiguity, and therefore addresses a fundamental objective of the design process. Of particular importance is the sense of scale that can only be conveyed by immersing the analyst or designer in the "design" itself.

Communication: VR promises to revolutionize the use of computers for cooperative work. Natural human interaction is not easily achievable in two dimensions. The telephone or videophone is effective but limited. When participants share a common location, they have the freedom to communicate ideas more easily and naturally (e.g. engineers from Japan and Germany simultaneously discussing a model of a car during the design process). When multiple participants are involved, the VR environment is said to be a Collaborative Virtual Environment (CVE). This concept will be explained in greater detail later in this dissertation.

2.4 Types of VR System

Not all Virtual Reality systems require the gloves and goggles seen in most technological amusement centers and scientific magazines. A major distinction between VR systems pertains to the mode with which they are interfaced to their users. The following sections describe some of the common modes used in VR systems, including Video Mapping, Immersive Systems and Telepresence.

2.4.1 Video Mapping

This type of VR is best demonstrated in the highly recognized computer game DOOM. The game requires a player to control a battle-hardened warrior who must fight through a virtual world of hell. In this hell, there are monsters that must be killed, lest they kill the player. The interface between the user and the virtual world of monsters is a two-dimensional display (i.e. a monitor), which places the user into the warrior's body. The game provides stereo sound that gives the player an idea of the distance and direction of any nearby roaring monsters. By using the keyboard, the player can make the warrior run, turn, shoot or pick up objects. Everything shown on the screen seems real (i.e. the player moves about the corridors, monsters die in response to the user's actions, etc.), but it is all based on a set of very complex data stored in the computer. It is these visual, auditory, and force-feedback cues, together with the independent monsters intelligently implementing different actions, that put the game in the VR category.
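A window-on-a-world display like this ultimately rests on the perspective projection of 3-D world coordinates onto the 2-D screen, which discards absolute depth. A minimal sketch of that projection (the function name and focal-length value are illustrative assumptions, not taken from any particular system):

```python
# Minimal perspective projection: maps a 3-D point in camera space
# onto a 2-D image plane, the basis of any Window-on-a-World display.
# Names and the focal-length default are illustrative assumptions.

def project(point, focal_length=1.0):
    """Project a 3-D point (x, y, z), with z > 0, onto the image plane."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# Two points with the same x and y but different depths land at
# different screen positions - the depth cue a flat drawing loses.
near = project((1.0, 1.0, 2.0))   # (0.5, 0.5)
far = project((1.0, 1.0, 4.0))    # (0.25, 0.25)
```

Because many distinct 3-D points map to the same screen position, a single such projection is inherently ambiguous, which is exactly the limitation of 2-D displays that immersive VR is meant to remove.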
A variation of the WoW approach merges a video input of the user's silhouette with a 2-D computer graphic.

2.4.2 Immersive Systems

The most advanced VR systems completely immerse the user's personal viewpoint inside the virtual world. These "immersive" VR systems are often equipped with a Head-Mounted Display (HMD), a BOOM, or other types of VR peripherals. An HMD is a helmet or facemask that holds the visual and auditory displays. The helmet may be free-ranging, tethered, or attached to some sort of boom armature. A notable variation of the immersive system uses multiple large projection displays to create a 'cave' or room in which the viewers stand. An early implementation was called "The Closet Cathedral", for the impression of an immense environment within a small physical space. The CAVE™ system [15], developed at the University of Illinois at Chicago, is one of the most recent and famous such VR spaces.
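Whether delivered through an HMD or a CAVE wall, immersive displays depend on rendering a separate view for each eye, with the two cameras offset by roughly the interpupillary distance. A minimal sketch of that stereo-pair offset (the names and the IPD value are illustrative assumptions, not any particular headset's API):

```python
# Sketch of the stereo-pair principle behind an HMD: each eye gets its
# own virtual camera, displaced by half the interpupillary distance.
# Names and the IPD value below are illustrative assumptions.

IPD = 0.064  # a typical interpupillary distance in metres (assumed)

def eye_positions(head_position):
    """Return (left, right) camera positions for a head at (x, y, z)."""
    x, y, z = head_position
    return ((x - IPD / 2, y, z), (x + IPD / 2, y, z))

left, right = eye_positions((0.0, 1.7, 0.0))
# The small horizontal offset between the two rendered views is what
# the visual system fuses into a sense of depth.
```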

2.4.3 Telepresence

Telepresence is a variation on visualizing complete computer-generated worlds. It is a technology that links remote sensors in the real world with the senses of a human operator. This technology has been used in medicine (telemedicine), robotics (telerobotics), firefighting, underwater exploration and space exploration, among others. One of the major uses of the technology is in medicine. Surgeons use very small instruments on cables to perform surgery without making large incisions in their patients. The instruments have a small video camera at one end. This technology potentially enables future surgery to be performed remotely, and substantial research is ongoing in this area. Robots equipped with telepresence systems have already been used in deep-sea and volcanic exploration. NASA is currently researching the use of telerobotics for space exploration. Therefore, while telepresence does not create a virtual world for the operator, it does give the user enough visual and audio information to make him feel as though he were virtually present.

2.4.4 Mixed Reality - Virtual Reality (VR) versus Augmented Reality (AR)

Augmented Reality (AR) is a variation of Virtual Environments (VE), or Virtual Reality (VR). As previously described, VR technologies completely immerse a user inside a synthetic environment; while immersed, the user cannot see the real world around him. In contrast, AR allows the user to see the real world. AR therefore supplements reality rather than completely replacing it. AR can be thought of as the "middle ground" between VR, which is completely synthetic, and telepresence, which is completely real [16]. Augmented Reality is a growing area of virtual reality research. The world around us provides a wealth of information that is difficult to duplicate in a computer. An augmented reality system generates a composite view for the user.
It is a combination of the real scene viewed by the user and a virtual scene generated by the computer that augments the scene with additional information. The ultimate goal is to create a system such that the user cannot tell the difference between the real world and the virtual augmentation of it. For example, in a medical application, such a system might merge and correctly register data from a pre-operative imaging study onto a patient's head. Providing this view to a surgeon in the operating theater would enhance the surgeon's performance and possibly eliminate the need for any other calibration fixtures during the procedure. A typical application of AR is telesurgery, in which computer-generated inputs are merged with telepresence inputs and the user's view of the real world. For example, a surgeon's view of a brain surgery might be overlaid with images from earlier CAT scans, coupled with real-time ultrasound. In this chapter, the term VR is used for both AR and VR concepts.

2.5 Scientific Applications of VR

Until the late 1980s, research in VR was limited to academia, the government, the military, and some large companies, due to the high cost of computer equipment. AutoDesk™ presented an inexpensive PC-based system in 1989 that reduced the price of VR hardware significantly. However, the graphics speed and the quality of the rendering were limited, since it served as a starter system for people who wanted to explore VR technology [17]. In the late 1980s and early 1990s, many commercial companies emerged and contributed tremendously to the development of VR technology (e.g. Virtual Research, Ascension, Fakespace, StereoGraphics and Sense8). Many major industrial companies, such as Boeing [18], Caterpillar [19], Chrysler, Ford [20], General Motors [22] and John Deere [21], have made substantial investments in virtual reality techniques in both research and development. Many academic and government research centers around the world, especially in Europe and Asia, have also become serious about VR developments and applications.

2.5.1 Scientific Data Visualization

One of the most astounding applications of VR is in the area of engineering testing. In this field, the actual testing environment is simulated in a virtual world to enable engineers to interactively visualize, perceive and explore the complicated structure of engineering objects. Applications have ranged from casting processes to wind-tunnel testing and structural analysis. VR technology has been extensively applied to scientific data visualization. The development of superior computer graphics capabilities has enabled scientists to develop better visualization models, interact visually, and improve their perception of new discovery processes. The Virtual Wind Tunnel project [22] was developed at NASA Ames for exploring numerically generated, three-dimensional, unsteady flow fields. A boom-mounted, six-degree-of-freedom, head-position-sensitive stereo CRT system is used for viewing.
A hand-position-sensitive glove controller is used for injecting various tracers (i.e. "smoke") into the virtual flow field. A multiprocessor graphics workstation is used for computation and rendering. Another application of VR in data visualization is Computational Steering (CS) [23]. CS refers to the interactive control of a running simulation during the execution process. Scientists can adjust a set of program parameters and immediately react to the current results without having to wait until the end of the execution process. Several computational steering systems have been developed. CAVEStudy [24] is a computational steering system that allows scientists to steer a program from a virtual reality system without requiring any modification to the simulation program. It enables interactive and immersive analysis of simulations running on remote computers. Another system is SCIRun [25], developed by Christopher R. Johnson at the University of Utah. It is a problem-solving environment for computational science that provides an integrated framework to construct, steer, and study large applications. Other computational steering systems include VASE [26], Magellan [27], CUMULVS [28], VIPER [29], and CSE [30]. All of these systems provide varying levels of a user-controlled visualization interface to enable steering of the analysis. A variation on Computational Steering is the Visual Design Steering (VDS) paradigm [31], which is less stringent in its application criteria than CS. VDS is more applicable to design, while CS is more applicable to analysis.

2.5.2 Virtual Prototyping and Modeling

Imagine that a group of designers is working on the model of a complex device for their clients. The designers and clients desire a joint detailed design review, even though they are physically separated. If each of them had a conference room equipped with a virtual reality display, this could be accomplished. The physical prototype that the designers have mocked up could be imaged and displayed in the client's conference room in 3-D. The clients could walk around the display, looking at different aspects of it. Both groups could interact with the design and with each other simultaneously. It is this type of distributed collaborative interaction that is the goal of this research, with a particular emphasis on scalability of computer platforms. An increasing number of companies use VR technology to give customers a better understanding of their future products. One example is Lavalley Lumber [32], which supplied its customers with VR images of kitchen designs. Electrolux has developed a VR marketing tool for its kitchen appliance range. Matsushita developed a multi-user virtual home to market and sell its range of domestic electrical goods [33,34], demonstrating an advanced interactive design experience with a virtual reality architectural walk-through of a complete two-story Japanese house. This includes fully textured, detailed bedrooms, kitchen, bathrooms, and interconnecting stairs. With a head-mounted display and a 3-D mouse, users can walk around the rooms and up the stairs to the different stories of the house. Airbus, a European aircraft manufacturer, is using a virtual prototype of its new airplane interior to market future products.
More companies are investing in VR technology every day.

2.5.3 Robotics and Manufacturing

A telerobot is a robotic system controlled by a human operator at a remote control station. With the development of VR technology in robotics, the control and manipulation of robotic structures have become simple and easy, compared with the traditional method of tedious programming. The operator can immerse himself in the robot's task by using VR peripherals (such as a glove and an HMD), and the robotic arm can be manipulated directly using VR devices. This technology has been widely used in telesurgery [35] (remote surgery using VR technology), space exploration (Mars missions) and underwater exploration. In the domain of robotics and telerobotics, a virtual display can assist the user of the system [36,37]. A telerobotic operator uses a visual image of the remote workspace to guide the robot. Annotation of the view is also useful, just as it is when the scene is in front of the operator. Since the view of the remote scene is often monoscopic, augmentation with wireframe drawings of structures in the view can facilitate visualization of the remote 3-D geometry. Before an operator attempts a motion, it can be practiced on a virtual robot that is visualized as an augmentation to the real scene. The operator can decide to proceed with the motion after seeing the results, or can decide to modify it. Once the robot motion is determined, it can then be executed directly. In a telerobotics application, this would eliminate any oscillations caused by long delays to the remote site [38]. Another application can be seen in the maintenance field. When a maintenance technician must learn how to use a new or unfamiliar piece of equipment, he could put on a virtual reality display in which the image of the equipment is presented virtually, with annotations and information pertinent to the repair. For example, the location of fasteners and attachment hardware that must be removed might be highlighted. Then, the inside view of the machine would highlight the boards that need to be replaced [39,40,41]. Training in the maintenance field is an obvious and important application of VR. Consider some additional maintenance applications. The military has developed a wireless vest, worn by personnel, that is attached to an optical see-through display [42]. The wireless connection allows the soldier to access repair manuals and images of equipment. Future versions are expected to register those images on a live scene and provide animation to show any procedures that must be performed. Boeing researchers are developing a virtual reality display to replace the large work frames used for making wiring harnesses for their aircraft [43,44]. Using this experimental system, technicians are guided by the virtual display, which shows the routing of the cables on a generic frame used for all harnesses. The virtual display thus allows a single fixture to be used for making multiple harnesses. VR technology has also been applied in manufacturing planning, such as layout planning. This technique has the potential to replace the traditional and time-consuming approach of physical modeling.
Through the use of a VR model, planners and engineers can walk through a virtual factory, move virtual machines to any desired location, and simulate the intended manufacturing process. An example of this type of research is carried out at the VR Lab at the University at Buffalo [45,46]. The use of VR has also been extended to process simulation, such as virtual tool-path simulation and virtual milling and machining. The main advantages of using VR in manufacturing are reduced cost and time savings.

2.5.4 Tele-Immersion

Tele-Immersion enables users at geographically distributed sites to collaborate in real time in a shared simulated environment as if they were in the same physical room. It is the combination of networking and media technologies to enhance collaborative environments. In a tele-immersive environment, computers recognize the presence and movements of individuals and objects, track those individuals and images, and then permit them to be projected in realistic, multiple, geographically distributed immersive environments on stereo-immersive surfaces.

Tele-immersive environments facilitate not only interaction between the users themselves but also between users and computer-generated models. This requires expanding the boundaries of computer vision, tracking, display, and rendering technologies to achieve a compelling experience, and lays the groundwork for a higher degree of their inclusion into the entire system [47]. This type of communication is necessary when complex 3-D numerical or graphical models need to be assessed and analyzed between different remote stations. The technological challenges include incorporation of measured, on-site data into the computational model, real-time transmission of the tremendous amount of scientific data from the computational model to the virtual environments, and management of the collaborative interaction between the two stations. A great deal of research work in this area has been done at Argonne National Laboratory, involving CAVE™-to-CAVE™ communication [48]. Research on the application of telepresence in games and entertainment has also been conducted at the University of Geneva (headed by Dr. Thalmann); projects include Virtual Tennis [49], Virtual Chess, and the Real-Virtual Human [50]. The National Tele-immersion Initiative (NTII) [51] is a project carried out in collaboration with some of the major universities in the United States (Brown University, the University of North Carolina at Chapel Hill and the University of Pennsylvania in Philadelphia) to build a national tele-immersive research infrastructure and to actively participate in the creation of the key technologies of tele-immersion.

2.5.5 Training and Education

The capability of simulating an actual working environment using VR has yielded another advantage, for training and education. The development of the Visible Human, which displays 3-D anatomical details of a male and a female human body, together with surgery simulators, has contributed significantly to medical training [52].
Other examples include the virtual tank [53], the virtual submarine (VESUB) [54], the virtual workbench [55] and the virtual classroom [56]. The military has been using simulation for pilots for years. Displays in the cockpit present information to the pilot on the windshield or on the visor of his flight helmet; this is a form of virtual reality display. SIMNET [57], a distributed war-games simulation system, also embraces virtual reality technology. By equipping military personnel with helmet-mounted visor displays or a special-purpose rangefinder, the activities of other units participating in the exercise can be imaged. While looking at the horizon, for example, the display-equipped soldier could see a helicopter rising above the tree line [58]. This helicopter might actually be controlled by another participant in the simulation. In wartime, the display of a real battlefield scene could be augmented with annotation information or highlighting to emphasize hidden enemy units.

2.5.6 Medical and Therapy

Because imaging technology is so pervasive throughout the medical field, it is not surprising that this domain is viewed as one of the more important applications for virtual reality systems. Most medical applications deal with image-guided surgery. Pre-operative imaging studies (e.g. CT or MRI scans) of the patient provide the surgeon with the necessary views of the patient's internal anatomy, from which the surgery can be planned. Visualization of the path through the anatomy to the affected area (where, for example, a tumor must be removed) is done by first creating a 3-D model from the multiple views and slices of the pre-operative study. VR can also be applied so that the surgical team can see the CT or MRI data correctly registered on the patient in the operating theater while the procedure is progressing. Being able to accurately register the images at this point enhances the performance of the surgical team and eliminates the need for painful and cumbersome stereotactic frames [59]. Just this year, the most complex and longest recorded surgery ever performed (96 hours), to separate twins conjoined at the head, was preceded by months of training and planning using VR technology. The surgeons found VR an invaluable tool for achieving success in this unprecedented surgery. Medicine has definitely become a computer-integrated, high-technology industry. VR and telepresence may have much to offer with their human-computer interfaces, 3-D visualization, and modeling tools. In information visualization, medical professionals have access to a volume of information in data formats including MRI (magnetic resonance imaging), CAT (computerized axial tomography), ultrasound, and X-rays. VR's graphics and output peripherals allow users to view large amounts of information by navigating through 3-D models. Telepresence techniques can allow surgeons to conduct robotic surgery from anywhere in the world, thereby offering increased access to specialists.
In the area of training, simulations of various surgeries have been developed using VR technology. Recent research developments include the use of a haptic system in a surgery simulator [60,61]. VR has also been used in therapy to treat patients suffering from psychological disturbances such as schizophrenia and severe trauma. Psychological phobias, such as fear of heights (acrophobia), have been treated in virtual environments [62]. Chugh et al. [63] at the University at Buffalo developed an object-oriented approach to physically based modeling of human tissues for virtual reality applications. In this research work, an approach termed the Atomic Unit Method was developed to provide a real-time, physically accurate, volumetric virtual reality simulation of human tissues using haptic (force feedback) devices.

2.5.7 Entertainment

The entertainment sector is one of the major users of VR technology today. A substantial amount of research has been directed towards human modeling for animation purposes. The aim of this research is to make human modeling and deformation capabilities available to the general engineering and entertainment community without the need for physical prototypes or scanning devices. Another area of research pertains to the development of a new generation of interactive entertainment simulators that will provide immersive virtual experiences to their users. A simple form of virtual reality has been used in the entertainment and news business for quite some time. Whenever one watches the evening weather report, the meteorologist is shown standing in front of changing weather maps. In the studio, however, he is actually standing in front of a blue or green screen. This real image is supplemented with computer-generated maps using a technique called chroma keying. It is also possible to create a virtual studio environment so that the actors appear to be positioned in a studio with computer-generated decoration [64]. Movie special effects use digital compositing to create illusions in a similar manner [65]. Strictly speaking, with current technology, this may not be considered virtual reality because it is not generated in real time: most special effects are created off-line, frame by frame, with a substantial amount of user interaction and computer graphics rendering. However, some work is progressing on computer analysis of live-action images to determine the camera parameters and then use these to drive the generation of the virtual graphics objects to be merged [66]. Princeton Electronic Billboard has developed a virtual reality system that allows broadcasters to insert advertisements into specific areas of the broadcast [67]. For example, while broadcasting a baseball game, this system can place an advertisement in the image so that it appears on the outfield wall of the stadium. The electronic billboard requires calibration to the stadium, by taking images from typical camera angles and zoom settings, in order to build a map of the stadium including the locations in the images where advertisements will be inserted.
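The chroma-key compositing used in the weather-report example above can be sketched in a few lines: wherever a studio pixel is close to the key colour, the corresponding pixel of the computer-generated image is substituted. This is an illustrative toy (single rows of pixels, an assumed key colour and tolerance), not a broadcast implementation:

```python
# Minimal chroma-key ("green screen") compositing sketch. Plain lists
# stand in for image buffers; a real system would operate on video
# frames. The key colour and tolerance below are assumptions.

KEY = (0, 255, 0)  # pure green key colour (assumed)

def close_to_key(pixel, tolerance=60):
    """True if an RGB pixel is within the tolerance of the key colour."""
    return all(abs(c - k) <= tolerance for c, k in zip(pixel, KEY))

def chroma_key(foreground, background):
    """Composite two equal-sized rows of RGB pixels."""
    return [bg if close_to_key(fg) else fg
            for fg, bg in zip(foreground, background)]

studio = [(200, 180, 170), (10, 250, 20)]      # person, green screen
weather_map = [(0, 0, 255), (255, 255, 255)]   # computer-generated map
composite = chroma_key(studio, weather_map)
# -> [(200, 180, 170), (255, 255, 255)]: the person is kept, the
#    green-screen pixel is replaced by the map.
```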
By using pre-specified reference points in the stadium, the system automatically determines the camera angle in use; by referring to the pre-defined stadium map, the advertisement can then be inserted into the correct place.

Other Applications

Virtual reality offers the potential to enhance architecture by combining three-dimensional design, HMDs, sound, and movement to simulate a walk-through of a virtual space before expensive construction of the structure begins. Although architects are generally good at visualizing structures from blueprints and floor plans, their clients often are not. Walking through virtual environments provides an opportunity to test the design of buildings, interiors, and landscaping, and to resolve misunderstandings and explore options. The use of VR has also been extended to business areas. VR technology has likewise been used for weather model visualization and for navigating scientific data. In arts and history, VR can be used either as a novel medium to create interactive art forms, or as an instrument that takes the user on a guided tour of existing conventional art or historical sites. Several

projects have involved creating artistic models using mathematical techniques (fractals), virtual instruments, virtual theater, recreating historical sites (e.g. the Xian Terracotta Army) [68], and developing virtual museums. VR technology also plays a major role in early product development: new design models, such as a car interior, can be simulated and tested in terms of ergonomics. The Virtual Gorilla project at Georgia Tech and various virtual museum projects around the world represent some of the more useful applications of VR technology in the field of education.

2.6 Collaborative Virtual Environments (CVE)

Today, there are several high-technology solutions to support cooperative interaction in the workplace. A collaborative and immersive environment extends the concept of Virtual Reality by integrating networking, so that users in different geographic locations can communicate in a shared VR environment. Communication technologies are used to overcome the geographical separation of collaborators and to achieve the expected level of cooperation using teleconferencing, videoconferencing, electronic mail, and networked document management systems. This follows the new paradigm for scientific discovery wherein modern research is conducted by multiple scientists around the world. Often, a team of distributed collaborators works in the same subject area, sharing and discussing partial results. In areas where results are presented as images, visualization can thus be carried out in different places, at different times, and by more than one person. Current visualization systems, however, treat visualization as an individual activity [69]. Scientists certainly have a number of powerful systems at their disposal, such as IRIS Explorer [70], AVS (Application Visualization System) [71], Khoros [72], and IBM Data Explorer [73]. Yet even when using such systems, scientists still need to be physically together to verify the results.
Usually, these systems follow the dataflow model to implement a visualization pipeline, which has been detailed by Haber and McNabb [74]. Various Collaborative Virtual Environments (CVEs) have been developed in the areas of distance learning, games, and engineering. There are two distinct kinds of collaborative virtual environment: window-based collaborative environments and totally immersive collaborative environments. Window-based collaborative environments refer to an X-Windows-type system where users at different locations are represented by a video screen with voice capability; another window may be used to show a design model or the subject of discussion. There is some similarity here to video-conferencing done in an X-Windows environment. In a totally immersive collaborative environment, users at different geographic locations are immersed in a single shared environment, with each represented by an avatar. A system called IRI (Interactive Remote Instruction) [75], developed at Old Dominion University, is a geographically dispersed virtual classroom created by integrating high-speed computer networks and interactive multimedia workstations. Each student participates using a personal workstation, which can be used to view multimedia notebooks and to interact

via video/audio and shared computer tools. Each workstation is equipped with a speaker, microphone, and video camera. A collaborative and completely immersive tennis game [76] has been developed by a group of researchers at MIRALab (University of Geneva). In this environment, the interactive players were separated by 60 km and were merged into the virtual environment using head-mounted displays, magnetic Flock of Birds sensors, and data gloves. The Virtual Life NETwork (VLNET) [77], a general-purpose client/server network system, is used for managing and controlling the shared networked virtual environment over an Asynchronous Transfer Mode (ATM) network [78], using realistic virtual humans (avatars) for user representation. These avatars support body deformation during motion. They also represent autonomous virtual actors, such as the synthetic referee that forms part of the tennis game simulation. A special tennis-ball driver animates the virtual ball by detecting and handling collisions between the ball, the virtual rackets, the court, and the net. The tele-immersion group at the Electronic Visualization Laboratory (EVL) [79] at the University of Illinois at Chicago has been developing a virtual environment that enables multiple globally situated participants to collaborate over high-speed, high-bandwidth networks connected to heterogeneous supercomputing resources and large data stores.
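The kind of per-frame test that a tennis-ball driver like VLNET's must run can be sketched as a simple sphere-against-plane collision with a restitution bounce. All constants and names here are illustrative assumptions, not VLNET's actual parameters, and a real driver would also test the rackets and the net:

```python
import numpy as np

BALL_RADIUS = 0.033   # metres (roughly a tennis ball); assumed value
RESTITUTION = 0.75    # fraction of vertical speed kept after a bounce; assumed

def step_ball(pos, vel, dt, gravity=np.array([0.0, -9.81, 0.0])):
    """Advance the ball one frame and resolve a bounce off the court plane y=0."""
    vel = vel + gravity * dt
    pos = pos + vel * dt
    if pos[1] < BALL_RADIUS and vel[1] < 0.0:   # ball penetrating the court
        pos[1] = BALL_RADIUS                    # push it back to the surface
        vel[1] = -vel[1] * RESTITUTION          # reflect the normal velocity
    return pos, vel

pos = np.array([0.0, 1.0, 0.0])   # start 1 m above the court
vel = np.array([5.0, 0.0, 0.0])   # struck flat, 5 m/s forward
for _ in range(200):              # simulate 2 s at 100 Hz
    pos, vel = step_ball(pos, vel, 0.01)
```

Running such a step at the simulation frame rate, with analogous tests against the rackets and the net, is what keeps the shared virtual ball physically plausible for both remote players.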
Among the projects developed at EVL are the CAVE Research Network (CAVERN) [80], the Collaborative Image Based Rendering (CIBR) viewer, the Laboratory for Analyzing and Reconstructing Artifacts (LARA), the Tele-Immersive Data Explorer (TIDE), Tandem [81], the Collaborative Architectural Layout Via Immersive Navigation (CALVIN) [82], the Narrative Immersive Constructionist/Collaborative Educational Environments (NICE) [83], the Round Earth Project [84], and the Computer Augmentation for Smart Architectonics (CASA) [85], a networked collaborative environment designed to allow the prototyping of smart homes and environments in VR. All of these projects require supercomputers and high-end visualization facilities. The CAVE Research Network (CAVERN) [86], for example, is an alliance of industrial and research institutions equipped with CAVE-based virtual reality hardware and high-performance computing resources, interconnected by high-speed networks, to support collaboration in design, education, engineering, and scientific visualization. CAVERNsoft [87] is the collaborative software backbone for CAVERN. CAVERNsoft uses distributed data stores to manage the wide range of data volumes (from a few bytes to several terabytes) typically needed to sustain collaborative virtual environments. Multiple networking interfaces provide the customizable latency, data consistency, and scalability needed to support a broad spectrum of networking requirements. These diverse database and networking requirements were not exhibited by previous desktop multimedia systems but are common in real-time immersive virtual reality applications.
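The shared-state idea behind a collaborative backbone like CAVERNsoft can be illustrated with a minimal, single-process sketch: each client publishes updates to keyed state (for example an avatar's position), and the server keeps the latest value per key and pushes it to every other client, including late joiners. The class and method names are invented for illustration and are not the CAVERNsoft API; a real system would add networking, reliability choices, and consistency control per channel:

```python
class SharedWorld:
    """Server-side store of the latest value per key, with subscribed clients."""

    def __init__(self):
        self.state = {}      # key -> latest published value
        self.clients = []    # connected Client objects

    def join(self, client):
        self.clients.append(client)
        # Late joiners immediately receive the current world state.
        for key, value in self.state.items():
            client.receive(key, value)

    def publish(self, sender, key, value):
        self.state[key] = value
        for client in self.clients:
            if client is not sender:   # do not echo updates back to the sender
                client.receive(key, value)


class Client:
    """A participant holding its own replica of the shared world."""

    def __init__(self, name):
        self.name = name
        self.view = {}       # this client's replica of the world state

    def receive(self, key, value):
        self.view[key] = value


world = SharedWorld()
alice, bob = Client("alice"), Client("bob")
world.join(alice)
world.join(bob)
world.publish(alice, "avatar/alice/pos", (1.0, 0.0, 2.0))
```

The same publish/replicate pattern, run over real network links with different latency and reliability settings per data channel, is what lets geographically separated participants see a consistent shared environment.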

The Collaborative Image Based Rendering (CIBR) viewer [79] is a CAVERNsoft-based tool for viewing animated sequences of image-based renderings from volume data. It was designed to allow DOE scientists to view volume renderings composed of 2-D image slices, and it supports collaboration on a variety of visualization platforms, from desktop workstations to CAVEs. LARA (Laboratory for Analyzing and Reconstructing Artifacts) [79] is an application for developing collaborative walkthroughs of large virtual environments. LARA was designed specifically to facilitate EVL's development of applications related to virtual restorations and recreations of cultural and historic sites. It allows users to traverse massive landscapes whose geometric parts may be located on distant distributed servers, and it supports the creation of annotations that allow past visitors to leave fully animated messages for new visitors. This same technology allows world builders to create interactive tours of the environments. The Tele-Immersive Data Explorer (TIDE) [88] is a CAVERNsoft-based collaborative, immersive environment for querying and visualizing data from massive and distributed data stores. TIDE is designed as a reusable framework to facilitate the construction of other domain-specific data-exploration applications faced with the problem of visualizing massive datasets. The TIDE framework allows groups of scientists at geographically disparate locations to participate collectively in a data-analysis session in a shared virtual environment. Tandem [89] is a distributed interaction framework for collaborative VR applications.
It makes use of the CAVE library (CAVElib), a set of libraries designed as a base for developing virtual reality applications for spatially immersive displays, for VR projection-display support, and of CAVERNsoft for its networking. This framework allows VR developers to spend more time developing application content and less time implementing generic VR requirements. A sequel to LIMBO [90] (a simple collaborative program that allows multiple participants, represented as avatars, to load and manipulate models in a persistent virtual environment), Tandem provides a more flexible architecture for building rich tele-immersive environments. Tandem is the architecture on which TIDE and other CAVERN applications are based. The Collaborative Architectural Layout Via Immersive Navigation (CALVIN) [91] system is an immersive multimedia approach to applying virtual reality in architectural design and collaborative visualization, emphasizing heterogeneous perspectives. These perspectives, including multiple mental models as well as multiple visual viewpoints, allow virtual reality to be applied in the earlier, more creative phases of design, rather

than just as a walk-through of the finished space. CALVIN's interface employs visual, gestural, and vocal input to give the user greater control over the virtual environment. A prototype of CALVIN has been created and used in the CAVE™ virtual reality theater. Narrative Immersive Constructionist/Collaborative Educational Environments (NICE) [92] borrows and improves on the techniques developed in CALVIN. NICE is a project that applies virtual reality to the creation of a family of educational environments for young users. The approach is based on constructionism, where real and synthetic users, motivated by an underlying narrative, build persistent virtual worlds through collaboration. This approach is grounded in well-established paradigms in contemporary learning and integrates ideas from such diverse fields as virtual reality, human-computer interaction, storytelling, and artificial intelligence. The goal is to build an experiential learning environment that will engage children in authentic activity. The system explores these ideas within the CAVE™ virtual reality theater. As a sequel to NICE, the Round Earth Project [93] investigates how virtual reality technology can be used to help teach concepts that are counter-intuitive to a learner's currently held mental model. Virtual reality can provide an alternative cognitive starting point that does not carry the baggage of past experiences. In particular, two strategies are compared for using virtual reality to teach children that the Earth is round when their everyday experience tells them that it is flat. One strategy starts the children off on the Earth and attempts to transform their current mental model of the Earth into the spherical model. The second strategy starts the children off on a small asteroid, where they can learn about the sphericity of the asteroid independently of their Earth-bound experiences.
Bridging activities then relate their asteroid experiences back to the Earth. In each strategy, two children participate at the same time: one from a CAVE™ and the other from an ImmersaDesk™. The child in the CAVE™ travels around the Earth or the asteroid to retrieve items to complete a task, but cannot find these items without assistance. The child at the ImmersaDesk™, with a view of the world as a sphere, provides this assistance. The children must reconcile their different views to accomplish their task. Computer Augmentation for Smart Architectonics (CASA) [94] is a collaborative VR application that demonstrates the feasibility of designing "smart environments" in VR by depicting a house of the future. CASA is the predecessor of CALVIN. The environment involves remote CAVE™-to-CAVE™ collaboration via the Information Wide Area Year (I-WAY) [95], an experimental high-performance network linking dozens of the country's fastest computers and advanced visualization environments. This network is based on Asynchronous Transfer Mode (ATM) technology, an emerging standard for advanced telecommunications networks.

Collaborative Virtual Environments (COVEN) [96] is a European project that seeks to develop a comprehensive approach to the issues in collaborative virtual environment (CVE) technology. The overall objective of the COVEN project is to explore comprehensively the design, implementation, and usage of multi-participant shared virtual environments at the scientific, methodological, and technical levels. COVEN brings together twelve academic and industrial partners with a wide range of expertise in Computer-Supported Co-operative Work (CSCW) [97], networked VR, computer graphics, human factors, human-computer interaction, and telecommunications infrastructures. Two other European-based projects are the Distributed Interactive Virtual Environment (DIVE) [98] and the Model, Architecture and System for Spatial Interaction in Virtual Environments (MASSIVE) [99]. DIVE is an internet-based multi-user VR system in which participants navigate in 3-D space and see, meet, and interact with other users and applications. The DIVE software is a research prototype covered by licenses; binaries for non-commercial use, however, are freely available for a number of platforms. The first version of DIVE appeared in 1991. DIVE supports the development of virtual environments, user interfaces, and applications based on shared 3-D synthetic environments. It is especially tuned to multi-user applications, where several networked participants interact over a network. DIVE applications and activities include virtual battlefields, spatial models of interaction, virtual agents, real-world robot control, and multi-modal interaction. MASSIVE was developed as part of on-going research into collaborative virtual environments. The system allows multiple users to communicate using arbitrary combinations of audio, graphics, and text media over local and wide area networks.
Communication is controlled by a so-called spatial model of interaction, so that one user's perception of another user is sensitive to their relative positions and orientations. Virtue [100] is a collaborative virtual environment for massively parallel software analysis, developed by the Pablo Research Group at the University of Illinois. The objective of the project is the development of virtual environments that allow software developers to directly manipulate software components and their behavior while immersed in scalable, hierarchical representations of software structure and real-time performance data. The goal of Virtue is to eliminate the barrier that separates the real world of users from the abstract world of software and its dynamics. In turn, this makes large-scale, complex software and its behavior concrete entities that can be understood and manipulated in the same ways as the physical environment.

2.7 Research focus and direction

The aim of this research is to develop a collaborative virtual environment for engineering design communication. Various applications of CVEs have been discussed in the preceding sections; these applications are mainly used for educational, entertainment, and general scientific-visualization purposes. In addition, they rely on state-of-the-art, high-end VR systems or very expensive peripherals that are not available to most designers (e.g. the CAVE™ and high-speed computers). In contrast, the research presented in this dissertation extends the use of CVEs to engineering design, where geographically distributed designers can communicate and interact in a single immersive virtual environment. The scalable, heterogeneous approach developed here enables designers to discuss engineering problems and thereby make decisions more effectively. The proposed environment is intended to be multi-platform, user-friendly, and usable with minimum hardware requirements; users can run the application on UNIX, IRIX, and PC machines. Another aspect of this research is to show that Virtual Reality (VR) is not necessarily limited to high-end peripherals. Low-end peripherals such as stereographic shutter glasses are sufficient to transform a 2-D monitor screen into a 3-D VR world, although more expensive peripherals such as a CAVE™, an ImmersaDesk™, or a Head Mounted Display (HMD) will certainly enhance the sense of reality in the virtual environment. A multi-platform, collaborative, distributed virtual environment for engineering design is developed in this research. The environment integrates virtual reality technology, design sensitivity analysis, and distributed environments to facilitate more efficient communication among geographically dispersed designers.