The Visorama System: a Functional Overview of a New Virtual Reality Environment


The Visorama System: a Functional Overview of a New Virtual Reality Environment

André Matos 1,3, Jonas Gomes 1, André Parente 2, Heloisa Siffert 2, Luiz Velho 1

1 IMPA - Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Rio de Janeiro, RJ, Brasil. {amatos, lvelho, jonas}@visgraf.impa.br
2 Escola de Comunicação, UFRJ, Av. Pasteur, 250 (fundos), Rio de Janeiro, RJ, Brasil
3 Departamento de Informática, PUC-Rio, Rua Marquês de São Vicente 225, Rio de Janeiro, RJ, Brasil

Abstract

Recent developments in image-based rendering have enabled a representation of virtual environments based on a simulation of panoramas, which we call virtual panoramas. Current virtual panorama systems do not provide a natural and immersive interaction with the environment. We propose a new system that uses hardware and software components to provide such an interaction with virtual panoramas. As part of the system we propose a specific representation for the interactions in a virtual panorama. This representation can be used as a basis for the design of a high-level language for the creation of such environments.

1. Introduction

Recent developments in 3D computer graphics have led to the creation of numerous virtual reality systems. These systems use a geometric description to model a virtual environment and 3D computer graphics algorithms to render the models in real time. The users of such systems are usually allowed to navigate through the virtual environment, often with the help of additional hardware devices that provide an interface to the navigation [1]. The real-time rendering constraint, however, imposes a limit on the complexity of the modeled environment. As real-world objects tend to have an exceedingly complex geometry, this limitation has a negative impact on the photo-realistic quality of such systems. The desire to overcome this problem has motivated the development of image-based rendering systems.
The image-based approach combines techniques from photogrammetry, computer vision and computer graphics to represent and display virtual environments [2]. The representation is created from a number of photographs, which can be taken from the real world or from a modeled environment. This representation is then used to reconstruct the views for an observer navigating through the environment. A number of image-based rendering methods have been proposed which use different representations. McMillan and Bishop [2] present a framework for analyzing such methods and describe a new approach. Other methods have recently been proposed which are based on ideas from holography [3][4]. The representations used by some of these methods provide an interaction similar to panoramas, which have been used since the nineteenth century as a type of realistic image expression. We refer to these representations as virtual panoramas.

The first system to support virtual panoramas was Apple Computer's QuicktimeVR [5]. Several other systems with different implementations have recently been developed. Despite their differences, all of them have common deficiencies. Among them we should stress the fact that they do not provide a natural and immersive interaction with panoramas.

This paper proposes the development of a new system, called Visorama, that combines hardware and software components to provide a natural and immersive interaction with virtual panoramas. The system also provides an authoring environment that allows a high-level specification of interactions common to a panorama-based virtual environment.

The paper begins with an introduction to panoramas, presenting their historical evolution. It then discusses virtual panoramas and the systems that currently support them. Next it presents the Visorama system and its hardware and software components. Finally, it discusses authoring in the Visorama system. The Visorama project is a collaboration between the Visgraf project of the Instituto de Matemática Pura e Aplicada (IMPA) and the Grupo de Cultura e Tecnologia da Imagem of the Escola de Comunicação (ECO) of the Universidade Federal do Rio de Janeiro (UFRJ).

2. Panoramas

A panorama is a type of mural painting built in a circular space around a central platform, where spectators can look around in all directions and see a scene as if they were in the middle of it. It was patented by Robert Barker in 1787 and at that time was a very popular representation of landscapes and historical events. Panoramas were usually built in two- or three-storey buildings so spectators could walk through different scenes [8]. The drawing in Figure 1 shows a section of a three-storey panorama building.

By the middle of this century, several variations of panoramas had been created. An example is the Cinerama, in which cinematographic images were projected onto a circular surface covering 180 degrees. Although three different projections were used, the image appeared to be a single one. Some of these variations used additional resources to enhance the user's immersibility in the fictitious world created by the panorama. In the Hale's Tours, for example, which simulated a train ride, the spectators actually sat inside a train, and the images were displayed through the windows. In another variation, the Sensorama, effects such as stereo sound and odors were used to simulate a motorcycle ride through Brooklyn, New York City. All of these variations share a number of common characteristics.
The spectators are positioned inside an environment and images are projected on surfaces around them. These images display the views that would be seen by the spectators if they were in the middle of a real scene, always trying to achieve as much realism as possible to immerse the observer in the scene. The form of interaction of panoramas is very naturally accepted by spectators, since it resembles the way we are used to observing the world around us, as if we were in the center of it. Perhaps this is the psychological reason why panoramas have always been so popular.

Figure 1. Section of a panorama building [8].

3. Virtual Panoramas

Recent advances in image-based rendering techniques have enabled the real-time simulation of panoramas on the computer, which we call virtual panoramas. In a virtual panorama, a digital image is painted onto a panorama surface S ⊂ R³ using environment mapping techniques [6]. A virtual camera is then used to observe the surface interactively. The user is allowed to rotate the camera around its nodal point and change its field of view. The image to be mapped on the surface is called the panoramic image. This is illustrated in Figure 2.

The panoramic image represents the projection of the environment on the panorama surface. Virtual panorama systems provide tools for the creation of these images from photographs, which can be taken from the real world or from a modeled environment. After being mapped on the panorama surface, the panoramic image can be interactively observed on the screen, as if the user were at the location where the pictures were taken.

The process just described involves two projections:
- Projection of the environment onto the panorama surface.
- Projection of the panorama surface onto the virtual camera screen.

The panorama surface should have a simple geometric shape to facilitate these two projections.
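As a rough sketch (not code from the paper), assuming a cylindrical panorama surface and a pinhole camera model, the second projection can be approximated by mapping each screen column to the panoramic-image column hit by its viewing ray:

```python
import math

def warp_columns(pan_rad, fov_rad, screen_w, pano_w):
    """Map each column of the virtual camera's screen to a column of a
    cylindrical panoramic image. Each screen column defines a ray through
    the camera's nodal point; the ray's azimuth picks the panorama column."""
    # Pinhole focal length (in pixels) for the given horizontal field of view.
    focal = (screen_w / 2.0) / math.tan(fov_rad / 2.0)
    cols = []
    for x in range(screen_w):
        azimuth = pan_rad + math.atan((x - screen_w / 2.0) / focal)
        u = (azimuth / (2.0 * math.pi)) % 1.0  # wrap around the cylinder
        cols.append(int(u * pano_w) % pano_w)
    return cols
```

A full viewer would apply the analogous mapping per row for tilt and resample the image; panning then only shifts which columns are sampled, which is what makes this warping cheap enough for real-time display.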
The first projection is done by generating the panoramic image with specialized cameras and lenses and mapping it onto the panorama surface. The most commonly used panorama surfaces are cylinders, spheres and cubes. If a cylinder is used, the panoramic image should be obtained with a panoramic camera. The image is mapped around the cylinder, an operation which defines an isometry between the cylinder and the image domain; therefore, the image is not deformed by this transformation. If a sphere is used,

the panoramic image is divided into two parts, which should be obtained with fisheye lenses from the same point and in opposite directions. Each image is orthogonally mapped onto opposite hemispheres. If a cube is used, the panoramic image is divided into six parts, which should be obtained with a common lens, from the same point and in six perpendicular directions. Each image is mapped onto a different face of the cube.

Figure 2. A virtual camera observing a panorama.

From the previous description, a virtual panorama system has two major components: an authoring environment and a viewer. The authoring environment allows the creation of the panoramic image. If this image is to represent a projection of the real world onto the panorama surface, it should be generated with cameras and lenses specialized for each type of panoramic surface, as previously mentioned. These cameras and lenses can also be simulated on the computer for the generation of a panoramic image representing a projection of a modeled environment. Alternatively, some systems allow the creation of these images from a number of photographs taken with regular cameras and lenses. The images obtained are digitally warped, aligned with each other and combined to create a panoramic image that is very similar to one obtained using the specialized cameras and lenses.

The viewer performs the projection of the panorama surface onto the virtual camera interactively, allowing the user to pan, tilt and zoom with the camera. To implement this projection in real time, it can be approximated by a warping operation on the panoramic image. The result of this operation is displayed directly on the screen [2]. Systems may also provide means for playing sound files in the virtual panoramas and for composing images, animation or 3D objects with the painted panorama image.

4. Current Virtual Panorama Systems

We will compare three virtual panorama systems commercially available today: Quicktime VR 1.0 by Apple Computer, PhotoBubbles by Omniview, and RealVR by RealSpace.

4.1. Description

QuicktimeVR supports only cylindrical panoramic surfaces, and provides a complete authoring environment for the creation of virtual panoramas. The panoramic image can be a photograph obtained with a panoramic camera or generated from a sequence of overlapping photographs taken with regular cameras and lenses. It currently does not allow sound, images or 3D objects to be composed with the panorama, although it has been announced that some of these features will be included in the next version of the system.

PhotoBubbles only allows spheres to be used as the panorama surface. The panoramic image must be taken with a 180-degree fisheye lens or, alternatively, it can be generated by a modeling program. The system does not provide an authoring environment, so panoramas must be generated by Omniview.

RealVR, on the other hand, supports cylinders, spheres, and cubes as the panorama surface. However, it does not currently provide an authoring environment for any of these types of virtual panoramas. It only provides a converter from QuicktimeVR panoramas to its own format. This format is an extension to VRML 2.0 that allows virtual panoramas to be included in a VRML world. As a result, it is fairly simple to compose 3D objects, pictures, videos and sound with the virtual panorama.

All three systems provide a viewer for their panoramas with a mouse interface for panning and tilting. In QuicktimeVR and RealVR, the keyboard has to be used as the zoom control. None of them uses multiresolution schemes for improved image quality during zooming. Finally, both QuicktimeVR and RealVR should, in the future, have an API for controlling playback from other applications.

4.2. Analysis

In a real panorama, the environment image is painted on a screen wall and the painted image is directly projected on the eye of the observer. As described in Section 3, virtual panorama systems try to simulate this two-projection system. Nevertheless, a virtual panorama uses three projections. In fact, besides the environment projection onto the panorama surface, the viewing pipeline of the system uses two projections: the projection of the virtual camera, which produces the image on the monitor, and the projection from the monitor to the user's eyes.

Moreover, the viewing direction in the virtual panorama does not change as the user turns his head to look around, which is an expected feedback in a virtual environment. The indirect manipulation of the virtual panorama through the mouse is not natural, and requires the user to associate hand movement with changing viewing directions. As a result, current virtual panorama systems do not provide immersibility of the user in the virtual environment. Because of this lack of immersibility, they cannot be considered virtual reality systems, since immersibility is one of the main characteristics of such systems.

Another deficiency of these systems is that they don't provide a high-level language to specify the interaction between the user and the virtual panorama. Most of them provide an interface for specifying low-level interaction tasks involving mouse movement on the viewer's window. But in a virtual panorama system, authors should be able to specify high-level interaction tasks, such as a specific sequence of operations on the virtual camera.

Related to this is the inability of these systems to support the concept of a virtual environment represented by a virtual panorama. Most of them support movement of the user within a 3D environment by jumping between panoramas. But when the user is interacting with a single panorama, they don't provide any virtual environment support for working with the panorama. One exception is the RealVR system, which has all the support of VRML to create virtual environments. This language, however, requires the specification of a world with full 3D information, which is not available in virtual panoramas. It can be cumbersome to determine the 3D positions that should be assigned to objects so that they achieve a desired interaction in the virtual panorama.
On the other hand, interactions in this virtual environment are more easily represented in terms of the operations that can be applied to the virtual camera, because this information is always available.

5. The Visorama System

In this section we briefly describe the Visorama system, its software and hardware components, and we show how the system addresses the deficiencies of current systems described in the previous section:
- User immersibility;
- View changing interface;
- High-level authoring language;
- Support for panorama-based virtual environments.

As previously described, the problems of user immersibility and of the view changing interface occur in current virtual panorama systems because the viewing pipeline uses two projections: the projection of the virtual camera and the projection of the monitor screen onto the user's eyes. Moreover, these two projections are not coupled. In order to solve these problems, we need a device that is able to perform the following functions:
- Couple the two projections in the user's viewing pipeline.
- Correlate the eye projection with the head motion of the user.

This device provides the user with an abstraction of being in the middle of the virtual environment, interacting with it through an observation device. It is implemented by a viewer that simulates binoculars but, instead of displaying the real world through lenses, displays regions of the virtual panorama through a pair of miniature CRT screens. A sketch of the device used by the Visorama system is shown in Figure 3.

Figure 3. The Visorama observation device.

This binocular display provides the necessary visual immersibility of the user in the virtual panorama. It is not allowed to move, but it can rotate in a pan and tilt motion. As that is done, the panorama is updated on the screen to provide the correct feedback to the rotation.
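As an illustration of how rotation might be turned into viewing angles (the calibration details below, including the 10-bit ADC and the angle ranges, are assumptions, not specifications from the paper), the readings from sensors on the two rotation axes could be converted with a simple linear calibration:

```python
def sensor_to_angles(pan_raw, tilt_raw, adc_max=1023,
                     pan_range=(-180.0, 180.0), tilt_range=(-45.0, 45.0)):
    """Convert raw potentiometer readings from the device's two rotation
    axes into pan/tilt angles in degrees. The ADC resolution and angle
    ranges are hypothetical; a real device would be calibrated against
    its mechanical stops."""
    def lerp(raw, lo, hi):
        return lo + (raw / adc_max) * (hi - lo)
    return lerp(pan_raw, *pan_range), lerp(tilt_raw, *tilt_range)
```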
This form of direct manipulation of the viewing direction while receiving the expected feedback provides a natural view changing interface for virtual panoramas. In addition, the user is not allowed to move, which is also natural for a virtual panorama. The system has software which takes user input and generates the corresponding output according to

interaction tasks specified by the author of the virtual environment. These interaction tasks are specified as a sequence of user actions common to virtual environments represented by panoramas, such as a sequence of operations on the virtual camera. An authoring language is being specified which uses high-level primitives based on simple specifications of interaction tasks. The language also supports the combination of primitives to create higher-level constructs.

In the next sections, we describe the Visorama system in detail. In Section 6 we describe the system's main hardware components. In Section 7 we describe the software components. Finally, in Section 8 we discuss authoring in the Visorama system.

6. The Visorama Hardware

Visorama's main hardware components, their relationships and functional groupings are shown in Figure 4. They can be classified into three hardware subsystems: the input subsystem, the output subsystem and the control subsystem.

Figure 4. Hardware components.

The control subsystem is basically a desktop computer with the necessary interfaces for communicating with the other hardware components in the input and output subsystems. This computer stores all information about the virtual environment. It runs software programs that use this information and user data obtained from the input subsystem to generate feedback data for the output subsystem. This real-time process imposes a minimum speed constraint on the choice of processor, since it must be able to take user input and generate appropriate output without introducing any lagging effects. It is also important that current virtual panorama systems run on this platform, since we intend to use them as part of our system. The control subsystem generates two types of data for the output subsystem: image and sound. The first type is sent to the binocular display and the second to the stereo sound equipment.
As previously mentioned, the binocular display is an immersive display device that resembles common binoculars but, instead of having a set of lenses, has, for each eye, an eyepiece and a miniature CRT screen. The images displayed by these screens appear to the user as if they were the projection of the lenses in common binoculars. Each CRT screen is connected to a video output port on the computer. If two output ports are available, each screen can be connected to one, and a different image can be displayed for each eye. Although this allows stereo panoramas to be displayed, the first version of the Visorama system does not use this functionality.

The stereo sound equipment is basically a pair of headphones connected to a stereo sound output port on the computer. Alternatively, speakers can also be used, but these have the disadvantage that the sound of the real environment could be confused with the system's output sound, resulting in a loss of auditory immersibility. With a stereo system, different sounds can be output to each channel in order to simulate 3D sound in panoramas where sound sources are associated with a specific viewing direction.

All output generated by the control subsystem is a function of the data it receives from the input subsystem and the authoring information. The input data takes two forms: viewing direction data, which is generated by a rotating head and a set of sensors, and user control data, generated by a set of additional controls. The rotating head provides direct manipulation of the viewing direction on the panorama. Potentiometers are attached to the two rotating axes of the head to capture the binocular display's movement and send it to the control subsystem. The input subsystem also has a set of additional controls: two buttons and a potentiometer. The potentiometer is used to control the zooming factor. One of the buttons is used to generate discrete actions in the system, such as selecting an object on the panorama.
These two controls are easily accessible by the user, since they should be heavily used. Note that these two controls and the potentiometers on the rotating head allow the execution of position, select and quantify tasks (see [7]). The remaining button is used to take the system into a control mode for specifying settings such as volume control. The values of the input devices are periodically sent to the control subsystem, which must generate the correct feedback to the user, as specified by the author of the virtual environment. This is achieved by the system's software components.

7. The Visorama Software

The system's software can be divided into three main functional modules: input, output and control. These three modules and their relationships are illustrated in Figure 5. The input module takes all data from the hardware devices and sends it to the other software modules. The output module takes this data from the input module, and commands from the control module, and generates all image and sound output. Finally, the control module examines the input data and, if appropriate, sends commands to the output module.

Figure 5. Modules and their relationships.

7.1. Input and Output

The input module reads the data that is sent by the input hardware. This data is translated into a format that can be understood by the remaining modules. Because this translation is done by the input software, the remaining software components do not have to be modified when the input hardware is changed. The resulting data is periodically sent to the other software modules. The control module reads this data at a slower rate than it is sent by the input module; as a result, some of the input data is not processed. If the button is pressed, however, new data is only sent when the old data has been read, so the exact position where the button was pressed is received by the control module.

The output module reads only the position and zooming data. The input module sends these values directly to the output module so it can immediately generate the rotating and zooming feedback on the virtual panorama. In this way, delays introduced by the control module while processing the data do not affect the response time of the panorama with regard to movement and zooming actions. Keeping this output coordinated with the binocular display's movement is fundamental to the immersibility provided by the system, since any lagging effects introduced in this process could confuse the user. The output module has two components: the image generation component and the sound generation component.
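The input module's latching of button presses, described above, can be sketched as follows (a minimal illustration under our own naming, not the actual Visorama code): continuous values are simply overwritten on every poll, while a button press is held, together with the viewing parameters at press time, until the control module consumes it.

```python
class InputModule:
    """Forwards device readings to the other modules. Pan/tilt/zoom are
    overwritten on every poll (only the latest value matters), while a
    button press is latched with the viewing parameters at press time
    until it has been read."""

    def __init__(self):
        self.pan = self.tilt = self.zoom = 0.0
        self._press = None  # latched (pan, tilt, zoom) of the first unread press

    def poll(self, pan, tilt, zoom, button_down):
        self.pan, self.tilt, self.zoom = pan, tilt, zoom
        if button_down and self._press is None:
            self._press = (pan, tilt, zoom)  # keep until the control module reads it

    def read_press(self):
        press, self._press = self._press, None  # consume the latch
        return press
```

This guarantees that even though the control module polls more slowly than the hardware, the exact viewing position at the moment of the press is never lost.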
The image generation component displays the virtual panorama, static images or 3D objects, all combined into a single output. Any virtual panorama system can be used if it has the following functionality: it displays images and 3D objects on top of panoramas, and it has an API that can be used to control the display of the virtual panorama. This component receives commands from the control module determining which panorama, images or 3D objects are to be displayed, and a few other commands. It then loads the appropriate files from disk and displays them. The viewing direction and zooming angle are obtained directly from the input module and are updated each time a new set of data arrives, providing the correct feedback.

The sound generation component uses system resources to play audio files on the sound output hardware. It takes commands from the control module that determine which audio files should be played and the current position in the audio files, as well as common commands such as play, pause, stop and volume control. An environment like Apple Quicktime can be used as a basis for the sound output component.

7.2. Control: The Visorama System at Work

The commands generated by the control module are based on data taken from the input module and on a file which stores authoring information about the virtual environment. This information relates input sequences to their corresponding feedback, as specified by the author of the environment. Its internal representation is conceptually equivalent to a state diagram. Using this representation, the system is always in a known state, and a number of events are specified that cause the system to transition to another state. These events are organized in sets, where each set causes a transition to a different state. Events are defined in terms of the module's parameter space. This space is the set of all combinations of pan, tilt and zooming angles, button states and system timers.
The first three parameters can be composed into a single parameter, the viewing position, so they are treated as a point in a 3D space, the viewing space, which defines a certain viewing configuration. A basic event can be defined as the current viewing position being inside a region in viewing space, the button being pressed, or a timer reaching a certain value. A composite event can be defined as a Boolean expression involving other basic and composite events. Actions can be specified to be executed while a transition is taking place. These actions are commands

which the control module sends to the output module. The commands that can be sent using current virtual panorama systems are: changing the current panorama; altering the current viewing parameters; playing, pausing, stopping, or jumping to a point in an audio file; showing or hiding an image or 3D object; and starting a timer. Other interesting actions should become available as virtual panorama systems provide more functionality.

The control module is driven by the state diagram representing the current virtual environment. Its execution is basically a single loop in which it reads the input parameters and checks whether any event that causes a transition has occurred. When this happens, it executes the actions specified for the transition and replaces the current state with the transition's destination state. This implementation provides a simple and efficient way of generating the output corresponding to a sequence of interaction tasks in the virtual environment.

8. Authoring in the Visorama System

The main problem with the approach based on state diagrams is the tedious process of creating the diagram to specify complex interactions. The approach is powerful in the sense that it can represent any interaction possible with virtual panoramas, and it allows an efficient implementation. But as interactions become complicated, it is not intuitive for the author which states and transitions should be created. To illustrate this fact, Figure 6 shows a state diagram for a simple interaction specification in a virtual environment: if the user views a region R1 for more than t0 seconds, play an audio S1 with duration t1. If the user zooms into region R2 and S1 has finished playing, then play another audio S2; if it has not finished, wait for it to finish and then play S2. If at any time the user leaves regions R1 or R2, stop S1. If at any time the user leaves region R2, stop S2.
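The control loop just described can be sketched as a small state machine driver (an illustrative reconstruction; the transition-table format and names are our assumptions, not the paper's):

```python
class ControlModule:
    """Drives the virtual environment as a state diagram. Each state maps
    event predicates to (actions, next_state); one step reads the current
    input parameters and fires the first transition whose event occurred."""

    def __init__(self, transitions, initial_state, send_command):
        self.transitions = transitions      # {state: [(event, actions, next_state)]}
        self.state = initial_state
        self.send_command = send_command    # delivers commands to the output module

    def step(self, params):
        for event, actions, next_state in self.transitions.get(self.state, []):
            if event(params):
                for command in actions:
                    self.send_command(command)
                self.state = next_state
                break
```

For example, a one-transition diagram that plays a sound when the user pans past 90 degrees would be written as `{"A": [(lambda p: p["pan"] > 90.0, ["play S1"], "B")]}`, with `step` called once per input poll.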
To relieve authors from having to specify complex state diagrams, while still being able to specify the complex interaction tasks possible with this system, we are developing an authoring environment that provides a high-level language for the specification of interaction tasks. This authoring language has basic primitives that correspond to state diagrams representing commonly used interaction tasks. These primitives are parameterized and can be used in different situations. The parameters can be regions in viewing space, button states, or timers, in which case they define the events that cause transitions inside the primitive. They can also be audio files, panoramas, images, 3D objects, or associated information, in which case they define commands that will be issued during transitions inside the primitive. The state diagram in Figure 6, for example, could represent a primitive. In this case, R1, R2, S1, S2 and t0 would be the parameters, while t1 and t2 would be derived from S1 and S2, respectively. The language allows the combination of primitives to create higher-level constructs.

Figure 6. A simple interaction specification.

Using this language, authors create the virtual environment by specifying how it should respond to user input and timing events. This event-based approach is commonly used in multimedia authoring environments because of its adequacy for controlling user interactions [9]. By creating new constructs from existing ones, authors can hierarchically structure the interactions in the virtual environment. The divide-and-conquer paradigm can be applied to the authoring process, so that only a small number of constructs have to be considered at a given level in the hierarchy.
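A parameterized primitive of this kind can be thought of as a function that expands into states and transitions. The sketch below shows a hypothetical "dwell" primitive (the name, the table format and the command tuples are our illustration, not the paper's language): once the user has viewed a region for more than a given time, a sound is played.

```python
def dwell_primitive(region, t0, sound, enter_state, done_state):
    """Expand a parameterized 'dwell' primitive into state-diagram form:
    once the viewing position has stayed inside `region` (a predicate on
    the viewing position) for more than `t0` seconds, play `sound`."""
    timing = enter_state + "/timing"
    return {
        enter_state: [
            # Entering the region starts a timer and moves to the timing state.
            (lambda p: region(p["view"]), [("start_timer", t0)], timing),
        ],
        timing: [
            # Leaving the region cancels the timer and rearms the primitive.
            (lambda p: not region(p["view"]), [("cancel_timer",)], enter_state),
            # The timer expiring while still inside the region fires the sound.
            (lambda p: p["timer_expired"], [("play", sound)], done_state),
        ],
    }
```

An authoring environment can merge the tables produced by several such primitives into one diagram, which is how a small set of parameterized building blocks can cover many interaction specifications without the author drawing states by hand.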
In addition, a hierarchically structured interaction specification is easier to reuse when simple modifications have to be made [10]. Although the authoring process in Visorama is similar to that of common multimedia presentation systems, the events generated by the users of these systems are fundamentally different. In multimedia presentation systems, and in current virtual panorama systems, events are usually associated with the mouse moving into a certain area of the screen, or with the user pressing the mouse button. The user has to perform some action independent of viewing the content of the panorama or presentation. In Visorama, on the other hand, events might be generated by the viewing operation itself, with no further action necessary. As a result, users can seamlessly navigate through the virtual environment without being aware that events are being generated. The specification of events associated with a viewing region allows authors to determine the output depending

on how much attention the user is giving to specific regions in the virtual environment. As a result, authors can specify which output the system should generate when the viewing area is restricted to a region, or object, of the virtual environment. This type of support for panorama-based virtual environments does not exist in current virtual panorama systems.

We intend to develop a visual authoring environment to support the authoring language. The language constructs, and the means of combining them, will be specified graphically through some form of diagram. In addition, areas of the panorama will be specified by navigating through the virtual panorama. All this information will be automatically converted into the state diagram representation used by Visorama's control software.

9. Conclusions

We have described the development of a system that allows natural and immersive interaction with virtual panoramas. The new system includes hardware and software components and enables powerful forms of interaction that are not possible with current systems. The system has an authoring language that provides high-level primitives for the specification of interactions in the virtual environment.

The Visorama system has many applications in areas such as education, tourism, real estate and entertainment, among others. As the system is developed, we intend to create an application for exploring the city of Rio de Janeiro. The users will be presented with panoramas of the city from the Corcovado hill, from which most of the city is visible. They will be guided to the most important parts of the city, will be able to zoom into regions of greater interest, and will eventually be taken to panoramas of other parts of the city. In addition, through the use of image processing on old photographs and modeling programs, panoramas will be created that represent parts of the city in the past. All the navigation will be guided by audio narration, with explanations and suggestions about where to go.
As the technology of image-based rendering evolves, we expect that this basic system architecture can be extended to provide even greater interaction capabilities with virtual panoramas.

Acknowledgments: The Visorama project is sponsored by the Conselho Nacional de Pesquisa (CNPq), Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ), and Fundação José Bonifácio (FUJB).

References

[1] R. S. Kalawsky. The Science of Virtual Reality. Addison-Wesley, Wokingham, England, 1993.
[2] L. McMillan, G. Bishop. Plenoptic Modeling: An Image-Based Rendering System. SIGGRAPH 95 Proceedings, 39-46, August 1995.
[3] S. J. Gortler, R. Grzeszczuk, R. Szeliski, M. F. Cohen. The Lumigraph. SIGGRAPH 96 Proceedings, 43-54, August 1996.
[4] M. Levoy, P. Hanrahan. Light Field Rendering. SIGGRAPH 96 Proceedings, 31-42, August 1996.
[5] S. E. Chen. QuickTime VR - An Image-Based Approach to Virtual Environment Navigation. Computer Graphics, Annual Conference Series, 29-38, 1995.
[6] N. Greene. Environment Mapping and Other Applications of World Projections. IEEE Computer Graphics and Applications, 6(11):21-29, November 1986.
[7] J. D. Foley, A. van Dam, S. K. Feiner, J. F. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley, Reading, MA, 1990.
[8] B. Comment. Le XIXe Siècle des Panoramas. Adam Biro, Paris, 1993.
[9] S. Eun, E. S. No, H. C. Kim, H. Yoon, S. R. Maeng. Eventor: an Authoring System for Interactive Multimedia Applications. Multimedia Systems, 2, 1994.
[10] L. Hardman, G. van Rossum, D. C. A. Bulterman. Structured Multimedia Authoring. Proceedings of ACM Multimedia 93, 1993.


More information

LOOKING AHEAD: UE4 VR Roadmap. Nick Whiting Technical Director VR / AR

LOOKING AHEAD: UE4 VR Roadmap. Nick Whiting Technical Director VR / AR LOOKING AHEAD: UE4 VR Roadmap Nick Whiting Technical Director VR / AR HEADLINE AND IMAGE LAYOUT RECENT DEVELOPMENTS RECENT DEVELOPMENTS At Epic, we drive our engine development by creating content. We

More information

Virtual Reality I. Visual Imaging in the Electronic Age. Donald P. Greenberg November 9, 2017 Lecture #21

Virtual Reality I. Visual Imaging in the Electronic Age. Donald P. Greenberg November 9, 2017 Lecture #21 Virtual Reality I Visual Imaging in the Electronic Age Donald P. Greenberg November 9, 2017 Lecture #21 1968: Ivan Sutherland 1990s: HMDs, Henry Fuchs 2013: Google Glass History of Virtual Reality 2016:

More information

Direct Manipulation. and Instrumental Interaction. CS Direct Manipulation

Direct Manipulation. and Instrumental Interaction. CS Direct Manipulation Direct Manipulation and Instrumental Interaction 1 Review: Interaction vs. Interface What s the difference between user interaction and user interface? Interface refers to what the system presents to the

More information

A Multimodal Locomotion User Interface for Immersive Geospatial Information Systems

A Multimodal Locomotion User Interface for Immersive Geospatial Information Systems F. Steinicke, G. Bruder, H. Frenz 289 A Multimodal Locomotion User Interface for Immersive Geospatial Information Systems Frank Steinicke 1, Gerd Bruder 1, Harald Frenz 2 1 Institute of Computer Science,

More information

Time-Lapse Panoramas for the Egyptian Heritage

Time-Lapse Panoramas for the Egyptian Heritage Time-Lapse Panoramas for the Egyptian Heritage Mohammad NABIL Anas SAID CULTNAT, Bibliotheca Alexandrina While laser scanning and Photogrammetry has become commonly-used methods for recording historical

More information

A Very High Level Interface to Teleoperate a Robot via Web including Augmented Reality

A Very High Level Interface to Teleoperate a Robot via Web including Augmented Reality A Very High Level Interface to Teleoperate a Robot via Web including Augmented Reality R. Marín, P. J. Sanz and J. S. Sánchez Abstract The system consists of a multirobot architecture that gives access

More information

Fast Focal Length Solution in Partial Panoramic Image Stitching

Fast Focal Length Solution in Partial Panoramic Image Stitching Fast Focal Length Solution in Partial Panoramic Image Stitching Kirk L. Duffin Northern Illinois University duffin@cs.niu.edu William A. Barrett Brigham Young University barrett@cs.byu.edu Abstract Accurate

More information

AR 2 kanoid: Augmented Reality ARkanoid

AR 2 kanoid: Augmented Reality ARkanoid AR 2 kanoid: Augmented Reality ARkanoid B. Smith and R. Gosine C-CORE and Memorial University of Newfoundland Abstract AR 2 kanoid, Augmented Reality ARkanoid, is an augmented reality version of the popular

More information

Application Areas of AI Artificial intelligence is divided into different branches which are mentioned below:

Application Areas of AI   Artificial intelligence is divided into different branches which are mentioned below: Week 2 - o Expert Systems o Natural Language Processing (NLP) o Computer Vision o Speech Recognition And Generation o Robotics o Neural Network o Virtual Reality APPLICATION AREAS OF ARTIFICIAL INTELLIGENCE

More information

Interactive Simulation: UCF EIN5255. VR Software. Audio Output. Page 4-1

Interactive Simulation: UCF EIN5255. VR Software. Audio Output. Page 4-1 VR Software Class 4 Dr. Nabil Rami http://www.simulationfirst.com/ein5255/ Audio Output Can be divided into two elements: Audio Generation Audio Presentation Page 4-1 Audio Generation A variety of audio

More information