Symmetric Model of Remote Collaborative Mixed Reality Using Tangible Replicas


Shun Yamamoto (Keio University), Yuichi Bannai (Canon Inc.), Hidekazu Tamaki (Keio University), Yuta Okajima (Keio University), Kenichi Okada (Keio University)

Abstract— Research into collaborative mixed reality (MR) or augmented reality has recently been active. Previous studies showed that MR was preferred for collocated collaboration, while immersive virtual reality was preferred for remote collaboration. The main reason for this preference is that a physical object in a remote space cannot be handled directly. However, MR using tangible objects is still attractive for remote collaborative systems, because MR enables seamless interaction with real objects enhanced by virtual information, together with the sense of touch. Here we introduce tangible replicas (dual objects that have the same shape, size, and surface) and propose a symmetric model for remote collaborative MR. The results of our experiments show that pointing and drawing functions on the tangible replica work well despite the limited shared information.

I. INTRODUCTION

In collocated collaborative systems, it is easy to share both physical objects and non-verbal information such as gaze or gesture between the users, because the users exist in the same real space. An early collocated collaborative augmented reality (AR) system is AR2 Hockey [13], developed by the MR Systems Laboratory in 1998, in which two users with head mounted displays (HMDs), seated on opposite sides of a table, hit a virtual puck using real mallets. Kato et al. [8] developed a system where multiple users can play cards on which virtual objects are overlaid. Kiyokawa et al. [9] evaluated collocated AR interfaces under different conditions of display type and setting, finding that the visibility of non-verbal cues (e.g., gaze and gesture) has a considerable effect on communication.

In the 1980s, collocated collaboration systems such as meeting support systems were developed; for example, Colab [18] shared 2D information between a desktop PC and a projector. Collocated collaborative AR enables the handling of 3D information associated with real objects in real space.

In the case of remote collaboration, in contrast to collocated settings, neither real objects nor real spaces can be shared at the same time. As a result, most object sharing mechanisms between remote sites have been provided by a collaborative virtual environment (CVE) using virtual objects [14]. Early studies (e.g., [6][16]) insisted on the importance of force feedback in virtual environments (VEs). They showed that sensory information such as visual and haptic feedback is a key source of information when acquiring and manipulating objects. However, incorporating rich interactive graphics and haptic feedback into virtual environments is costly, both in terms of computing cycles and equipment purchases [12]. The tangible AR interface is an approach designed to overcome these problems: physical objects support collaboration by their appearance, their physical affordances, and the sense of touch.

The easiest way to use physical objects between remote sites is to represent a real object set at the local site as a virtual object in the remote space. However, this asymmetric scheme may create a dual ecology problem [10]: that is, each user faces different manipulation conditions for the physical and the virtual object. We introduce here the concept of tangible replicas and propose a collaborative MR system mediated by the replicas, where the users hold and interact with them.
The system also provides the same manipulation conditions for each user due to the symmetry of the system.

II. RELATED WORK

Studierstube [15] is a multi-user AR/MR environment that enables the display of 3D objects in real space. In a collocated situation, multiple users wearing HMDs gather in a room and interact with the 3D objects. The personal interaction panel, a two-handed interface composed of a pen and a pad, both fitted with magnetic trackers, is used to control the application. Although a distributed execution mechanism is provided for remote users, the system cannot associate virtual objects with real objects or real spaces between the remote sites.

Real World Teleconferencing [2] is an AR/MR conferencing system where remote collaborators are represented as live video images or virtual avatars that are overlaid onto a set of small marked cards that can be freely positioned about a user in space. The user with the AR interface wears an HMD, while his/her counterpart sits in front of a desktop monitor. A shared virtual whiteboard is displayed on another marked card at the AR user's site and in a window at the desktop user's site. Billinghurst et al. developed a remote work assistance system named Block Party [3] that allows a remote expert to assist a user in building a real model out of plastic bricks. The remote expert is seated at a desktop terminal, while the block builder wears a see-through HMD with a video camera. The expert manipulates the 3D virtual model of the target object on the screen to show how to construct the object. The block builder has a 3D view of the virtual model floating close to the real workspace.

Both Real World Teleconferencing and Block Party are asymmetric systems that consist of two different interfaces: an AR interface and a desktop interface. The asymmetry may cause confusion between the users due to the existence of a dual ecology. Moreover, there are seams between the windows on the screen that impede intuitive operation for the desktop user.

Another trial that uses tangible objects for remote collaboration is Distributed Designers' Outpost [5]. This system is a collaborative web site design tool that employs physical Post-it notes as interaction primitives on an electronic whiteboard. The location information and data on each Post-it are captured by two cameras, transformed into electronic data, and displayed as a virtual Post-it on a whiteboard at the remote site. Although the movement of the physical Post-it is detected and displayed on the other whiteboard, the user must move it by hand when the corresponding virtual Post-it is moved by the counterpart, in order to avoid inconsistency.

A number of studies showed that force feedback in a VE improved performance on a number of tasks and substantially reduced errors. For example, Arsenault et al. [1] showed that a task requiring the coordination of visual and haptic feedback took 12% less time than with visual feedback alone. Lok et al. [11] concluded that interacting with real objects significantly improves task performance over interacting with virtual objects in spatial cognitive tasks, and that handling real objects makes task performance and interaction in the VE more like the actual task.

In the following, we list some collaboration systems that use real objects between remote sites. Psybench [4] synchronizes distributed objects to provide a generic shared physical workspace across distance. It is constructed from two augmented and connected motorized chessboards. Positions of the chess pieces on a 10 x 8 grid are sensed by an array of membrane switches. The pieces have magnetic bases so they can be moved using an electromagnet placed on a 2-axis positioning mechanism under the surface. Although the system provides a symmetrical tangible user interface between the remote sites, an actuator mechanism is needed to move the real objects at each site. In-Touch [4] is a tele-haptic communication system consisting of two hand-sized devices with three cylindrical rollers embedded within a base. The rollers on each base are haptically coupled such that each one feels as if it were physically linked to its counterpart on the other base. To achieve simultaneous manipulation, In-Touch employs bilateral force feedback technology, using position sensors and high-precision motors. Although it is interesting that the system provides a means for expression through touch, it is not designed to support specific cooperative work.

III. CONCEPTUAL MODEL
A. Remote Collaboration Model

Table I shows the main features of three remote collaboration models: Media Space, Immersive CVE, and remote MR using a tangible user interface (TUI). In Media Space, awareness information can be obtained from video images, and the user cannot change his/her viewpoint. In Immersive CVE, by contrast, awareness information can be taken from the avatars of the counterpart, and the user's viewpoint is controllable using computer graphics (CG) techniques. In the case of MR with a TUI, viewpoint controllability, the construction of the workspace, and the functions available for the objects depend on the setting. In this paper, the element of our MR-with-TUI model is a tangible object enhanced by virtual information in a real space at each of two sites.

TABLE I
MAIN FEATURES OF REMOTE COLLABORATIVE SYSTEMS

Model         | Awareness   | Viewpoint    | Seamlessness                                                                               | Physical Object | Functions (Objects)
Media Space   | Video image | Fixed        | Overlaid image of user's body and shared whiteboard (2D screens)                           | Not available   | Pointing and drawing
Immersive CVE | Avatar      | Controllable | Seamless space including 3D virtual objects and 3D virtual environments (3D objects)       | Not available   | Pointing and manipulation (modification)
Remote MR     | Avatar      | Controllable | Seamless space including 3D virtual objects and the real environment (3D tangible object)  | Available       | Pointing and drawing

Early Media Space systems, such as MERMAID [20], did not provide a seamless interface; the screen was divided into a talking-head window and a whiteboard window. ClearBoard [7] realized a seamless interface by overlapping the talking-head image onto the whiteboard image. In Immersive CVE, a seamless 3D space consisting of virtual objects and a virtual environment can easily be constructed. In the case of MR with a TUI, a seamless space including both tangible and virtual objects in the real environment can be achieved.

In terms of object manipulation, Media Space systems provide tele-pointing and drawing functions on a 2D screen, while Immersive CVE enables manipulation of 3D objects as well as pointing. The functions for the objects in MR systems with a TUI are limited to pointing and drawing on the 2D surface of the tangible objects, since changing the properties of physical objects is difficult.

B. Remote MR Model with a TUI

In our remote MR model, we choose a base physical object in the real space, on which the world coordinate system is set. The base physical object may be a plane such as a floor, a table, or a monitor screen (e.g., in the case of Distributed Designers' Outpost [5]). The workspace is then set on the base object at each site for the collaboration, as shown in Figure 1-(1). We define the two spaces as equivalent when the physical structure of the workspaces is the same; a pair of tables of the same size is an appropriate example of equivalent spaces. Under this condition each user can move around and work with the objects in the same way. The counterpart is usually displayed as an avatar at the remote site, and the shared object, represented by a virtual object, can be manipulated at each site (Figure 1-(2)). When we put a tangible object at site A, as shown in Figure 1-(3), the virtual object corresponding to it is displayed (as a gray oval) at site B. This causes asymmetry between the two spaces. For example, the position of the virtual object at site B changes when the tangible object at site A is moved, while the real object at site A cannot change its position as a result of movement of the virtual object at site B without an actuator mechanism. In Figure 1-(4), we replace the virtual object with a real object at site B such that each user has the real object independently. It is assumed that the real object at site A has the same size, shape, and surface as that at site B; we define these objects as a set of tangible replicas. Although the model in Figure 1-(4) is symmetric, the problem that the position of the real object at the remote site cannot be changed still remains.

Fig. 1. A Remote MR Model with Tangible Objects

C. World and Object Coordinate Systems

To simplify the problem, we show the symmetric model in Figure 2, where each site consists of a tangible replica, a local stylus, shared CG texture on the replica, and a remote pointer represented virtually. In CG systems, an object represented in object coordinates is transformed into world coordinates in order to create a view from an arbitrary camera position. Since the remote replica cannot be moved directly, we move the remote pointer instead of the replica, keeping its relative position. Therefore, we use the object coordinate system of the replica to represent the remote pointer, and transform the object coordinates into world coordinates to create the local view (a code sketch of this transform appears at the end of this section).

Fig. 2. World and Object Coordinate Systems

As a result, the remote pointer moves in the following cases: when the counterpart moves his stylus, his replica, or both at the same time; and when the local user moves his own replica. The movement of the pointer is observed by the user as a composition vector of the motion created by his replica and the motion of the counterpart's pointer and replica. The user cannot tell whether the counterpart is moving the pointer or the replica. This model is especially effective when the replica is a portable object rather than one fixed on a table, because the latter case can be managed in the world coordinate system alone. However, only one replica per site can be handled at a time in this model.

D. Use Case

Figure 3 shows a use case of remote collaboration using tangible replicas. Users A and B, at different sites, each have a tangible replica of a plain white mug. Each can paint texture on the cup with a stylus while wearing an HMD. A line is drawn as a CG object on the surface of the replica where the stylus touches. Both A and B can draw and erase while moving their replicas independently, and the results of the operations are displayed and shared between them.
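To make the pointer handling of Section III-C concrete, the following minimal sketch (our illustration, not the authors' code; numpy and all names here are assumptions) stores the remote pointer in the replica's object coordinate system and maps it through the local replica's modeling matrix, so that local replica motion and remote stylus motion compose into one observed pointer motion:

```python
import numpy as np

def translation(x, y, z):
    """Build a 4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Pose (modeling matrix) of the LOCAL replica in the local world
# coordinate system, updated each frame from its 6-DOF sensor.
M_local_replica = translation(0.10, 0.00, 0.05)

# Remote stylus tip received from the counterpart, expressed in the
# shared object coordinate system of the replica (Section III-C).
p_remote_object = np.array([0.00, 0.03, 0.06, 1.0])

# Object -> world transform gives where to draw the remote pointer.
# Moving the local replica (M) or the remote stylus (p) both move the
# rendered pointer: the user sees the composition of the two motions.
p_local_world = M_local_replica @ p_remote_object
print(p_local_world[:3])  # -> [0.10 0.03 0.11]
```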

Fig. 3. A Use Case of Tangible Replicas

Fig. 4. The Scene Graph of the Virtual Object

Although the texture and pointer data are shared, it is unnecessary to exchange information such as the location and orientation of the replica between the users. Therefore, user B's replica and its view are not affected by the motion of user A's replica, and vice versa. A main feature of the system is that the users can share the tangible object overlapped with the texture while maintaining the consistency of each user's view.

E. Implementation

F. Virtual Objects in Object Coordinate Systems

In order to synchronize the virtual objects shared between the sites, the parameters of the manipulated object at site A are sent to site B, and vice versa; the objects are simultaneously displayed using the same parameters in the object coordinate system of each site. In Figure 3, each virtual object (the pointer and the texture) is expressed by $S_{oa} = [x_{oa}, y_{oa}, z_{oa}, 1]^T$, where $S_{oa}$ is the set of location parameters in the object coordinate system of site A, and $S_{wa}$ is the set of location parameters in the world coordinate system of site A. The transformation from object coordinates to world coordinates is calculated as $S_{wa} = M_a S_{oa}$ using the modeling transformation matrix $M_a$, a $4 \times 4$ homogeneous matrix. Since the virtual objects are managed in world coordinates at each site, we obtain the object-coordinate parameters by the inverse transformation $S_{oa} = M_a^{-1} S_{wa}$. Since a virtual object is shared at the same object coordinates at each site, we can set $S_{oa} = S_{ob}$, where $S_{ob}$ denotes the object coordinates at site B. After receiving $S_{ob}$ from site A, the system at site B transforms $S_{ob}$ into its own world coordinates using $S_{wb} = M_b S_{ob}$, where $M_b$ is the modeling transformation matrix of site B, and displays the object at $S_{wb}$.
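The round trip of Section F can be sketched as follows; this is a minimal illustration of the paper's equations, where numpy, the function names, and the numeric poses are our assumptions:

```python
import numpy as np

def to_object_coords(S_w, M):
    """Sender side: S_o = M^-1 S_w (Section F); S_o is transmitted."""
    return np.linalg.inv(M) @ S_w

def to_world_coords(S_o, M):
    """Receiver side: S_w = M S_o; the object is displayed at S_w."""
    return M @ S_o

# The modeling matrices of the replica differ between the sites,
# because each user holds his replica in a different pose.
M_a = np.eye(4); M_a[:3, 3] = [0.10, 0.00, 0.00]  # replica pose, site A
M_b = np.eye(4); M_b[:3, 3] = [0.00, 0.20, 0.00]  # replica pose, site B

S_wa = np.array([0.12, 0.03, 0.05, 1.0])  # pointer in site A world coords
S_ob = to_object_coords(S_wa, M_a)        # shared coordinates: S_oa = S_ob
S_wb = to_world_coords(S_ob, M_b)         # where site B displays it
print(S_ob[:3], S_wb[:3])                 # -> [0.02 0.03 0.05] [0.02 0.23 0.05]
```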
G. Texture

In order to overlap virtual objects on the replica correctly, it is necessary to obtain the 3D model of the replica beforehand. The user can draw a line when his/her stylus touches the replica, and can change the color of the drawing by pressing the button on the stylus. We covered the replica with a grid of 1 x 1 mm transparent squares, so that the pixel size of the drawings is 1 mm^2. As shown in Figure 4, each pixel is represented by a part of the scene graph. The scene graph has a tree structure consisting of several nodes, one of which switches among the nodes corresponding to the colors; the switch node changes the active color node to the requested one when the system receives the user's request.

H. Synchronization of the Shared Objects

Modifications of object status caused by user manipulation, e.g., movement of the replica or the stylus, or drawing on the replica, must be updated at the remote site as well as the local site. We use the virtual object management table shown in Table II, which stores the shared virtual objects whose state can change. When the state of an object changes, its flag is set and the other corresponding data in Table II is updated. This data is sent to the other site, and the flag is reset by a background loop program that periodically checks for updates.

TABLE II
VIRTUAL OBJECT MANAGEMENT TABLE
Columns: Virtual Object ID | Virtual Object Name | Flag | Type of Change | Amount of Change

The system at the receiver site updates the object data for display. This process is executed periodically at each site in order to synchronize the shared objects. In the case of the drawing function, the pixel node ID and the color ID are stored in the Virtual Object ID and Amount of Change columns, respectively.
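A sketch of how the per-pixel switch nodes of Section G and the management table of Table II might fit together; the class layout, field types, and the in-process stand-in for the network are our assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class TableEntry:
    """One row of the virtual object management table (Table II)."""
    object_id: int      # e.g., the pixel node ID for a drawing update
    object_name: str
    flag: bool          # set when the local state of the object changes
    change_type: str    # e.g., "color"
    change_amount: int  # e.g., the color ID

class PixelSwitchNode:
    """Scene-graph leaf for one 1 mm^2 pixel: a switch node that
    selects among its color nodes (Section G)."""
    COLORS = ["transparent", "white", "red"]

    def __init__(self, node_id: int):
        self.node_id = node_id
        self.active = 0  # index into COLORS; 0 = transparent (erased)

def sync_loop_pass(table, send):
    """One pass of the background loop (Section H): ship flagged
    entries to the remote site, then reset their flags."""
    for entry in table:
        if entry.flag:
            send(entry)       # transmit to the counterpart site
            entry.flag = False

def on_remote_update(entry, pixels):
    """Receiver side: apply a received change to the local scene graph."""
    if entry.change_type == "color":
        pixels[entry.object_id].active = entry.change_amount

# Local demo standing in for the two networked sites: drawing red on
# pixel 7 sets its flag; the loop delivers it to the "remote" graph.
pixels = {7: PixelSwitchNode(7)}
table = [TableEntry(7, "pixel-7", True, "color", 2)]
sync_loop_pass(table, lambda e: on_remote_update(e, pixels))
print(PixelSwitchNode.COLORS[pixels[7].active])  # -> red
```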

I. System Configuration

Figure 5 shows the system configuration. The video see-through HMD (Canon VH2002) is equipped with a pair of NTSC video cameras and a pair of VGA LCDs; its horizontal view angle is 51 degrees and its weight is 450 g. Polhemus FASTRAK sensor receivers attached to the HMD and the stylus provide 6-degree-of-freedom (DOF) position and orientation parameters. A receiver of the same type is also fixed on the replica.

Fig. 5. System Configuration

The MR Platform [19] generates the CG image from the viewpoint of the HMD using the parameters from the sensor receivers. A marker registration for the HMD that compensates for sensor registration error is performed in order to precisely overlap the CG image onto the real object. Two video boards within the PC capture the video output from the right and left cameras and send the composed image of video and CG to the right and left display monitors, respectively. The specifications of the PCs are as follows: CPU = Pentium 4 3.4 GHz (PC1) and Pentium 4 2.4 GHz (PC2); RAM = 1 GB; graphics board = NVIDIA GeForce4; OS = Red Hat Linux 9. The system configuration is identical between sites A and B, as shown in Figure 5. The handling of virtual objects at each site is managed by the MR Platform, while synchronization is controlled by the virtual object management units using Table II.

J. Prototype System

We implemented a prototype system. Each user holds a white cube replica in one hand and the stylus in the other. The size of the cube is 5 cm x 5 cm x 5 cm, so the cube has 15,000 pixels on its surface. Each pixel has three color nodes, transparent, white, and red, so the user can draw in two colors (white and red) and erase (transparent). The user selects a color by pushing the button on the stylus, and the current color is displayed on the tip of the stylus as a virtual sphere. The counterpart's stylus is displayed as a virtual cone, keeping its relative position and orientation with respect to the cube at the remote user's side. Figure 6 shows the prototype system: the left side of the picture shows the view without the HMD, and the right side shows the view through the HMD.

Fig. 6. Prototype System
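As a quick arithmetic check of the 15,000-pixel figure quoted above:

```python
# 5 cm cube with 1 mm^2 drawing pixels (Sections G and J):
# each face is 50 x 50 = 2,500 pixels, and six faces give 15,000.
side_mm = 50
print(6 * side_mm * side_mm)  # -> 15000
```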
IV. EVALUATION

We focused the evaluation on the pointing and drawing functions, investigating the following questions:
1) Can the pointer movement and the pointing position be recognized correctly, given that the motion of the pointer is displayed as a composite vector of the three movements described above?
2) Under which condition is the performance of the pointing task better: the portable condition, where both the world and object coordinate systems are used, or the fixed condition, where the replica is fixed on the table and only the world coordinate system is used?
3) Does the drawing function on the replica work well between remote users? We expected that collaboration could be established even when the awareness information is limited to the pointer.

The following three experiments were conducted. We chose a mutual pointing task for the Preliminary Experiment, in which the correct answer rate for the pointed position and the pointing time were measured. In Experiment 1, we hypothesized that pointing performance under the fixed condition would be better than under the portable condition, since the movement of the pointer may become more complicated in the latter case, and a previous study [17] showed that performance under observer movement around the object was better than under object rotation. In Experiment 2, we created a game similar to tick-tack-toe using a physical cube, and asked the subjects to play it under a remote condition using a pair of replicas and under a collocated condition using one physical cube.

A. Conditions of the Preliminary Experiment and Experiment 1

Twelve subjects (10 males and 2 females, aged 20 to 25) were divided into six pairs. In the pointing task, one subject became the indicator and the other played the role of the responder, each wearing an HMD. The systems of the indicator and the responder were set up in the same room. Each workspace was partitioned so that the other subject could not see it. Communication between the subjects was by voice.

The replica used in the experiments was a 12 cm cube whose surface was divided into a 3 x 3 mesh of 4 x 4 cm squares. Numbers from 01 to 45, generated by CG, were randomly overlaid on the mesh squares of the surfaces other than the bottom. The average frame rate of the HMD display was 26.3 frames/sec. No delay was observed during the experiments.

B. Preliminary Experiment

This experiment aimed to investigate whether mutual pointing can be accomplished correctly; that is, whether the indicator can point to the target as he wishes and the responder can correctly read the number indicated. Each subject pointed to an arbitrary number with the stylus in one hand while holding the replica in the other. The responder, who sat in front of the table, responded to the number by moving his replica so that he could trace the pointer. The indicator permitted the responder to answer by saying "OK" when he pointed to the number. When he heard the responder's answer, the subjects changed roles. This task was repeated five times, and the total time was measured. In the screenshot of Figure 7, the right pointer is local and the left pointer is remote.

Fig. 7. A Screenshot of the Preliminary Experiment

The results of 60 pointing trials by the six pairs showed that the average time from pointing to response was 3.6 sec, with a standard deviation of 0.33 sec; the correct answer rate was 100%. We observed that the responder could trace the pointer and correctly determine the number without trouble, even though the motion of the pointer was displayed as a composition vector of three movements.

C. Experiment 1

Since the pointing task in the preliminary experiment was accomplished successfully, we conducted another experiment comparing the pointing and response times between the portable condition and the fixed condition. In this experiment, the roles within each pair of subjects were fixed: one was solely the indicator while the other was solely the responder. The indicator pointed to a number on each of the five surfaces of the cube other than the bottom (i.e., he pointed to five locations). The other conditions were similar to those of the preliminary experiment. In the fixed condition, the cube was fixed on the table with three surfaces in the subject's view, so both the indicator and the responder had to move their upper bodies in order to see the numbers on the back surfaces of the cube.

The pointing time was taken as the duration from the moment the indicator began to point until he said "OK" just after fixing his pointer. The response time was measured from the end of the pointing until the responder said the number. Figure 8 shows the average pointing and response times per point. The average pointing time under the fixed condition was 3.7 sec (standard deviation (sd): 1.4 sec), while that under the portable condition was 2.7 sec (sd: 0.6 sec). The average response time under the former condition was 2.0 sec (sd: 0.6 sec), and under the latter 1.6 sec (sd: 0.4 sec). The correct answer rate was 100% in each case.

Fig. 8. The Average Time of Pointing and Response

We tested the differences in average pointing time and response time between the two conditions using t-tests. The t value for the pointing time was $T_p = 2.25 > t(22, 0.05)$, and that for the response time was $T_r = 2.07 > t(22, 0.05)$. The differences in average time between the two conditions were thus significant at the 5% level, in favor of the portable condition. Therefore, hypothesis 1 is rejected.
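As a rough consistency check of the reported statistic, assuming a two-sample t with equal group sizes (n = 12 per condition, matching the 22 degrees of freedom):

```python
from math import sqrt

# Pointing times: fixed 3.7 s (sd 1.4) vs portable 2.7 s (sd 0.6).
n, m1, s1, m2, s2 = 12, 3.7, 1.4, 2.7, 0.6
t = (m1 - m2) / sqrt((s1**2 + s2**2) / n)
print(round(t, 2))  # -> 2.27, close to the reported T_p = 2.25
```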

D. Experiment 2

We created an extended tick-tack-toe game using a set of cubes, each of whose surfaces is divided by a 3 x 3 mesh, as shown in Figure 9. Each user in turn puts a piece on a mesh square, aiming to make a line of five consecutive pieces in a row or a column. The user must use at least two surfaces to win.

Fig. 9. Extended Tick-Tack-Toe Game

Five pairs of subjects participated in the experiment under two conditions: the collocated condition, where the pair sits across the table from each other using a 15 cm cube, and the remote condition, where each subject wears an HMD and holds a 5 cm replica. In the collocated condition, the subject paints a circle on a mesh square of the cube with a pen and hands the cube to his/her counterpart to change turns, whereas in the remote condition, he/she touches a mesh square with the stylus so that a piece is put on it. The ratio of the cube sizes (15:5) was determined from the ratio of the human view angle (136 degrees) to the view angle of the HMD (51 degrees).

Figure 10 shows a player's view through the HMD: the white cone with the red sphere at its tip is the player's stylus, and the black cone is the opposing player's stylus. Figure 11 shows the experiment scenery: seen from behind, the player simply holds a white cube and a stylus while wearing the HMD.

Fig. 10. Player's View

Fig. 11. Experiment Scenery

The number of turns per minute was used as the performance measure, since a game took over ten minutes on average; the more turns, the more actively and efficiently the subjects played the game. Figure 12 shows the average number of turns per minute for the five pairs under both conditions. We found no significant difference here, $T_t = 1.02 < t(8, 0.05)$, although the average number under the collocated condition was lower than that under the remote condition.

Fig. 12. The Average Number of Turns per Minute

After the game was over, the subjects were asked to complete a questionnaire, answering on a scale of 1 ("Disagree") to 5 ("Agree") while comparing the two conditions:
Q1: I had enough time to think of my next placement during my partner's turn.
Q2: I could easily understand what my partner was doing during his/her turn.
The average score under each condition is shown in Figure 13. There was a highly significant difference between the two conditions for each question, with t values of $T_{q1} = 4.06 > t(18, 0.01)$ and $T_{q2} = 6.59 > t(18, 0.01)$.

Fig. 13. The Average Scores of the Questionnaire

E. Discussion

From the preliminary experiment, we found that subjects could correctly recognize the pointed position in our model, which shows the relative view of the replica and the pointer; from Experiment 1, we found that subjects accomplished the pointing task more efficiently under the portable condition than under the fixed condition. Two observations may explain this: the movement of the pointer is not so complicated that it cannot be traced in a short time, and under the fixed condition the subject had to physically move in order to see the surfaces hidden from view. Indeed, we observed from the recorded video that subjects turned the replica in the direction opposite to the pointer movement, so that the pointer always remained in their field of view.

In Experiment 2, the performance measure of the game under the remote condition was as good as that under the collocated condition. We also found from the questionnaire that subjects appreciated being able to set an arbitrary viewpoint independently of their partner.

V. FUTURE WORK

The number of tangible objects that can be shared simultaneously is limited to one in the proposed system. It is very difficult to handle two or more tangible objects, since the relative position of the objects would have to be changed by a mechanism such as an actuator whenever either object moves. As a practical solution, users can pick one object from a group of objects by negotiating the target object, handling them one at a time.

Another consideration is to relax the restrictions on the replica so that tangible objects of different size and/or shape can be handled. For example, we could share objects with the same shape but different sizes (e.g., a miniature and a real object). Even when the objects have different shapes, users can share them using the proposed method if each point on one object corresponds to a point on the other. Another issue to be discussed is how to display shared virtual objects that are independent of the replica, such as the avatar or other objects in the environment, since the movement of the replica may cause much more frequent and wider movement of the shared object.

VI. CONCLUSION

We have proposed a symmetric MR collaboration model that enables users to share and interact seamlessly with objects, with the feeling of touch, using tangible replicas. In our model, shared objects are managed both in the object coordinate system based on the replica and in the world coordinate system. The results of the experiments showed that the pointing task could be accomplished correctly without problems, and that pointing performance under the portable replica condition was more efficient than under the fixed replica condition. Although there are some limitations on the tangible objects in the present system, we believe that our tangible MR interface can extend the range of applications of remote collaboration systems.

REFERENCES

[1] R. Arsenault and C. Ware. Eye-hand co-ordination with force feedback. In Proceedings of CHI 2000.
[2] M. Billinghurst and H. Kato. Novel collaborative paradigms: Real world teleconferencing. In Extended Abstracts of CHI 99.
[3] M. Billinghurst, E. Miller, and S. Weghorst. Collaboration with Wearable Computers. Lawrence Erlbaum Associates.
[4] S. Brave, H. Ishii, and A. Dahley. Tangible interfaces for remote collaboration and communication. In Proceedings of CSCW 98.
[5] K. M. Everitt, S. R. Klemmer, R. Lee, and J. Landay. Two worlds apart: Bridging the gap between physical and virtual media for distributed design collaboration. In Proceedings of CHI 03.
[6] B. Hannaford, L. Wood, Guggisberg, D. McAffee, and H. Zack. Performance evaluation of a six-axis universal force-reflecting hand controller. In Proceedings of the 19th IEEE Conference on Decision and Control.
[7] H. Ishii and M. Kobayashi. ClearBoard: A seamless medium for shared drawing and conversation with eye contact. In Proceedings of CHI 92.
[8] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, and K. Tachibana. Virtual object manipulation on a table-top AR environment. In Proceedings of ISAR 2000.
[9] K. Kiyokawa, M. Billinghurst, S. E. Hayes, A. Gupta, Y. Sannohe, and H. Kato. Communication behaviors of co-located users in collaborative AR interfaces. In Proceedings of ISMAR 02.
[10] H. Kuzuoka, J. Kosaka, Y. Yamazaki, Y. Suga, A. Yamazaki, P. Luff, and C. Heath. Gesturing, moving and talking together: Mediating dual ecologies. In Proceedings of CSCW 04.
[11] B. Lok and S. Naik. Effects of handling real objects and self-avatar fidelity on cognitive task performance and sense of presence in virtual environments. Presence, 12(6).
[12] A. H. Mason, M. A. Walji, E. J. Lee, and C. L. MacKenzie. Reaching movements to augmented and graphic objects in virtual environments. In Proceedings of CHI 01.
[13] T. Ohshima, K. Satoh, H. Yamamoto, and H. Tamura. AR2 Hockey: A case study of collaborative augmented reality. In Proceedings of VRAIS 98.
[14] O. Otto, D. Roberts, and R. Wolf. A review on effective closely-coupled collaboration using immersive CVEs. In Proceedings of VRCIA 06.
[15] D. Schmalstieg, A. Fuhrmann, and G. Hesina. Bridging multiple user interface dimensions with augmented reality. In Proceedings of ISAR 2000.
[16] T. B. Sheridan. Telerobotics, Automation and Human Supervisory Control. MIT Press.
[17] D. H. Shin, P. S. Dunston, and X. Wang. View changes in augmented reality computer-aided-drawing. ACM Transactions on Applied Perception, 2(1):1-14.
[18] M. Stefik, G. Foster, D. G. Bobrow, K. Kahn, S. Lanning, and L. Suchman. Beyond the chalkboard: Computer support for collaboration and problem solving in meetings. Communications of the ACM, 30(1):32-47.
[19] S. Uchiyama, K. Takemoto, K. Satoh, H. Yamamoto, and H. Tamura. MR Platform: A basic body on which mixed reality applications are built. In Proceedings of ISMAR 02.
[20] K. Watabe, S. Sakata, K. Maeno, H. Fukuoka, and T. Ohmori. Distributed multiparty desktop conferencing system: MERMAID. In Proceedings of CSCW 90, pages 27-38.

ACKNOWLEDGEMENTS

This research was supported in part by the Ministry of Internal Affairs and Communications, SCOPE.


More information

Motion Capturing Empowered Interaction with a Virtual Agent in an Augmented Reality Environment

Motion Capturing Empowered Interaction with a Virtual Agent in an Augmented Reality Environment Motion Capturing Empowered Interaction with a Virtual Agent in an Augmented Reality Environment Ionut Damian Human Centered Multimedia Augsburg University damian@hcm-lab.de Felix Kistler Human Centered

More information

MRT: Mixed-Reality Tabletop

MRT: Mixed-Reality Tabletop MRT: Mixed-Reality Tabletop Students: Dan Bekins, Jonathan Deutsch, Matthew Garrett, Scott Yost PIs: Daniel Aliaga, Dongyan Xu August 2004 Goals Create a common locus for virtual interaction without having

More information

Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms

Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms Published in the Proceedings of CHI '97 Hiroshi Ishii and Brygg Ullmer MIT Media Laboratory Tangible Media Group 20 Ames Street,

More information

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Huidong Bai The HIT Lab NZ, University of Canterbury, Christchurch, 8041 New Zealand huidong.bai@pg.canterbury.ac.nz Lei

More information

Eye-Hand Co-ordination with Force Feedback

Eye-Hand Co-ordination with Force Feedback Eye-Hand Co-ordination with Force Feedback Roland Arsenault and Colin Ware Faculty of Computer Science University of New Brunswick Fredericton, New Brunswick Canada E3B 5A3 Abstract The term Eye-hand co-ordination

More information

COLLABORATIVE VIRTUAL ENVIRONMENT TO SIMULATE ON- THE-JOB AIRCRAFT INSPECTION TRAINING AIDED BY HAND POINTING.

COLLABORATIVE VIRTUAL ENVIRONMENT TO SIMULATE ON- THE-JOB AIRCRAFT INSPECTION TRAINING AIDED BY HAND POINTING. COLLABORATIVE VIRTUAL ENVIRONMENT TO SIMULATE ON- THE-JOB AIRCRAFT INSPECTION TRAINING AIDED BY HAND POINTING. S. Sadasivan, R. Rele, J. S. Greenstein, and A. K. Gramopadhye Department of Industrial Engineering

More information

A Remote Communication System to Provide Out Together Feeling

A Remote Communication System to Provide Out Together Feeling [DOI: 10.2197/ipsjjip.22.76] Recommended Paper A Remote Communication System to Provide Out Together Feeling Ching-Tzun Chang 1,a) Shin Takahashi 2 Jiro Tanaka 2 Received: April 11, 2013, Accepted: September

More information

Chapter 1 - Introduction

Chapter 1 - Introduction 1 "We all agree that your theory is crazy, but is it crazy enough?" Niels Bohr (1885-1962) Chapter 1 - Introduction Augmented reality (AR) is the registration of projected computer-generated images over

More information

Recent Progress on Wearable Augmented Interaction at AIST

Recent Progress on Wearable Augmented Interaction at AIST Recent Progress on Wearable Augmented Interaction at AIST Takeshi Kurata 12 1 Human Interface Technology Lab University of Washington 2 AIST, Japan kurata@ieee.org Weavy The goal of the Weavy project team

More information

Ubiquitous Computing Summer Episode 16: HCI. Hannes Frey and Peter Sturm University of Trier. Hannes Frey and Peter Sturm, University of Trier 1

Ubiquitous Computing Summer Episode 16: HCI. Hannes Frey and Peter Sturm University of Trier. Hannes Frey and Peter Sturm, University of Trier 1 Episode 16: HCI Hannes Frey and Peter Sturm University of Trier University of Trier 1 Shrinking User Interface Small devices Narrow user interface Only few pixels graphical output No keyboard Mobility

More information

NAVIGATIONAL CONTROL EFFECT ON REPRESENTING VIRTUAL ENVIRONMENTS

NAVIGATIONAL CONTROL EFFECT ON REPRESENTING VIRTUAL ENVIRONMENTS NAVIGATIONAL CONTROL EFFECT ON REPRESENTING VIRTUAL ENVIRONMENTS Xianjun Sam Zheng, George W. McConkie, and Benjamin Schaeffer Beckman Institute, University of Illinois at Urbana Champaign This present

More information

- applications on same or different network node of the workstation - portability of application software - multiple displays - open architecture

- applications on same or different network node of the workstation - portability of application software - multiple displays - open architecture 12 Window Systems - A window system manages a computer screen. - Divides the screen into overlapping regions. - Each region displays output from a particular application. X window system is widely used

More information

The architectural walkthrough one of the earliest

The architectural walkthrough one of the earliest Editors: Michael R. Macedonia and Lawrence J. Rosenblum Designing Animal Habitats within an Immersive VE The architectural walkthrough one of the earliest virtual environment (VE) applications is still

More information

How Many Pixels Do We Need to See Things?

How Many Pixels Do We Need to See Things? How Many Pixels Do We Need to See Things? Yang Cai Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA ycai@cmu.edu

More information

EnhancedTable: Supporting a Small Meeting in Ubiquitous and Augmented Environment

EnhancedTable: Supporting a Small Meeting in Ubiquitous and Augmented Environment EnhancedTable: Supporting a Small Meeting in Ubiquitous and Augmented Environment Hideki Koike 1, Shin ichiro Nagashima 1, Yasuto Nakanishi 2, and Yoichi Sato 3 1 Graduate School of Information Systems,

More information

EnSight in Virtual and Mixed Reality Environments

EnSight in Virtual and Mixed Reality Environments CEI 2015 User Group Meeting EnSight in Virtual and Mixed Reality Environments VR Hardware that works with EnSight Canon MR Oculus Rift Cave Power Wall Canon MR MR means Mixed Reality User looks through

More information

Feelable User Interfaces: An Exploration of Non-Visual Tangible User Interfaces

Feelable User Interfaces: An Exploration of Non-Visual Tangible User Interfaces Feelable User Interfaces: An Exploration of Non-Visual Tangible User Interfaces Katrin Wolf Telekom Innovation Laboratories TU Berlin, Germany katrin.wolf@acm.org Peter Bennett Interaction and Graphics

More information

Theory and Practice of Tangible User Interfaces Tuesday, Week 9

Theory and Practice of Tangible User Interfaces Tuesday, Week 9 Augmented Reality Theory and Practice of Tangible User Interfaces Tuesday, Week 9 Outline Overview Examples Theory Examples Supporting AR Designs Examples Theory Outline Overview Examples Theory Examples

More information

Immersive Simulation in Instructional Design Studios

Immersive Simulation in Instructional Design Studios Blucher Design Proceedings Dezembro de 2014, Volume 1, Número 8 www.proceedings.blucher.com.br/evento/sigradi2014 Immersive Simulation in Instructional Design Studios Antonieta Angulo Ball State University,

More information

3D Modelling Is Not For WIMPs Part II: Stylus/Mouse Clicks

3D Modelling Is Not For WIMPs Part II: Stylus/Mouse Clicks 3D Modelling Is Not For WIMPs Part II: Stylus/Mouse Clicks David Gauldie 1, Mark Wright 2, Ann Marie Shillito 3 1,3 Edinburgh College of Art 79 Grassmarket, Edinburgh EH1 2HJ d.gauldie@eca.ac.uk, a.m.shillito@eca.ac.uk

More information

E90 Project Proposal. 6 December 2006 Paul Azunre Thomas Murray David Wright

E90 Project Proposal. 6 December 2006 Paul Azunre Thomas Murray David Wright E90 Project Proposal 6 December 2006 Paul Azunre Thomas Murray David Wright Table of Contents Abstract 3 Introduction..4 Technical Discussion...4 Tracking Input..4 Haptic Feedack.6 Project Implementation....7

More information

Evaluation of Visuo-haptic Feedback in a 3D Touch Panel Interface

Evaluation of Visuo-haptic Feedback in a 3D Touch Panel Interface Evaluation of Visuo-haptic Feedback in a 3D Touch Panel Interface Xu Zhao Saitama University 255 Shimo-Okubo, Sakura-ku, Saitama City, Japan sheldonzhaox@is.ics.saitamau.ac.jp Takehiro Niikura The University

More information

Welcome to this course on «Natural Interactive Walking on Virtual Grounds»!

Welcome to this course on «Natural Interactive Walking on Virtual Grounds»! Welcome to this course on «Natural Interactive Walking on Virtual Grounds»! The speaker is Anatole Lécuyer, senior researcher at Inria, Rennes, France; More information about him at : http://people.rennes.inria.fr/anatole.lecuyer/

More information

Expression of 2DOF Fingertip Traction with 1DOF Lateral Skin Stretch

Expression of 2DOF Fingertip Traction with 1DOF Lateral Skin Stretch Expression of 2DOF Fingertip Traction with 1DOF Lateral Skin Stretch Vibol Yem 1, Mai Shibahara 2, Katsunari Sato 2, Hiroyuki Kajimoto 1 1 The University of Electro-Communications, Tokyo, Japan 2 Nara

More information

Einführung in die Erweiterte Realität. 5. Head-Mounted Displays

Einführung in die Erweiterte Realität. 5. Head-Mounted Displays Einführung in die Erweiterte Realität 5. Head-Mounted Displays Prof. Gudrun Klinker, Ph.D. Institut für Informatik,Technische Universität München klinker@in.tum.de Nov 30, 2004 Agenda 1. Technological

More information

Abstract. Keywords: Multi Touch, Collaboration, Gestures, Accelerometer, Virtual Prototyping. 1. Introduction

Abstract. Keywords: Multi Touch, Collaboration, Gestures, Accelerometer, Virtual Prototyping. 1. Introduction Creating a Collaborative Multi Touch Computer Aided Design Program Cole Anagnost, Thomas Niedzielski, Desirée Velázquez, Prasad Ramanahally, Stephen Gilbert Iowa State University { someguy tomn deveri

More information

Study of the touchpad interface to manipulate AR objects

Study of the touchpad interface to manipulate AR objects Study of the touchpad interface to manipulate AR objects Ryohei Nagashima *1 Osaka University Nobuchika Sakata *2 Osaka University Shogo Nishida *3 Osaka University ABSTRACT A system for manipulating for

More information