Getting in Touch with a Cognitive Character
Torsten Bierz (*), Peter Dannenmann (†), Kai Hergenröther (*), Martin Bertram (*), Henning Barthel (†), Gerik Scheuermann (‡), Hans Hagen (*)

(*) University of Kaiserslautern, Germany
(†) German Research Center for Artificial Intelligence, Germany
(‡) University of Leipzig, Germany

bierz@informatik.uni-kl.de, Peter.Dannenmann@dfki.de, kai@hergenrother.de, bertram@informatik.uni-kl.de, Henning.Barthel@dfki.de, scheuermann@informatik.uni-leipzig.de, hagen@informatik.uni-kl.de

Abstract

We provide computer-animated characters with haptic interaction, allowing human users to physically interfere with cognitive characters. Our work presents an interface between a system for the control and animation of virtual characters (CONTACT) and HapTEK, a framework for haptic interaction of a human hand with virtual reality (VR). In this way, human-computer interaction becomes truly bidirectional: an animated character directly provides feedback to a human user, and vice versa, introducing a new quality with respect to the behavior of both. Our system offers, for the first time, the opportunity for human and artificial intelligence to get in touch.

1. Introduction

Cognitive characters are introduced into computer-animated virtual environments as autonomous actors. Their behavior is driven by artificial intelligence, so that only a minimum of control is required. For example, a human animator can specify a certain task, like moving an item from point A to point B, and the cognitive character automatically decides which way to go and how to accomplish the task. This includes proper reactions to obstacles introduced by the user or by other cognitive characters. The objective is to make the character as self-dependent as possible, requiring only abstract input.

The simulation and animation of human characters in virtual environments has recently emerged in numerous areas of application. At first, only the gaming, movie, and advertising industries were interested in virtual characters.
Nowadays, these are also integrated into development and manufacturing processes. In a virtual environment without haptics, the interaction capabilities between human users and cognitive characters are quite limited: in a typical scenario, a user will specify a list of tasks and then observe the character's actions. In order to make a cognitive character even more self-dependent, we add the ability to provide force feedback to a human user via a haptic device. Haptics refers to the tactile sense, i.e. the perception of mechanical stimuli. To support the visualization of virtual environments, our system presents mechanical stimuli representing shapes and surfaces to the user. To simulate the haptic properties of different shapes and materials, efficient collision detection and visualization methods are required.

Although today's commercially available animation packages provide advanced tools for creating computer animations of virtual characters (e.g. inverse kinematics, key-frame editors), high-quality animations are still costly and depend on skilled animators and programmers. To provide a higher degree of automation, artificial intelligence is introduced to support the animation process. In particular, the requirement of changing roles calls for different behavioral models. This issue is addressed by a research project named Control and Animation of Cognitive Characters in Virtual Environments (CONTACT, [3, 6]). CONTACT is the prototype of an animation system capable of generating motion sequences of virtual characters as well as permitting autonomous characters to be directed on multiple levels of abstraction. The system allows animations to be defined on a high level, mainly by specifying a task for a virtual character. The character will automatically work out an appropriate action plan, based on a list of possible actions and its environment.
Actions implying movement of a character are created automatically by
combining reference motions from a database. The user is allowed to overrule the character's decisions and to force it to fall back on some predefined behavior.

In the present work, we introduce a haptic interface for CONTACT, providing a new quality of interaction between a human user and a cognitive character. To this end, we build an interface between our haptic controller HapTEK [10] and the CONTACT system. The requirements for our application are:

Collisions of the user's hand with the cognitive character in the scene are detected, and proper force feedback is provided to the hand.

The cognitive character notices the human hand as an obstacle and adjusts its action plan to avoid a collision. On collision, the character may be displaced.

Rendering and force feedback must be provided at interactive frame rates.

In the next section, we give a brief review of the current state of the art on haptics and on cognitive characters. Section 3 summarizes the basic principles of the CONTACT system. In section 4, the developed haptic system HapTEK and its basic functions are presented briefly, including the underlying objectives. Afterwards, the combination of the two systems is described, including the goals and the difficulties we encountered during implementation. We conclude our work in section 6.

2 A survey on haptic systems and animated characters

Interactive force feedback offers a more realistic impression to the user, complementing the visual impression provided by an immersive VR environment. Haptic devices are used in many applications, including surgical and medical simulation, painting, sculpting, visualization, and assistive technology for the blind and visually impaired [14]. There are also several approaches using haptic devices as tele-operational systems [8, 13, 9], based on data exchange over a network. Other approaches offer haptic devices for feeling certain material properties [12].
Geometric models were among the first attempts to create computer animations. Forward and inverse kinematics are now widely used for key-frame interpolation. Physical principles are often introduced to reduce the animation parameters to a minimum. Corresponding simulation tools are part of many animation packages, like 3D Studio Max®, Alias WaveFront®, or Maya®. In recent years, several researchers have extended these models towards behavioral animation in order to further automate the process of generating animations (see figure 1).

Figure 1: Models in Computer Animation [7]

Nevertheless, modelling a virtual character's behavior, and consequently its animation, is very difficult. The literature offers no unified solution for this task; in fact, the difficulty lies in the diversity of natural behavior among humans (see e.g. [15], [18]). One way to model a character's behavior is to employ scripting languages. The Character Markup Language (CML) [2] describes a sequence of actions the character can perform in the virtual world. CML only serves as a means for representing the action sequences. It is also possible to generate such CML sequences with software tools that build up scripts based on a set of action models, e.g., according to a set of current stimuli and an existing rule set that maps such stimuli to actions. In [2], such a piece of software is mentioned that generates a CML script based on inputs from a planning component, taking into account the character's state and personality as well as its domain knowledge. However, in contrast to the CONTACT framework, this approach only handles static environments, where the characters' actions are computed in advance. The control of characters on a high (or cognitive) level, as described by Funge in [7], by Chen et al.
in [5], or in an earlier version by Tu and Terzopoulos [19], marks the next important step towards giving characters more intelligent behavior. In contrast to the reactive behaviors introduced by Reynolds and by Anderson et al., these characters have more sophisticated action-selection techniques and are able to plan their actions further into the future. A step towards pursuing goals within a so-called Rule-based Behavior System is described by Raupp-Musse and Thalmann in [16]; they describe different levels of control for their actors, depending on their role within the group. Another approach for directing the characters of a scene on several levels of abstraction has been introduced by Blumberg and Galyean in [4]. Within the CONTACT project, we adopt Funge's view [7] (see figure 1) of explicitly using a cognitive model on top of a behavioral model in order to automate the process
of generating animations.

A different approach, describing human interaction with virtual prototype products using haptic feedback, was presented by Hiroo Iwata in [11]: the user is able to grab the prototypes and feel their surfaces via force feedback. In [11], Iwata also suggested using haptic devices to manipulate character animations. To our knowledge, however, connecting a haptic system that permits humans to closely interact with a computer to a cognitive character animation system is a new approach that has not been described before.

3 Basics of the CONTACT Project

3.1 Controlling and Animating Cognitive Characters

Funge [7] raised the level of abstraction in animation generation by introducing autonomous characters. This idea was adopted by the CONTACT project, with the objective of developing a scalable and extensible working platform for autonomous characters in dynamic virtual environments. By this means, the user defines high-level goals within the cognitive layer, and the virtual characters determine appropriate action sequences before starting the animation in order to reach the given goals.

Figure 2: Architecture of the CONTACT Animation System

Specification of behavior on the cognitive level. This task covers the development of methods for specifying all possible actions of each character on that level, as logical descriptions of the actions' preconditions and effect axioms in the Cognitive Modelling Language (CML, see [7]), as well as methods for specifying the character's goals in that description language. Based on this logical description of actions and goals, a mechanism is provided to automatically generate, in advance, an appropriate sequence of actions that fulfills the given goals under consideration of the character's knowledge and perception of its environment.
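To make the idea of precondition and effect axioms concrete, the following C++ sketch reduces a CML-style action to a propositional toy model. This is an illustration only: CML itself builds on the situation calculus, and all names below are ours, not CONTACT's API.

```cpp
#include <set>
#include <string>

// A world state modeled as the set of propositions that currently hold.
using State = std::set<std::string>;

// A CML-style action reduced to preconditions plus add/delete effects.
struct Action {
    State pre;   // propositions that must hold before the action
    State add;   // propositions made true by the action (effect axioms)
    State del;   // propositions made false by the action
    bool applicable(const State& s) const {
        for (const auto& p : pre)
            if (s.count(p) == 0) return false;
        return true;
    }
    State apply(const State& s) const {
        State r = s;
        for (const auto& d : del) r.erase(d);
        for (const auto& a : add) r.insert(a);
        return r;
    }
};
```

A planner in this toy model would search for a sequence of applicable actions transforming the initial state into one that satisfies the goal propositions.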
These decisions of the cognitive character can be rejected or abandoned by the user, who may force the model to execute different actions. In order to reach the defined goals of the CONTACT project, the following further research objectives were addressed, in addition to the specification of behavior on the cognitive level described above:

Specification of behavior on the behavioral level. For the implementation of behavioral patterns below the cognitive level, methods for specifying such patterns and assigning them to actors are described by Reynolds in [17] and by Anderson et al. in [1]. Such methods are integrated into our animation system, too.

Provision of a motion library. The animation of characters is generated by combining a number of motion sequences corresponding to the possible actions the characters can perform. These motion sequences are provided in the form of a library that can be accessed to retrieve the appropriate sequences according to the characters' actions. The sequences have been established using motion-capture techniques or by generating key-framed sequences. In order to use the motion sequences with different characters, a set of different sequences is provided according to the characters' differing anthropometries.

Generation of the final animation. Based on the provided plans and the available reference motion sequences, the final animation is generated.

Supervision and interaction facilities. If the animator is not satisfied with a character's behavior on the cognitive level, the system allows the animator to force the character to fall back from the cognitive level to the behavioral level. In addition, the system allows the animator to force any actor to select and immediately execute one specific action from the available set of actions.

Implementation of dynamic environments. To provide realistic interaction of the virtual characters with their environment, dynamic environment objects are also integrated into the system. This permits the characters, for example, to pick up and
move objects, open or close doors and drawers, etc. In addition, due to the changing dynamic environments, virtual characters must also be able to acquire environmental information in order to avoid collisions.

3.2 System Architecture

The core concept of CONTACT is an object-oriented component system, such that new components can be integrated with little effort. The system provides its own communication infrastructure, implementing interfaces between the different modules. Figure 2 shows the main elements of the CONTACT animation framework for one cognitive character: the character's planning component has access to a database containing the descriptions of the character's possible actions, as well as to a logical description of the character's current situation. This information, combined with information about the character's (dynamic) environment, is used by the planning component to generate an action plan. This action plan serves as the basis for generating the corresponding animation from a set of reference motions that are linked to the respective actions of the plan.

4 The Haptic Framework

When using commercial off-the-shelf (COTS) haptic systems for one's own applications, difficulties arise because every haptic device has its own device drivers and specifications. Around these drivers, every vendor has developed its own applications and extensions, e.g., for collision detection. In order to be independent of this predesigned software and to include the additional functionality we required, we decided to develop our own virtual reality engine, HapTEK (see figure 3), which depends only on the device drivers themselves. This offered the opportunity to implement all required functionality within a single framework.

Figure 3: Example of the HapTEK system. The user can grab, displace, and throw geometric objects, with force feedback provided to the hand and to the five fingers.

System Architecture

The HapTEK framework, supporting the CyberGrasp®, CyberGlove®, and CyberForce® devices, has been developed as part of a master's thesis [10] at the University of Kaiserslautern. The development environment is based on Linux, using non-commercial software such as the GNU C++ compiler and Qt®. Additionally, the Mico CORBA (Common Object Request Broker Architecture) interface has been integrated into the basic system, which also contains a feature for exchanging the virtual scenes. The system is built up modularly, facilitating simple development of further extensions. A custom virtual reality engine has been implemented using a scene graph, where every node has its own homogeneous transformation matrix. OpenGL® is used to traverse the graph and to generate and draw the current primitives. The framework provides stereoscopic viewing capabilities based on shutter glasses; this VR environment allows distances and positions of objects to be estimated more easily.

Figure 4: The operating range of the virtual hand.

One considerable problem is navigation in the virtual world without resorting to additional devices. Classical input devices, like mouse and keyboard, are based on the assumption that the user can use both hands. However, the user is connected with one hand to the exoskeletal device of the CyberGrasp® and CyberForce® system, and navigating with the other hand may be uncomfortable. Therefore, we
decided to integrate the navigation into the virtual hand of the haptic device (see figure 4). When the user leaves a neutral zone towards the border of the display, a movement is performed; e.g., if the hand leaves the zone to the left, the scene is rotated clockwise around the z-axis. By leaving the box at the bottom, the scene rotates around the x-axis. If the user leaves the front or back border, the scene moves backward or forward, respectively. This allows the user to move and to grab objects using only the haptic device. With this solution it is also feasible to transport an object from one place in the scene to another without using any further devices, such as the keyboard or the mouse. This approach offers more flexibility and simplicity than dealing with multiple devices at the same time.

In our framework (see figure 5) we integrated two different kinds of objects: those that can be influenced by the haptic device and those that cannot. Each influenceable object returns force feedback to the user whenever it is grabbed, moved, thrown, shifted, or touched. The interactions with these objects are as follows:

Grabbing of objects: A grab is recognized when the user touches an object with the thumb and at least one other finger, e.g. the middle or the index finger.

Moving of objects: When a surface or object is grabbed, it can be transported or moved to a different point in the virtual scene.

Throwing of objects: This interaction is performed when the user releases an object while the haptic device is still moving in a specific direction.

Shifting of objects: To move an object without grabbing it, a shifting operation is integrated. The user can manipulate the object, e.g., by pushing it with any finger; the object then moves according to the direction and the strength of the shift.
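The neutral-zone navigation and the grabbing test described above can be condensed into a short C++ sketch. The zone size, the axis assignment, and all identifiers are our assumptions for illustration, not HapTEK code:

```cpp
#include <array>

// Hand position in scene coordinates; the axis assignment is assumed.
struct Vec3 { double x = 0, y = 0, z = 0; };

// Navigation actions triggered when the hand leaves the neutral zone.
enum class NavAction { None, RotateZ, RotateX, MoveForward, MoveBackward };

// While the hand stays inside a neutral box around the origin, the scene
// is left untouched; leaving the box triggers the mapped movement.
NavAction navigationAction(const Vec3& hand) {
    const double zone = 1.0;  // half-extent of the neutral box (our choice)
    if (hand.x < -zone || hand.x > zone) return NavAction::RotateZ;  // left/right
    if (hand.y < -zone) return NavAction::RotateX;                   // bottom
    if (hand.z < -zone) return NavAction::MoveForward;               // front
    if (hand.z > zone) return NavAction::MoveBackward;               // back
    return NavAction::None;
}

// Per-finger contact flags for one object, filled by collision detection.
enum Finger { THUMB = 0, INDEX, MIDDLE, RING, PINKY, FINGER_COUNT };
using ContactState = std::array<bool, FINGER_COUNT>;

// A grab is recognized when the thumb and at least one other finger
// touch the object at the same time.
bool isGrabbed(const ContactState& touching) {
    if (!touching[THUMB]) return false;
    for (int f = INDEX; f < FINGER_COUNT; ++f)
        if (touching[f]) return true;
    return false;
}
```

In each frame, the framework would evaluate both tests: the navigation action steers the scene transform, while the grab test decides whether the touched object becomes attached to the hand.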
Modelling every surface or object in the application or source code is very time-consuming and counterproductive. To simplify this integration, we implemented an interface for loading WaveFront®, VRML, and MD3® objects. Thus, predefined objects can simply be loaded and rendered in OpenGL. After the scene is loaded, the user is able to assign any of the aforementioned attributes, in order to adapt the objects to his own perception.

Figure 5: Combination of the HapTEK and CONTACT frameworks

5 Combining the Haptic and the CONTACT Framework

5.1 The Integration

During the integration of the haptic and the CONTACT framework, a number of tasks had to be addressed. They can be classified into the following subjects:

Merging the two different systems. The two systems have been developed in two different programming languages: in the CONTACT project, Java was used, while the software of the HapTEK system was realized in C++. However, both systems are designed to communicate over a network using a CORBA client-server architecture (see figure 6), which can handle these issues. In this context, the HapTEK system works as a server, providing its services to arbitrary clients. To permit these clients to connect, we included a name service which dynamically returns the currently opened port to them. On this basis, the CORBA client integrated into the CONTACT framework can connect to the HapTEK CORBA server using the services it provides. For merging the two systems, we had to define an interface in the Interface Definition Language (IDL), which is used in middleware systems to define procedures that can be called on remote machines. The remote procedures are:

Set the force on the single fingers of the haptic device

Get the actual force of the haptic device
Get the position of the fingers of the haptic device

Set the position of the fingers on the CORBA server

Get the position of the cognitive character

Set the position of the cognitive character on the CORBA server

The values transmitted via the CORBA server are the actual position of the virtual hand's center and the positions of the single fingers, as provided by the HapTEK system. This system in turn receives the current position of the virtual character and its movement, as well as the force acting on the hand if a collision has been detected. Accordingly, the CONTACT system emits these forces for the haptic device together with the current position of the cognitive character, and receives in return the positions provided by the haptic device.

Dealing with a firewall. In some computing environments, the HapTEK device and the CONTACT animation framework may be running in different networks protected by firewalls. In our case, the HapTEK project has been developed at the University of Kaiserslautern, whereas the CONTACT project is under development at the German Research Center for Artificial Intelligence (DFKI). Each institution operates its own network and, more importantly, has its own subnet and firewall, which filters traffic quite effectively. So one major issue in connecting the two components was tunnelling through these firewalls in order to realize the communication and data exchange. We decided to tunnel the firewalls using the secure shell (SSH) protocol, which can make a port of a remote computer available as a virtual local port on another machine. In this way, we were able to realize the data exchange via the internet.

Synchronization. The two systems had to be synchronized in order to avoid displacement or invalid positions of the virtual hand and the character. This synchronization was integrated in the application.
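The remote procedures listed above could be mirrored on the client side by an interface of roughly the following shape. This is a hedged C++ sketch of the IDL-defined operations: the real stubs are generated by the CORBA IDL compiler, and all names here are ours, not the project's.

```cpp
#include <array>

struct Vec3 { double x = 0, y = 0, z = 0; };

// C++ rendering of the IDL operations listed above. The operation
// names are illustrative; the paper only describes the calls informally.
class HapTekService {
public:
    virtual ~HapTekService() = default;
    // Force feedback, one value per finger of the haptic device.
    virtual void setFingerForces(const std::array<double, 5>& f) = 0;
    virtual std::array<double, 5> getFingerForces() const = 0;
    // Finger positions exchanged between HapTEK and CONTACT.
    virtual void setFingerPositions(const std::array<Vec3, 5>& p) = 0;
    virtual std::array<Vec3, 5> getFingerPositions() const = 0;
    // Position of the cognitive character, mirrored on the server.
    virtual void setCharacterPosition(const Vec3& p) = 0;
    virtual Vec3 getCharacterPosition() const = 0;
};

// In-memory stand-in for the CORBA server, for illustration only.
class InMemoryHapTek : public HapTekService {
    std::array<double, 5> forces_{};
    std::array<Vec3, 5> fingers_{};
    Vec3 character_;
public:
    void setFingerForces(const std::array<double, 5>& f) override { forces_ = f; }
    std::array<double, 5> getFingerForces() const override { return forces_; }
    void setFingerPositions(const std::array<Vec3, 5>& p) override { fingers_ = p; }
    std::array<Vec3, 5> getFingerPositions() const override { return fingers_; }
    void setCharacterPosition(const Vec3& p) override { character_ = p; }
    Vec3 getCharacterPosition() const override { return character_; }
};
```

In the actual system, the Java client on the CONTACT side invokes these operations on the HapTEK CORBA server; the in-memory stub merely illustrates the data that crosses the wire.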
5.2 Example and evaluation

The combined system now works in the following manner:

1. Loading a scene. At the beginning, a virtual scenario has to be loaded, which geometrically describes the environment. The file can be in any of the data formats mentioned in section 4.

Figure 6: Inter-network connection.

2. Defining the initial state. The cognitive character and the haptic device, represented as a virtual hand in the scene, are initialized and rendered at the user-defined locations.

3. Defining the character's goals. The user defines the character's high-level goals using the Cognitive Modelling Language (CML). These goals may include topics like (in an informal description) "Take item box to position camera". These high-level goals, together with a CML description of the character's possible atomic actions, serve as the basis for calculating the character's action plan.

4. Calculating the action plan. During the computation of an action plan for pursuing the goals described above, the CONTACT system determines a sequence of actions, typically including a plan for getting from one position in the scene to another while avoiding collisions, using the heuristically best and fastest path. The different plans are displayed to the user, enabling him to overrule the selected heuristically best plan and force the character to execute one of the other computed action plans. Once an appropriate action plan has been selected, the animated character starts moving.

5. Haptic interactive events. During the execution of the action plan, the haptic device offers the capability to dynamically influence the virtual character's calculated plan: the user simply holds the virtual hand as an obstacle in the virtual character's path. When a collision between the character and the hand is detected, the currently performed plan is aborted. According to the current direction and speed of the cognitive character, forces for the haptic device are generated.
Hereafter, the recalculation of the action plan for fulfilling the still uncompleted
tasks begins, avoiding interference with the hand as a newly emerged obstacle which cannot be passed. Thereupon, the cognitive character continues its path in order to reach its objective, and the forces are released. The user is able to interfere with the character in this way not only once, but multiple times.

Figure 7: Sequence of screenshots showing the cognitive character walking along a planned path and its interaction with the haptic fingers, which are represented by five spheres.

An example of such an interaction is presented in figure 7. To simplify the geometry for interactive rendering (using the CONTACT renderer on the remote side), the fingertips of the virtual hand are represented by spheres. In the first of the three screenshots, the character approaches the virtual hand. In the second, a collision is detected and the character interrupts its planned path in order to calculate a new path bypassing the hand. After having found a new path, the character turns around and walks a different way to reach its target.

The remaining question is why a haptic device should be used for interfering with a cognitive character. One of the simplest reasons relates to human behavior: if you want to prevent a person from walking somewhere, you can grab his shoulder or arm, and he will probably stop. Another example is a policeman controlling the traffic at a junction without traffic lights: he raises his hand in order to stop the traffic. This basic gesture is commonly understood in nearly every country in the world. It would also be feasible to use the keyboard, a joystick, or a space mouse to realize this interaction with the cognitive character. For untrained persons, however, the handling of these devices and the corresponding navigation tasks are often very difficult. This disadvantage is removed by using a haptic device: its handling is based on natural human movement and is thus easier to learn.
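The interplay of steps 4 and 5 can be condensed into a small control-loop sketch. The collision test and the plan representation are deliberate simplifications (bounding spheres, a waypoint list); this is not CONTACT's actual planner:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { double x = 0, y = 0; };

// Collision test between the character and the virtual hand, both
// approximated by bounding spheres of the same radius (a simplification).
bool collides(const Vec2& character, const Vec2& hand, double radius) {
    double dx = character.x - hand.x, dy = character.y - hand.y;
    return std::sqrt(dx * dx + dy * dy) < 2.0 * radius;
}

// One step of the interaction loop: follow the plan until the hand
// blocks the next waypoint, then abort so the caller can replan.
// Returns true if the current plan was aborted.
bool stepCharacter(std::vector<Vec2>& plan, std::size_t& next,
                   const Vec2& hand, double radius, Vec2& position) {
    if (next >= plan.size()) return false;  // plan already finished
    if (collides(plan[next], hand, radius)) {
        plan.clear();                       // abort the current plan
        next = 0;                           // caller must request a new one
        return true;
    }
    position = plan[next++];                // advance along the path
    return false;
}
```

After an abort, the caller would request a fresh plan that treats the hand as an unpassable obstacle, exactly as described in the example above.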
6 Conclusion and Future Work

In this work, we described our approach to merging a haptic framework with a cognitive character animation system. This provides a way of interacting with the cognitive character and influencing its planned path through the virtual scene. The corresponding interaction with the human user, namely the collision with the representation of the virtual hand, can be felt through the haptic device. After this collision, the user may choose to withdraw the obstacle again; the character then replans its path and continues to pursue its task.

In the current system, the character plans its paths from a starting position to target positions corresponding to the different goals in the logical (CML) description of its actions and objectives. A further step in refining the presented system could be to select the character's different target points using the haptic device: by touching an object, it could be added to the current plan, making interaction and navigation much simpler.

7 Acknowledgement

The HapTEK project was developed with funding from the Support Program of the Foundation for Innovation of Rheinland-Pfalz. The CONTACT system is funded by the German Federal Ministry for Education and Research under contract number 01 IW A03.
References

[1] Matt Anderson, Eric McDaniel, and Stephen Chenney. Constrained animation of flocks. Proc. Eurographics/SIGGRAPH Symposium on Computer Animation.

[2] Yasmine Arafa, Kaveh Kamyab, Ebrahim Mamdani, Sumedha Kshirsagar, Nadia Magnenat-Thalmann, Anthony Guye-Vuilleme, and Daniel Thalmann. Two approaches to scripting character animation. Proc. Workshop "Embodied Conversational Agents - let's specify and evaluate them!" at AAMAS.

[3] H. Barthel and P. Dannenmann. Animating virtual characters controlled on the cognitive level. IASTED International Conference on Visualization, Imaging and Image Processing (VIIP 2004).

[4] Bruce M. Blumberg and Tinsley A. Galyean. Multi-level direction of autonomous creatures for real-time environments. Proceedings of SIGGRAPH 95, August.

[5] L. Chen, K. Bechkoum, and G. Clapworthy. A logical approach to high-level agent control. ACM Proceedings of the Fifth International Conference on Autonomous Agents.

[6] P. Dannenmann and H. Barthel. Controlling the behavior of virtual characters on the cognitive level. IASTED International Conference on Artificial Intelligence and Soft Computing (ASC 2004).

[7] John Funge, Xiaoyuan Tu, and Demetri Terzopoulos. Cognitive modeling: Knowledge, reasoning and planning for intelligent characters. Proc. ACM SIGGRAPH 99.

[8] Mario Gutierrez, Renaud Ott, Daniel Thalmann, and Frederic Vexo. Mediators: Virtual haptic interfaces for tele-operated robots. 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2004).

[9] Mario Gutiérrez, Renaud Ott, Daniel Thalmann, and Frédéric Vexo. Mediators: Virtual haptic interfaces for tele-operated robots. 13th IEEE International Workshop on Robot and Human Interactive Communication.

[10] Kai Hergenröther. Design and implementation of a haptics-enabled VR framework supporting Immersion CyberGlove/CyberGrasp/CyberForce haptics hardware. Master's thesis, Technical University of Kaiserslautern.

[11] Hiroo Iwata. Artificial reality with force-feedback: development of desktop virtual space with compact master manipulator. Proceedings of the 17th Annual Conference on Computer Graphics and Interactive Techniques.

[12] Hiroo Iwata, Hiroaki Yano, Fumitaka Nakaizumi, and Ryo Kawamura. Project FEELEX: adding haptic surface to graphics. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques.

[13] P. Lemoine, M. Gutierrez, F. Vexo, and D. Thalmann. Mediators: Virtual interfaces with haptic feedback. Proceedings of EuroHaptics 2004.

[14] Margaret L. McLaughlin, Joao P. Hespanha, and Gaurav S. Sukhatme. Touch in Virtual Environments: Haptics and the Design of Interactive Systems. Prentice-Hall, first edition.

[15] Christoph Niederberger and Markus H. Gross. Towards a game agent. CS Technical Report 377, ETH Zürich, Switzerland, August.

[16] Soraia Raupp-Musse and Daniel Thalmann. Hierarchical model for real-time simulation of virtual human crowds. IEEE Transactions on Visualization and Computer Graphics, 7(2).

[17] Craig W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. Computer Graphics, 21(4):25-34, Proc. SIGGRAPH 87.

[18] Daniel Thalmann and Jean-Sébastien Monzani. Behavioural animation of virtual humans: What kind of laws and rules? Proc. Computer Animation, IEEE Computer Society Press.

[19] Xiaoyuan Tu and Demetri Terzopoulos. Artificial fishes: Physics, locomotion, perception, behavior. Proc. ACM SIGGRAPH 94, July.
More informationAI Framework for Decision Modeling in Behavioral Animation of Virtual Avatars
AI Framework for Decision Modeling in Behavioral Animation of Virtual Avatars A. Iglesias 1 and F. Luengo 2 1 Department of Applied Mathematics and Computational Sciences, University of Cantabria, Avda.
More informationAn Unreal Based Platform for Developing Intelligent Virtual Agents
An Unreal Based Platform for Developing Intelligent Virtual Agents N. AVRADINIS, S. VOSINAKIS, T. PANAYIOTOPOULOS, A. BELESIOTIS, I. GIANNAKAS, R. KOUTSIAMANIS, K. TILELIS Knowledge Engineering Lab, Department
More informationDistributed Vision System: A Perceptual Information Infrastructure for Robot Navigation
Distributed Vision System: A Perceptual Information Infrastructure for Robot Navigation Hiroshi Ishiguro Department of Information Science, Kyoto University Sakyo-ku, Kyoto 606-01, Japan E-mail: ishiguro@kuis.kyoto-u.ac.jp
More informationCSE 165: 3D User Interaction. Lecture #14: 3D UI Design
CSE 165: 3D User Interaction Lecture #14: 3D UI Design 2 Announcements Homework 3 due tomorrow 2pm Monday: midterm discussion Next Thursday: midterm exam 3D UI Design Strategies 3 4 Thus far 3DUI hardware
More informationAGENT PLATFORM FOR ROBOT CONTROL IN REAL-TIME DYNAMIC ENVIRONMENTS. Nuno Sousa Eugénio Oliveira
AGENT PLATFORM FOR ROBOT CONTROL IN REAL-TIME DYNAMIC ENVIRONMENTS Nuno Sousa Eugénio Oliveira Faculdade de Egenharia da Universidade do Porto, Portugal Abstract: This paper describes a platform that enables
More informationDEVELOPMENT OF A ROBOID COMPONENT FOR PLAYER/STAGE ROBOT SIMULATOR
Proceedings of IC-NIDC2009 DEVELOPMENT OF A ROBOID COMPONENT FOR PLAYER/STAGE ROBOT SIMULATOR Jun Won Lim 1, Sanghoon Lee 2,Il Hong Suh 1, and Kyung Jin Kim 3 1 Dept. Of Electronics and Computer Engineering,
More informationA New Architecture for Simulating the Behavior of Virtual Agents
A New Architecture for Simulating the Behavior of Virtual Agents F. Luengo 1,2 and A. Iglesias 2 1 Department of Computer Science, University of Zulia, Post Office Box #527, Maracaibo, Venezuela fluengo@cantv.net
More informationVirtual Environments. Ruth Aylett
Virtual Environments Ruth Aylett Aims of the course 1. To demonstrate a critical understanding of modern VE systems, evaluating the strengths and weaknesses of the current VR technologies 2. To be able
More informationInteracting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)
Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception
More informationEvaluation of Five-finger Haptic Communication with Network Delay
Tactile Communication Haptic Communication Network Delay Evaluation of Five-finger Haptic Communication with Network Delay To realize tactile communication, we clarify some issues regarding how delay affects
More informationABSTRACT. Keywords Virtual Reality, Java, JavaBeans, C++, CORBA 1. INTRODUCTION
Tweek: Merging 2D and 3D Interaction in Immersive Environments Patrick L Hartling, Allen D Bierbaum, Carolina Cruz-Neira Virtual Reality Applications Center, 2274 Howe Hall Room 1620, Iowa State University
More informationModeling and Simulation: Linking Entertainment & Defense
Calhoun: The NPS Institutional Archive Faculty and Researcher Publications Faculty and Researcher Publications 1998 Modeling and Simulation: Linking Entertainment & Defense Zyda, Michael 1 April 98: "Modeling
More informationIntegrating PhysX and OpenHaptics: Efficient Force Feedback Generation Using Physics Engine and Haptic Devices
This is the Pre-Published Version. Integrating PhysX and Opens: Efficient Force Feedback Generation Using Physics Engine and Devices 1 Leon Sze-Ho Chan 1, Kup-Sze Choi 1 School of Nursing, Hong Kong Polytechnic
More informationBuilding a bimanual gesture based 3D user interface for Blender
Modeling by Hand Building a bimanual gesture based 3D user interface for Blender Tatu Harviainen Helsinki University of Technology Telecommunications Software and Multimedia Laboratory Content 1. Background
More informationToward Gesture-Based Behavior Authoring
Toward Gesture-Based Behavior Authoring Edward Yu-Te Shen Bing-Yu Chen National Taiwan University ABSTRACT Creating lifelike, autonomous, and interactive virtual behaviors is important in generating character
More informationIntroduction to Virtual Reality (based on a talk by Bill Mark)
Introduction to Virtual Reality (based on a talk by Bill Mark) I will talk about... Why do we want Virtual Reality? What is needed for a VR system? Examples of VR systems Research problems in VR Most Computers
More informationCapturing and Adapting Traces for Character Control in Computer Role Playing Games
Capturing and Adapting Traces for Character Control in Computer Role Playing Games Jonathan Rubin and Ashwin Ram Palo Alto Research Center 3333 Coyote Hill Road, Palo Alto, CA 94304 USA Jonathan.Rubin@parc.com,
More informationENHANCED HUMAN-AGENT INTERACTION: AUGMENTING INTERACTION MODELS WITH EMBODIED AGENTS BY SERAFIN BENTO. MASTER OF SCIENCE in INFORMATION SYSTEMS
BY SERAFIN BENTO MASTER OF SCIENCE in INFORMATION SYSTEMS Edmonton, Alberta September, 2015 ABSTRACT The popularity of software agents demands for more comprehensive HAI design processes. The outcome of
More informationDESIGN AGENTS IN VIRTUAL WORLDS. A User-centred Virtual Architecture Agent. 1. Introduction
DESIGN GENTS IN VIRTUL WORLDS User-centred Virtual rchitecture gent MRY LOU MHER, NING GU Key Centre of Design Computing and Cognition Department of rchitectural and Design Science University of Sydney,
More informationChapter 2 Introduction to Haptics 2.1 Definition of Haptics
Chapter 2 Introduction to Haptics 2.1 Definition of Haptics The word haptic originates from the Greek verb hapto to touch and therefore refers to the ability to touch and manipulate objects. The haptic
More informationA Very High Level Interface to Teleoperate a Robot via Web including Augmented Reality
A Very High Level Interface to Teleoperate a Robot via Web including Augmented Reality R. Marín, P. J. Sanz and J. S. Sánchez Abstract The system consists of a multirobot architecture that gives access
More informationDiVA Digitala Vetenskapliga Arkivet
DiVA Digitala Vetenskapliga Arkivet http://umu.diva-portal.org This is a paper presented at First International Conference on Robotics and associated Hightechnologies and Equipment for agriculture, RHEA-2012,
More informationArtificial Life Simulation on Distributed Virtual Reality Environments
Artificial Life Simulation on Distributed Virtual Reality Environments Marcio Lobo Netto, Cláudio Ranieri Laboratório de Sistemas Integráveis Universidade de São Paulo (USP) São Paulo SP Brazil {lobonett,ranieri}@lsi.usp.br
More informationA Multimodal Locomotion User Interface for Immersive Geospatial Information Systems
F. Steinicke, G. Bruder, H. Frenz 289 A Multimodal Locomotion User Interface for Immersive Geospatial Information Systems Frank Steinicke 1, Gerd Bruder 1, Harald Frenz 2 1 Institute of Computer Science,
More informationThe Control of Avatar Motion Using Hand Gesture
The Control of Avatar Motion Using Hand Gesture ChanSu Lee, SangWon Ghyme, ChanJong Park Human Computing Dept. VR Team Electronics and Telecommunications Research Institute 305-350, 161 Kajang-dong, Yusong-gu,
More informationThe CHAI Libraries. F. Conti, F. Barbagli, R. Balaniuk, M. Halg, C. Lu, D. Morris L. Sentis, E. Vileshin, J. Warren, O. Khatib, K.
The CHAI Libraries F. Conti, F. Barbagli, R. Balaniuk, M. Halg, C. Lu, D. Morris L. Sentis, E. Vileshin, J. Warren, O. Khatib, K. Salisbury Computer Science Department, Stanford University, Stanford CA
More informationGuidelines for choosing VR Devices from Interaction Techniques
Guidelines for choosing VR Devices from Interaction Techniques Jaime Ramírez Computer Science School Technical University of Madrid Campus de Montegancedo. Boadilla del Monte. Madrid Spain http://decoroso.ls.fi.upm.es
More informationBeyond Actuated Tangibles: Introducing Robots to Interactive Tabletops
Beyond Actuated Tangibles: Introducing Robots to Interactive Tabletops Sowmya Somanath Department of Computer Science, University of Calgary, Canada. ssomanat@ucalgary.ca Ehud Sharlin Department of Computer
More informationIMPLEMENTING MULTIPLE ROBOT ARCHITECTURES USING MOBILE AGENTS
IMPLEMENTING MULTIPLE ROBOT ARCHITECTURES USING MOBILE AGENTS L. M. Cragg and H. Hu Department of Computer Science, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ E-mail: {lmcrag, hhu}@essex.ac.uk
More informationHaptic Feedback in Mixed-Reality Environment
The Visual Computer manuscript No. (will be inserted by the editor) Haptic Feedback in Mixed-Reality Environment Renaud Ott, Daniel Thalmann, Frédéric Vexo Virtual Reality Laboratory (VRLab) École Polytechnique
More informationKinect Interface for UC-win/Road: Application to Tele-operation of Small Robots
Kinect Interface for UC-win/Road: Application to Tele-operation of Small Robots Hafid NINISS Forum8 - Robot Development Team Abstract: The purpose of this work is to develop a man-machine interface for
More informationResearch on Presentation of Multimedia Interactive Electronic Sand. Table
International Conference on Education Technology and Economic Management (ICETEM 2015) Research on Presentation of Multimedia Interactive Electronic Sand Table Daogui Lin Fujian Polytechnic of Information
More informationReVRSR: Remote Virtual Reality for Service Robots
ReVRSR: Remote Virtual Reality for Service Robots Amel Hassan, Ahmed Ehab Gado, Faizan Muhammad March 17, 2018 Abstract This project aims to bring a service robot s perspective to a human user. We believe
More information6 System architecture
6 System architecture is an application for interactively controlling the animation of VRML avatars. It uses the pen interaction technique described in Chapter 3 - Interaction technique. It is used in
More informationIntelligent Modelling of Virtual Worlds Using Domain Ontologies
Intelligent Modelling of Virtual Worlds Using Domain Ontologies Wesley Bille, Bram Pellens, Frederic Kleinermann, and Olga De Troyer Research Group WISE, Department of Computer Science, Vrije Universiteit
More informationA Virtual Reality Tool for Teleoperation Research
A Virtual Reality Tool for Teleoperation Research Nancy RODRIGUEZ rodri@irit.fr Jean-Pierre JESSEL jessel@irit.fr Patrice TORGUET torguet@irit.fr IRIT Institut de Recherche en Informatique de Toulouse
More informationCONTROLLING METHODS AND CHALLENGES OF ROBOTIC ARM
CONTROLLING METHODS AND CHALLENGES OF ROBOTIC ARM Aniket D. Kulkarni *1, Dr.Sayyad Ajij D. *2 *1(Student of E&C Department, MIT Aurangabad, India) *2(HOD of E&C department, MIT Aurangabad, India) aniket2212@gmail.com*1,
More informationAdvancements in Gesture Recognition Technology
IOSR Journal of VLSI and Signal Processing (IOSR-JVSP) Volume 4, Issue 4, Ver. I (Jul-Aug. 2014), PP 01-07 e-issn: 2319 4200, p-issn No. : 2319 4197 Advancements in Gesture Recognition Technology 1 Poluka
More informationMy Accessible+ Math: Creation of the Haptic Interface Prototype
DREU Final Paper Michelle Tocora Florida Institute of Technology mtoco14@gmail.com August 27, 2016 My Accessible+ Math: Creation of the Haptic Interface Prototype ABSTRACT My Accessible+ Math is a project
More informationH2020 RIA COMANOID H2020-RIA
Ref. Ares(2016)2533586-01/06/2016 H2020 RIA COMANOID H2020-RIA-645097 Deliverable D4.1: Demonstrator specification report M6 D4.1 H2020-RIA-645097 COMANOID M6 Project acronym: Project full title: COMANOID
More informationIndividual Test Item Specifications
Individual Test Item Specifications 8208120 Game and Simulation Design 2015 The contents of this document were developed under a grant from the United States Department of Education. However, the content
More informationINTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT
INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT TAYSHENG JENG, CHIA-HSUN LEE, CHI CHEN, YU-PIN MA Department of Architecture, National Cheng Kung University No. 1, University Road,
More informationCS277 - Experimental Haptics Lecture 2. Haptic Rendering
CS277 - Experimental Haptics Lecture 2 Haptic Rendering Outline Announcements Human haptic perception Anatomy of a visual-haptic simulation Virtual wall and potential field rendering A note on timing...
More informationR (2) Controlling System Application with hands by identifying movements through Camera
R (2) N (5) Oral (3) Total (10) Dated Sign Assignment Group: C Problem Definition: Controlling System Application with hands by identifying movements through Camera Prerequisite: 1. Web Cam Connectivity
More informationChapter 1 - Introduction
1 "We all agree that your theory is crazy, but is it crazy enough?" Niels Bohr (1885-1962) Chapter 1 - Introduction Augmented reality (AR) is the registration of projected computer-generated images over
More informationCRYPTOSHOOTER MULTI AGENT BASED SECRET COMMUNICATION IN AUGMENTED VIRTUALITY
CRYPTOSHOOTER MULTI AGENT BASED SECRET COMMUNICATION IN AUGMENTED VIRTUALITY Submitted By: Sahil Narang, Sarah J Andrabi PROJECT IDEA The main idea for the project is to create a pursuit and evade crowd
More informationDevelopment of Virtual Reality Simulation Training System for Substation Zongzhan DU
6th International Conference on Mechatronics, Materials, Biotechnology and Environment (ICMMBE 2016) Development of Virtual Reality Simulation Training System for Substation Zongzhan DU School of Electrical
More informationComputer Animation of Creatures in a Deep Sea
Computer Animation of Creatures in a Deep Sea Naoya Murakami and Shin-ichi Murakami Olympus Software Technology Corp. Tokyo Denki University ABSTRACT This paper describes an interactive computer animation
More informationVEWL: A Framework for Building a Windowing Interface in a Virtual Environment Daniel Larimer and Doug A. Bowman Dept. of Computer Science, Virginia Tech, 660 McBryde, Blacksburg, VA dlarimer@vt.edu, bowman@vt.edu
More informationNCCT IEEE PROJECTS ADVANCED ROBOTICS SOLUTIONS. Latest Projects, in various Domains. Promise for the Best Projects
NCCT Promise for the Best Projects IEEE PROJECTS in various Domains Latest Projects, 2009-2010 ADVANCED ROBOTICS SOLUTIONS EMBEDDED SYSTEM PROJECTS Microcontrollers VLSI DSP Matlab Robotics ADVANCED ROBOTICS
More informationEvaluation of Guidance Systems in Public Infrastructures Using Eye Tracking in an Immersive Virtual Environment
Evaluation of Guidance Systems in Public Infrastructures Using Eye Tracking in an Immersive Virtual Environment Helmut Schrom-Feiertag 1, Christoph Schinko 2, Volker Settgast 3, and Stefan Seer 1 1 Austrian
More informationPLC-PROGRAMMING BY DEMONSTRATION USING GRASPABLE MODELS. Kai Schäfer, Willi Bruns
PLC-PROGRAMMING BY DEMONSTRATION USING GRASPABLE MODELS Kai Schäfer, Willi Bruns University of Bremen Research Center Work Environment Technology (artec) Enrique Schmidt Str. 7 (SFG) D-28359 Bremen Fon:
More information* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged
ADVANCED ROBOTICS SOLUTIONS * Intelli Mobile Robot for Multi Specialty Operations * Advanced Robotic Pick and Place Arm and Hand System * Automatic Color Sensing Robot using PC * AI Based Image Capturing
More informationTouching and Walking: Issues in Haptic Interface
Touching and Walking: Issues in Haptic Interface Hiroo Iwata 1 1 Institute of Engineering Mechanics and Systems, University of Tsukuba, 80, Tsukuba, 305-8573 Japan iwata@kz.tsukuba.ac.jp Abstract. This
More informationUniversity of California, Santa Barbara. CS189 Fall 17 Capstone. VR Telemedicine. Product Requirement Documentation
University of California, Santa Barbara CS189 Fall 17 Capstone VR Telemedicine Product Requirement Documentation Jinfa Zhu Kenneth Chan Shouzhi Wan Xiaohe He Yuanqi Li Supervised by Ole Eichhorn Helen
More informationSensible Chuckle SuperTuxKart Concrete Architecture Report
Sensible Chuckle SuperTuxKart Concrete Architecture Report Sam Strike - 10152402 Ben Mitchell - 10151495 Alex Mersereau - 10152885 Will Gervais - 10056247 David Cho - 10056519 Michael Spiering Table of
More informationIndividual Test Item Specifications
Individual Test Item Specifications 8208110 Game and Simulation Foundations 2015 The contents of this document were developed under a grant from the United States Department of Education. However, the
More informationTele-Nursing System with Realistic Sensations using Virtual Locomotion Interface
6th ERCIM Workshop "User Interfaces for All" Tele-Nursing System with Realistic Sensations using Virtual Locomotion Interface Tsutomu MIYASATO ATR Media Integration & Communications 2-2-2 Hikaridai, Seika-cho,
More informationThe Study on the Architecture of Public knowledge Service Platform Based on Collaborative Innovation
The Study on the Architecture of Public knowledge Service Platform Based on Chang ping Hu, Min Zhang, Fei Xiang Center for the Studies of Information Resources of Wuhan University, Wuhan,430072,China,
More informationDevelopment of excavator training simulator using leap motion controller
Journal of Physics: Conference Series PAPER OPEN ACCESS Development of excavator training simulator using leap motion controller To cite this article: F Fahmi et al 2018 J. Phys.: Conf. Ser. 978 012034
More informationComputer Haptics and Applications
Computer Haptics and Applications EURON Summer School 2003 Cagatay Basdogan, Ph.D. College of Engineering Koc University, Istanbul, 80910 (http://network.ku.edu.tr/~cbasdogan) Resources: EURON Summer School
More informationSDN Architecture 1.0 Overview. November, 2014
SDN Architecture 1.0 Overview November, 2014 ONF Document Type: TR ONF Document Name: TR_SDN ARCH Overview 1.1 11112014 Disclaimer THIS DOCUMENT IS PROVIDED AS IS WITH NO WARRANTIES WHATSOEVER, INCLUDING
More informationUSER-ORIENTED INTERACTIVE BUILDING DESIGN *
USER-ORIENTED INTERACTIVE BUILDING DESIGN * S. Martinez, A. Salgado, C. Barcena, C. Balaguer RoboticsLab, University Carlos III of Madrid, Spain {scasa@ing.uc3m.es} J.M. Navarro, C. Bosch, A. Rubio Dragados,
More informationResearch Statement MAXIM LIKHACHEV
Research Statement MAXIM LIKHACHEV My long-term research goal is to develop a methodology for robust real-time decision-making in autonomous systems. To achieve this goal, my students and I research novel
More informationAffordance based Human Motion Synthesizing System
Affordance based Human Motion Synthesizing System H. Ishii, N. Ichiguchi, D. Komaki, H. Shimoda and H. Yoshikawa Graduate School of Energy Science Kyoto University Uji-shi, Kyoto, 611-0011, Japan Abstract
More informationMethodology for Agent-Oriented Software
ب.ظ 03:55 1 of 7 2006/10/27 Next: About this document... Methodology for Agent-Oriented Software Design Principal Investigator dr. Frank S. de Boer (frankb@cs.uu.nl) Summary The main research goal of this
More informationMSMS Software for VR Simulations of Neural Prostheses and Patient Training and Rehabilitation
MSMS Software for VR Simulations of Neural Prostheses and Patient Training and Rehabilitation Rahman Davoodi and Gerald E. Loeb Department of Biomedical Engineering, University of Southern California Abstract.
More informationComponents for virtual environments Michael Haller, Roland Holm, Markus Priglinger, Jens Volkert, and Roland Wagner Johannes Kepler University of Linz
Components for virtual environments Michael Haller, Roland Holm, Markus Priglinger, Jens Volkert, and Roland Wagner Johannes Kepler University of Linz Altenbergerstr 69 A-4040 Linz (AUSTRIA) [mhallerjrwagner]@f
More informationVirtual Grasping Using a Data Glove
Virtual Grasping Using a Data Glove By: Rachel Smith Supervised By: Dr. Kay Robbins 3/25/2005 University of Texas at San Antonio Motivation Navigation in 3D worlds is awkward using traditional mouse Direct
More informationVIRTUAL REALITY FOR NONDESTRUCTIVE EVALUATION APPLICATIONS
VIRTUAL REALITY FOR NONDESTRUCTIVE EVALUATION APPLICATIONS Jaejoon Kim, S. Mandayam, S. Udpa, W. Lord, and L. Udpa Department of Electrical and Computer Engineering Iowa State University Ames, Iowa 500
More informationINTERACTIVE ARCHITECTURAL COMPOSITIONS INTERACTIVE ARCHITECTURAL COMPOSITIONS IN 3D REAL-TIME VIRTUAL ENVIRONMENTS
INTERACTIVE ARCHITECTURAL COMPOSITIONS IN 3D REAL-TIME VIRTUAL ENVIRONMENTS RABEE M. REFFAT Architecture Department, King Fahd University of Petroleum and Minerals, Dhahran, 31261, Saudi Arabia rabee@kfupm.edu.sa
More informationShared Virtual Environments for Telerehabilitation
Proceedings of Medicine Meets Virtual Reality 2002 Conference, IOS Press Newport Beach CA, pp. 362-368, January 23-26 2002 Shared Virtual Environments for Telerehabilitation George V. Popescu 1, Grigore
More informationSimulation of Tangible User Interfaces with the ROS Middleware
Simulation of Tangible User Interfaces with the ROS Middleware Stefan Diewald 1 stefan.diewald@tum.de Andreas Möller 1 andreas.moeller@tum.de Luis Roalter 1 roalter@tum.de Matthias Kranz 2 matthias.kranz@uni-passau.de
More informationSubject Description Form. Upon completion of the subject, students will be able to:
Subject Description Form Subject Code Subject Title EIE408 Principles of Virtual Reality Credit Value 3 Level 4 Pre-requisite/ Corequisite/ Exclusion Objectives Intended Subject Learning Outcomes Nil To
More information3D Form Display with Shape Memory Alloy
ICAT 2003 December 3-5, Tokyo, JAPAN 3D Form Display with Shape Memory Alloy Masashi Nakatani, Hiroyuki Kajimoto, Dairoku Sekiguchi, Naoki Kawakami, and Susumu Tachi The University of Tokyo 7-3-1 Hongo,
More informationGLOSSARY for National Core Arts: Media Arts STANDARDS
GLOSSARY for National Core Arts: Media Arts STANDARDS Attention Principle of directing perception through sensory and conceptual impact Balance Principle of the equitable and/or dynamic distribution of
More informationUMI3D Unified Model for Interaction in 3D. White Paper
UMI3D Unified Model for Interaction in 3D White Paper 30/04/2018 Introduction 2 The objectives of the UMI3D project are to simplify the collaboration between multiple and potentially asymmetrical devices
More informationMPEG-V Based Web Haptic Authoring Tool
MPEG-V Based Web Haptic Authoring Tool by Yu Gao Thesis submitted to the Faculty of Graduate and Postdoctoral Studies In partial fulfillment of the requirements For the M.A.Sc degree in Electrical and
More informationUSING VIRTUAL REALITY SIMULATION FOR SAFE HUMAN-ROBOT INTERACTION 1. INTRODUCTION
USING VIRTUAL REALITY SIMULATION FOR SAFE HUMAN-ROBOT INTERACTION Brad Armstrong 1, Dana Gronau 2, Pavel Ikonomov 3, Alamgir Choudhury 4, Betsy Aller 5 1 Western Michigan University, Kalamazoo, Michigan;
More informationVirtual Reality as Innovative Approach to the Interior Designing
SSP - JOURNAL OF CIVIL ENGINEERING Vol. 12, Issue 1, 2017 DOI: 10.1515/sspjce-2017-0011 Virtual Reality as Innovative Approach to the Interior Designing Pavol Kaleja, Mária Kozlovská Technical University
More informationPath Planning for Mobile Robots Based on Hybrid Architecture Platform
Path Planning for Mobile Robots Based on Hybrid Architecture Platform Ting Zhou, Xiaoping Fan & Shengyue Yang Laboratory of Networked Systems, Central South University, Changsha 410075, China Zhihua Qu
More informationSPIDERMAN VR. Adam Elgressy and Dmitry Vlasenko
SPIDERMAN VR Adam Elgressy and Dmitry Vlasenko Supervisors: Boaz Sternfeld and Yaron Honen Submission Date: 09/01/2019 Contents Who We Are:... 2 Abstract:... 2 Previous Work:... 3 Tangent Systems & Development
More informationSTRATEGO EXPERT SYSTEM SHELL
STRATEGO EXPERT SYSTEM SHELL Casper Treijtel and Leon Rothkrantz Faculty of Information Technology and Systems Delft University of Technology Mekelweg 4 2628 CD Delft University of Technology E-mail: L.J.M.Rothkrantz@cs.tudelft.nl
More informationCS494/594: Software for Intelligent Robotics
CS494/594: Software for Intelligent Robotics Spring 2007 Tuesday/Thursday 11:10 12:25 Instructor: Dr. Lynne E. Parker TA: Rasko Pjesivac Outline Overview syllabus and class policies Introduction to class:
More informationPangolin: A Look at the Conceptual Architecture of SuperTuxKart. Caleb Aikens Russell Dawes Mohammed Gasmallah Leonard Ha Vincent Hung Joseph Landy
Pangolin: A Look at the Conceptual Architecture of SuperTuxKart Caleb Aikens Russell Dawes Mohammed Gasmallah Leonard Ha Vincent Hung Joseph Landy Abstract This report will be taking a look at the conceptual
More informationA User Friendly Software Framework for Mobile Robot Control
A User Friendly Software Framework for Mobile Robot Control Jesse Riddle, Ryan Hughes, Nathaniel Biefeld, and Suranga Hettiarachchi Computer Science Department, Indiana University Southeast New Albany,
More informationA flexible application framework for distributed real time systems with applications in PC based driving simulators
A flexible application framework for distributed real time systems with applications in PC based driving simulators M. Grein, A. Kaussner, H.-P. Krüger, H. Noltemeier Abstract For the research at the IZVW
More informationDistributed Simulation of Dense Crowds
Distributed Simulation of Dense Crowds Sergei Gorlatch, Christoph Hemker, and Dominique Meilaender University of Muenster, Germany Email: {gorlatch,hemkerc,d.meil}@uni-muenster.de Abstract By extending
More information