Shapes: A Multi-Sensory Environment for the B/VI and Hearing Impaired Community
Keith Adam Johnson and Sudhanshu Kumar Semwal*
Department of Computer Science, University of Colorado, Colorado Springs, CO, USA

ABSTRACT

The focus of our paper is to describe a multi-sensory Virtual Environment (MSVE), called Shapes, which includes touch (haptic), scent and sound feedback. The touch and sound I/O create a 3-D environment for the user, while scent feedback replaces sound feedback for hearing impaired users. Scent also enhances the environment and acts as a catalyst during the exploration of the MSVE. The Shapes virtual environment consists of three unique solid objects (shapes) and three associated containers. The user is asked to find and select the solid shapes and move them into their appropriate containers. Implementation details and experimental results are summarized. The success of scent feedback as an additional I/O channel is measured by its correlation with task performance in Shapes. The overall experience of this multi-sensory HCI is expected to foster future development of hardware I/O for visually and/or hearing impaired computer users.

Keywords: Virtual environments for disability; visually and hearing impaired; scent, haptics, sound interaction.

Index Terms: K.4.2 [Assistive Technologies for Persons with Disabilities]; H.5.2 [User Interfaces]: Haptic I/O

1 INTRODUCTION

Most of today's human-computer interfaces concentrate on the visual aspect of human senses. The demand for ever-increasing display-graphics capabilities stems largely from the gaming and multimedia industries. However, the visually impaired and the blind realize little to no benefit from these graphics improvements. For the visually impaired, much research has explored using 3-D sound instead of a visual display (Frauenberger & Noisternig, 2003; Lecuyer, Mobuchon, Megard, Perret, Andriot, & Colinot, 2003).
While 3-D sounds may help a blind person better understand their surroundings, performing 3-D tasks in a virtual environment, for example placing one object inside another, is usually hard with 3-D sound alone and becomes easier with additional forms of feedback. We use sound, haptics and aromas to help with the non-trivial 3-D tasks at hand. In the work presented in this paper, there is no attempt to use taste because there is no apparent relationship of taste to our work. Hill, Rieser, Hill, Halpin & Halpin (1993) showed that the auditory channel can provide the visually impaired with information about external events, for example the presence of other people, the material that objects are made of (hard or soft, depending on reflections) and distances in the surrounding space. Frauenberger et al. (2003) observed that the entire visually impaired group in their study showed a distinct ability to orient themselves simply on the basis of hearing. Finally, understanding how to design an effective sound-based interface for the visually impaired is important for the overall success of the application (Morley, Petrie, O'Neill & MacNally, 1998). It is extremely important to generate 3-D sound when the target group contains the visually impaired. Most modern sound cards support 3-D sound playback, and there are numerous audio application programming interfaces (APIs) which support generation of 3-D sounds through a collection of libraries and sources, e.g. OpenAL (2009). Perhaps the more difficult part of supporting 3-D sound is the hardware setup. Most applications are limited to the number of channels a source can provide, which in turn can drive up the cost of a virtual system due to the requirement of additional hardware (speakers) for each additional channel utilized.

* kajohn10@msn.com, ssemwal@uccs.edu
There are other ways to present 3-D audio though, for example by using only two output devices (speakers or headphones) to create ambisonic sounds. An excellent overview of an algorithm to deliver ambisonic 3-D sound through n loudspeakers in a horizontal plane surrounding the listener is given in (Frauenberger et al., 2003). Haptics is considered both an effector, through the use of muscles, tendons and articulations (kinesthetics) which cause movements and relay positions, and a receptor, allowing us to feel temperature, pressure or pain, which is relayed by sensors located under the skin. Similar to 3-D sound, this form of feedback has a plethora of research dedicated to tactile and force feedback (Baptiste-Jessel et al., 2004; Iwata, 1990; Jansson & Billberger, 1999; Colwell et al., 1998; Johnson, 2010; Lahav & Mioduser, 2000; Lecuyer et al., 2003; Raisamo et al., 2007; Sjostrom, 2001; Tzovaras et al., 2002). Constraining the user to the VE space through a haptic device is important, as it helps prevent blind users from becoming confused about how much space is available in the virtual environment. Haptics can also provide body-centered references for navigating 3-D space (Unger, Blades, & Spencer, 2010). Force-feedback devices can provide lessons on shape manipulation (Semwal & Evans-Kamp, 2000), weight, mass and forces, and even object orientation. Cane-like force-feedback devices have obvious benefits, such as training visually impaired users to navigate through real-world locations. Haptic tools help students to be integrated in regular classrooms (Moll and Pysander, 2013). Visual graphics can also be presented using tactile zooming (Rastogi and Pawluk). A real-world tangible model (Eriksson & Gardenfors, 2006) can also play an important role in forming cognitive maps and then transferring that knowledge to the virtual world. Users are not limited by the input devices and can explore the tangible model with greater precision, using both hands and all fingers.
The sense of smell, or olfaction, is by far the least explored of the human senses, except perhaps taste. Most research covering olfaction could be considered more of an enhancement to the virtual environment; it enriches the user's experience, making it more realistic (Ademoye & Ghinea, 2007; Mochizuki et al., 2004). The intent of our research is to look at olfaction as another complete I/O channel, eliciting a specific response from
the user based on the scents, much like sounds do. A few applications are described in (Bodnar et al., 2004; Brewster et al., 2006); however, both were based around sighted tasks. Research in Olfoto (Brewster, McGookin, & Miller, 2006) showed that scent could be used to classify categories of photos. The AROMA project (Bodnar, Corbett, & Nekrasovski, 2004) provided detailed experiments on olfaction versus vision- and sound-based notification mechanisms. They found that while olfaction was less effective in delivering notifications to the user, it was still usable as a feedback channel and was less disruptive to the user's primary task. In our implementation, called Shapes, scents are tied to specific objects, and it is anticipated that this will assist the working memory of the Shapes player. If the user selects an object, a scent tied to that object is emitted, hopefully allowing the user to recall what that object is without the need to feel the entire object multiple times for identification. Because human olfaction can occur at a subconscious level, we could replace an intense visual, touch, or sound interaction with a subtle olfaction-based interaction. Creating an application that simultaneously utilizes all four feedback channels is very complicated and takes an enormous amount of planning and design work, as explained on the web site Tiresias.org. Shapes is a four-sense feedback virtual environment and presents unique opportunities for both the user and the computer, especially for those who are visually and/or hearing impaired. The environment can extend outside the screen to the real world in front of it, can now be felt through a haptic device, and offers the possibility of aroma interaction.

2 SHAPES VIRTUAL ENVIRONMENT

Incorporating the sense of smell in experiments poses a unique and difficult task.
It is a common occurrence for someone to smell a particular scent and instantaneously recall some specific event in their past when that same scent was present. That type of recall is related to long-term memory. Our project, however, is centered on a user's working memory, and neuroscience research has shown strong links between olfaction and working memory, along with attentiveness, reaction time, mood and emotions (Brewster et al., 2006; Michael et al., 2003). Research done by Cann and Ross (1989) demonstrates the link between smell and memory, where most memories evoked were complex images or experiences. We have about a thousand receptors in our nose, and Turin (1996) showed how each receptor could sense a single chemical bond in a molecule. Therefore, generating artificial scents to match real-world scents can be, and often is, a difficult task. Another problem is that scents cannot be easily categorized, and are considered subjective. A minty smell could be categorized as peppermint or spearmint; some people may be able to differentiate those two smells and assign the exact name to each, but others may not be able to distinguish them, and will call both mint-like. Also, previous research and applications attempted to classify certain smells as pleasant or offensive; however, one person may think a smoky-campfire smell is pleasant while another may not, so using these classifiers may be misleading. There are numerous types of scent-emitting devices on the market, most of which serve commercial purposes such as Disney's Soaring exhibit. There are a few smaller, personal devices that are affordable and capable of supporting our research. For this research, the SDS100 from BIOPAC is utilized. This device forces air through a chamber that contains the scent mixture and has four rear fans to gently push the scent towards the user.
Other devices, like the Scent Dome from TriSenx, use a heating element to evaporate small drops of scented oils while a small fan lifts the scent into the air. An important feature to consider for these devices is the speed at which scents can be delivered. If there is too much delay between an action that is supposed to trigger a scent and the actual release and diffusion of that scent, the user may not successfully tie the two together. Most research in olfaction has been limited due to: (a) the bandwidth available, both for outputting scents and for humans receiving scents; (b) identifying and classifying scents (is it a flowery, rosy, or lavender scent?); and (c) providing immediate/timely and controlled delivery of scents. For an output device, bandwidth is defined by how many different scents can be emitted at the same time, and the intensity of those scents. Most hardware is limited to emitting just one scent at a time for a fixed period. The amount or intensity of the scent emitted varies based on what is providing the scent, e.g. concentrated scented oils, and also on the delivery method, such as heating the oil. Bandwidth for humans receiving scents relates to how many different smells the human nose can distinguish simultaneously; research has shown that humans can only distinguish about three levels of smell intensity (Engen, 1960). Finally, scents must be delivered synchronously with the data or events which they represent. This requires a controllable output that can provide immediate delivery of scents. For this research, the number of different scents utilized has been kept to a minimum to ensure the user can easily distinguish between them, which is consistent with the human bandwidth limitation and the difficulty of scent classification. The user is not expected to precisely classify the scents, but simply to identify what they smell in their own words, and to use the scent as a form of positive reinforcement of the information being presented.
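As a minimal illustration of constraints (a)-(c), a scent controller can enforce one open chamber at a time, a fixed emission period, and immediate opening on a trigger. The sketch below is our own simplification in Python; the class and parameter names are invented and do not correspond to the SDS100 driver API.

```python
import time

class ScentScheduler:
    """Sketch of a scent controller honoring the constraints above: one
    chamber open at a time (output bandwidth), a fixed emission period,
    and prompt opening after a trigger. Names are ours, not the SDS100 API."""

    def __init__(self, emit_seconds=2.0):
        self.emit_seconds = emit_seconds
        self.open_chamber = None   # at most one scent can be emitted at once
        self._close_at = 0.0

    def trigger(self, chamber, now=None):
        """Open `chamber` immediately, implicitly closing any other chamber."""
        now = time.monotonic() if now is None else now
        self.open_chamber = chamber
        self._close_at = now + self.emit_seconds
        return self.open_chamber

    def update(self, now=None):
        """Poll every frame; closes the chamber once its period has elapsed."""
        now = time.monotonic() if now is None else now
        if self.open_chamber is not None and now >= self._close_at:
            self.open_chamber = None
        return self.open_chamber
```

A real controller would also have to model diffusion delay, which this timer-only sketch ignores.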
The Shapes virtual environment was created using multiple media devices: a PHANToM force-feedback device for haptics, OpenAL for audio and OpenGL for graphics. Shapes also includes smell, by incorporating an SDS100 from BIOPAC. The application was ported to a Visual C++ project, utilizing OpenGL and OpenAL.

2.1 Shapes - Objects

Earlier Shapes contained only three distinct shapes which the user can select and move: Sphere, Cube and Pyramid. Each shape has a respective container in which it must be placed: a Cylinder, Box and Prism. Figure 1 below shows two screenshots; the image on top is the layout of the virtual environment when Shapes begins, and the image on bottom shows each draggable object placed in its correct container. Note the small blue dot in each scene of Figure 1; this represents the end-effector of the PHANToM device, called the proxy, and will be discussed further in the next section. Also, we will explain what is meant by draggable objects in a later section.

Figure 1: Shapes scenes displayed on a non-immersive monitor

2.2 3-D Scenes

After receiving feedback from individuals who did not have prior experience with a tactile device or any experience manipulating a 3-D virtual environment, it became obvious that some type of training scenario was needed. Therefore, the virtual environment was simply broken up into three different virtual environments, one for each shape-container combination. Once the user places the shape into the container, the game automatically loads the next shape-container
pair, until all three have been completed. Figure 2 shows multiple screenshots as a user progresses through the training scenes. Finally, a more challenging virtual environment was created (Figure 3, left), which included two sizes of each shape-and-container pair, and multiple shapes to find and move into their respective containers. It was expected that users would take much longer to complete these tasks.

2.3 Haptic and Force Feedback

Two different force-feedback models (Figure 3, right) can be used: CONTACT and CONSTRAINT. In one version of the game, the draggable shapes within the virtual environment were defined using the CONSTRAINT model, which assists players in finding the draggable shapes [14]. This model sets the property of the object's surface to act as a gravity well. Using the CONSTRAINT model requires defining the magnitude of the object's pull-force, which essentially defines how close the PHANToM proxy (the blue cursor in the scene) needs to be to the object's surface before being attracted to it. The greater the value of this pull-force, the farther the proxy can be from the object when it begins to feel the attracting force. Upon reaching the surface, the proxy snaps to it, and becomes constrained by the object's surface. The user can now move the proxy along the object's surface without fear of leaving the surface and possibly losing the object within the virtual space. In order to leave, or escape, an object's surface, the user must apply a force normal to that surface. Using the CONTACT model, objects are simply felt by pressing against their surfaces; there are no attraction forces towards the surface. This model allows the player to apply as much force as they need along the surface of an object in order to determine its shape. Each of the containers in the virtual environment was built using the CONTACT model.
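The difference between the two models can be sketched for the simplest case, a sphere. The Python below is purely illustrative: the function names, the linear spring gain k and the pull_radius threshold are our own choices, not the behavior of the PHANToM toolkit used in Shapes.

```python
import math

def constraint_force(proxy, center, radius, pull_radius, k=200.0):
    """CONSTRAINT-style gravity well (illustrative sketch): within
    pull_radius of a sphere's surface the proxy is pulled onto the
    surface; farther away, no force is felt."""
    d = math.dist(proxy, center)
    gap = d - radius                       # signed distance to the surface
    if d == 0.0 or abs(gap) > pull_radius:
        return (0.0, 0.0, 0.0)             # outside the well: no attraction
    # Spring force toward the surface along the radial direction.
    scale = -k * gap / d
    return tuple(scale * (p - c) for p, c in zip(proxy, center))

def contact_force(proxy, center, radius, k=200.0):
    """CONTACT-style model: force only when the proxy presses into the
    surface; it pushes the proxy back out, and there is no attraction."""
    d = math.dist(proxy, center)
    if d == 0.0 or d >= radius:
        return (0.0, 0.0, 0.0)             # not touching: nothing to feel
    scale = k * (radius - d) / d           # push outward along the radial direction
    return tuple(scale * (p - c) for p, c in zip(proxy, center))
```

Note how only the CONSTRAINT function produces a force while the proxy is still outside the object, which is what makes small draggable shapes easier to find.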
These objects are larger than the draggable objects, and therefore should be easier to find. Similarly, all four walls providing boundaries for the virtual environment are built using the CONTACT model. The downside of this technique is that the player must now find all objects within the virtual environment without any force-related assistance. This is part of the reason why we created a tangible model, which is explained in the next section. The term draggable objects means that an object is haptically defined to be selectable by the PHANToM device, or proxy, in the virtual environment, and can be moved around within it. In Shapes, only the shape objects (sphere, cube and pyramid) are defined as draggable, while the containers and bounding walls are not. The PHANToM force-feedback device (Figure 3, right) allows for six degrees of freedom. The mechanical arms are connected to a spherical centre that rotates freely on its base. The other three degrees are along the pen-shaped end-effector, which allows for rotation along all three axes, i.e. pitch, yaw and roll. The tip of the pen-shaped end-effector is what is referred to as the proxy in the virtual environments. Forces are enacted based on the location of this tip, or proxy, in the virtual environment. This is easily correlated to a real-world ballpoint pen: as you push the pen down onto a piece of paper to write, you feel at the tip of the pen the force of the surface upon which the paper rests.

2.4 Audio Feedback

When a user selects an object within the virtual environment, a sound plays describing that object, such as "Sphere", "Cube", "Pyramid", "Cylinder", "Box" or "Prism". There are also sound cues to notify the player whether the draggable object has been placed in the correct ("chime") or incorrect ("buzzer") container.
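All of these cues are spatialized, weighted toward the side of the scene holding the object and attenuated with distance, as described next. A minimal stereo approximation of that weighting can be sketched as follows; Shapes itself relies on OpenAL's 3-D source positioning, so the parameter names and the linear falloff here are our own invention.

```python
import math

def cue_gains(obj_pos, listener_pos, half_width=1.0, max_dist=4.0):
    """Stereo sketch of the localization cues: gain is higher on the side
    of the scene holding the object, and both channels get quieter as the
    object moves farther from the player. half_width and max_dist are
    assumed scene dimensions, not values from the Shapes code."""
    dist = math.dist(obj_pos, listener_pos)
    gain = max(0.0, 1.0 - dist / max_dist)        # linear distance attenuation
    dx = obj_pos[0] - listener_pos[0]
    pan = max(-1.0, min(1.0, dx / half_width))    # -1 = far left, +1 = far right
    return gain * (1.0 - pan) / 2.0, gain * (1.0 + pan) / 2.0
```

An object on the player's left thus plays almost entirely through the left channel, while a distant object is nearly silent in both channels.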
Finally, the walls which define the virtual environment also provide a sound cue ("left wall", "back wall", etc.) to identify which wall is being touched if it is selected by the user. All sound cues are produced to provide 3-D localization for visually impaired players. Given two speakers, left and right, sounds play heavier (with higher gain) from the left speaker if the object is selected in the left half of the virtual environment, and similarly for the right. Sounds also play louder from both speakers the closer the object is to the player. The farther away the object is, the less gain it is given (in the OpenAL code) and therefore the quieter the sound.

Figure 2: Shapes Training Scenes

Figure 3 (Left): Complex Shapes - Multiple Shape and Container Pairs. Figure 3 (Right): PHANToM Force-Feedback Device.

2.5 Scent Feedback

In Shapes we provide a specific scent for each shape and container in the virtual environments. When a user selects one of these objects, a specific scent chamber is opened via forced air from a small air compressor connected through the back of the device, which continually delivers the scent until the object is released. Four 80mm fans mounted on the rear of the device can be activated to assist in pushing the scent toward the user (Figures 4 and 7). Our focus is to determine whether the sense of smell can be fully utilized as an I/O channel for the visually impaired, and perhaps also replace sound for hearing impaired users, which may lead to broader disability applications for aromas in computing environments.

Figure 4: SDS-100 Scent Palette

3 CREATION OF TANGIBLE MODEL

A tangible model (Figure 5) was created to assist the user in learning the basic virtual environment before they use Shapes, and to show what is expected from them as the end-goal. Presenting blind users with a real-life tangible model of the virtual environment helps with cognitive maps, as was demonstrated in previous research (Raisamo et al., 2007).
For Shapes, a simple tangible model was constructed with the three unique shapes and containers (Figure 5, [17]). The containers are held in place with wires to simulate their rigidity in the virtual environment (they cannot be moved). The three shapes are free to be moved around and placed inside their respective containers. Everything was placed in
a large cardboard box, which simulates the bounding walls of the virtual environment. A pen is attached to the box to simulate the end-effector of the PHANToM device and to exemplify its physical limits.

Figure 5: Shapes Tangible Model

4 INTERACTION

In all scenes, the floor is flat and parallel to the real-world floor, and the rotation of 45 degrees was removed (Figure 6) for haptic ease. Along with this change, two additional walls were added, the top and front. This completed the boundaries for the entire virtual environment, which should help users feel more comfortable exploring the entire space and prevent them from becoming lost. The third change randomly places the shapes and containers in a new location after each successful game for both the single-shape (during training) and multiple-shape (three shape-container pairs) virtual environments. This allowed the player to continually practice finding and recognizing shapes and moving them to their containers in a new location. Figure 6 shows how the new environment looks, without the front boundary as otherwise we would not see anything.

Figure 6: Shapes Scenes without 45 degree Angle

Finally, two different 3-D sound sources, which are simple tones varying only in pitch, were attached to the proxy. One source provides constant audio feedback on the location of the proxy within the virtual environment when no shape is currently selected and being dragged. This tone repeats about once per second, providing ample feedback during proxy movement. The second tone is utilized once a player selects a shape and is actively dragging it around the virtual environment. The rate of playback for this tone varies based on the distance between the selected object and its respective container. As the shape moves closer to the correct container, the rate of playback increases; conversely, as the shape moves away from the correct container, the rate decreases. These changes were made per the recommendation of one of our advisors, who worked directly with visually disabled students. The constant 3-D feedback via sound should help visually impaired players keep track of where they are while moving within the virtual environment, and assist in finding the location of containers.

4.1 Scent Selection

Given three unique shapes and three unique containers, a total of six distinct scents were required. The scents were chosen so that one group of similar scents covers the shapes, and another group of similar scents covers the containers. For the shapes, the scent group selected includes Lavender, Rose and Garden Blossom. The other three scents, chosen to cover the containers, were Vanilla, Mango Citrus and Pina Colada. Note that the scents within each group are related, or could be categorized together as a similar scent family: flowery scents for the shapes, and food flavours for the containers. Finally, a seventh scent, coffee, was added due to its known effect of clearing the nose of any scents, returning the human nose to a baseline for scent detection (Czarney et al., 1999). Figure 7 shows a single scent capsule placed in the first scent chamber of the scent device.

Figure 7: Scent Chambers

5 RESULTS AND INFORMAL FEEDBACK

To foster this learning, all 3-D sound cues for the shapes and containers were removed, as this removal forces the user to rely on their senses of touch and smell in order to complete the game. Note that if a test subject is both visually and hearing impaired, these two senses are all that remain for them to learn about the environment around them, except taste of course. No formal unbiased testing was performed due to time, resource and monetary constraints. Family members and friends, who were willing participants in their own right, provided informal observations that are summarized in the following sections.
Out of five players, one player was legally blind, while the other players had good vision but agreed to be blindfolded during the tests. It should be understood that further formal, comprehensive and statistically significant testing would be required if this entire system were to be made commercially available.

5.1 Training and Observations

All but two of the five players completed just one virtual-environment training cycle, even though they were all told they could complete as many as they wanted. This suggests that the training VEs were quite successful in introducing the overall intent of Shapes to the small pool of participants tested in our experiments. The average time a player spent in the first training virtual environment (sphere-cylinder pair) was 7 minutes and 57 seconds (7:57); this includes both those who could hear and those without audio feedback. Our observation was that this extended amount of time was due to players becoming familiar with the virtual environment and the PHANToM device. About half of the five participants, including the legally blind player, spent most of this time exploring the bounding walls, especially along the back wall, forming a mental model of the virtual environment. The other players moved randomly around the space. It was felt that players who had previous experience working with 2-D and 3-D graphics (CAD software, games) were more organized in their exploration of the 3-D virtual environment. The next training virtual environment (cube-box) was completed a little faster, 6:41 (mins:sec) on average, with generally faster times for those who had sound feedback. Players were more comfortable using the PHANToM device and quickly realized
the bounding walls were the same, allowing them to explore the entire virtual environment much more quickly. However, it was not until the final training virtual environment (pyramid-prism) that players really started moving comfortably in the +Z direction, which is out, or away from, the computer screen. It should be noted that the physical limit of the PHANToM in the +Z direction was rarely reached; players always stopped short of the limit and began moving back in the -Z direction (in, or towards, the computer screen). This final training virtual environment was completed the quickest, within 5:44 on average. Finally, it was observed that the legally blind player did better during the training virtual environments than the sighted players, having faster completion times in all training virtual environments except the first one. Players who had sound feedback performed very well, completing the game within 15 minutes on average, while the player without sound took longer than 15 minutes and never completed the game due to time constraints. The player without sound stated that they really concentrated on building a mental model of the virtual environment through touch feedback.

5.2 Audio Feedback Observations

3-D audio provided an excellent source of feedback. Using sound feedback in the training virtual environments, some players noted that they didn't pay much attention to the scents emitted when selecting shapes or containers. Most of their focus was on the name of the object they selected and on the beeping, after selecting a shape, that helped guide them to the correct container. Also, some players noted they were not concerned with remembering the location of each container since they knew they would be guided to it with sound, except for the blind and deaf player of course. Finally, the chime and buzzer sounds for correct and incorrect shape placements were well received by the players as indicators of success or failure, respectively.
5.3 Touch Feedback Observations

The sense of touch was instrumental in providing players a way to build a mental model of the virtual environment. The players who had previous experience with 3-D applications immediately used the haptic device to explore the boundaries of the virtual environment, tracing the edges of each wall, mostly along the back wall, which helped them get an understanding of the relative size of the virtual environment. For those players who could hear, less emphasis was placed on the haptic device when exploring shapes and containers. Once a player found an object, there was limited exploration of the object's surface with the haptic device, as they quickly clicked to determine what object it was. As the visually and hearing impaired player moved more towards the middle of the virtual environment in an attempt to find shapes and containers, the player utilized distances from walls in order to construct their mental model. After finding a shape and selecting it to drag around, some players immediately moved to one of the top-back corners and then began moving out the distance at which they remembered the correct container to be; this technique worked surprisingly well, especially for the legally blind player.

5.4 Scent Feedback Observations

All players described the scent feedback as an enjoyable, fun and unique experience. Players who were blind but not deaf successfully utilized scent feedback, although to a lesser extent. It was observed that scents with significant personal impact were easily recalled: the player either didn't like the scent, or they immediately recognized and categorized it, even to the exact name of the concentrated oil. Other scents were either unnoticeable to the player, or were stated to be too similar to other scents to make a clear distinction, and so scent-shape relations could not be established.
This problem was made evident in previous research; however, the readily available scented-oil selection for this research was fairly limited to flowery or food-related scents. Regardless, the players were able to make at least one shape-to-scent connection. The legally blind player had a slightly higher scent-to-object success rate, matching at least one more scent-to-shape and one more scent-to-container assignment than the other players. This informally suggests that aroma is a useful feedback channel in virtual environments, especially for people who may be visually and/or hearing impaired. This was a crucial milestone for our research, as it showed that the scents (output) and the sense of smell (input) were able to provide sufficient feedback to enable a player to complete the particular tasks within this virtual environment. Finally, the legally blind player did identify a few scents and recognized three scent-to-object pairs. Obviously it is impossible to build statistical models from just one player, but having at least one legally blind player was crucial in understanding how well different forms of feedback are received when combined. The following are some quotes from the players about the scents they smelled:

- "Ah, that is Rose, I know I have the pyramid"
- "I think that is that sweet smell, which is the sphere"
- "Eww, yuck, I know I have the pyramid, and it goes to that tangy smell triangle container (prism)"

The coffee scent had two different effects on certain players. Because the coffee scent was released only one second after any other scent chamber closed, it could blend into the previous scent, which caused some players to misclassify the original scent. For example, the vanilla scent was classified as "burnt cookies" by one of the players.
The second effect was the intended use of coffee: clearing the player's nose of scents. As stated by numerous players, it allowed for better recognition of two unique scents emitted sequentially when selecting different objects.

5.5 General Observations

The placement of the PHANToM device with respect to the player's body is very important. The device's +/- Z-axis should align with the player's forearm. Sitting straight-on with the force device ensures alignment of the virtual space and the player's movements in real space. For all players, it appeared that the beeping of the proxy location was not processed in parallel with touch feedback; the touch feedback took precedence. Players felt their way around and never said that the beeps were helpful. However, once a player selected a draggable shape and the beeps changed to relate the distance to the correct container, all players shifted their focus away from the touch feedback and listened intently to the beeps. At this point, only two players utilized their sense of touch to help determine whether they were inside a particular container. There was some level of frustration demonstrated by all players, and most would begin moving the proxy very quickly around the virtual environment. During this period of rapid movement, players passed right over shape and container surfaces and did not notice the slight change in touch feedback. Sound feedback was similarly affected when players dragged a shape around trying to find its container.

6 CONCLUSION

Every player benefited from the scent feedback. The sound feedback, specifically the beeping for distances between shapes and containers, proved extremely useful, enabling players to successfully complete the tasks within each virtual environment. Haptic force-feedback for the sense of touch provided the means for all players to understand the virtual environment with which they were interacting.
Using the pen attached to the tangible model to feel the objects in the box (instead of using the hands) translated well: moving to the virtual environment and feeling it with the PHANToM device was fairly seamless. In summary, our research showed definite promise toward incorporating haptics and scent interaction as effective channels of communication for the visually and hearing-impaired community. There are numerous possibilities for future research leveraging the Shapes application; the following are only a few examples: (a) We have reported interesting observations based on a limited number of players who volunteered their time. Obviously a larger test base, with a formal double-blind study, is required in order to perform more convincing statistical analysis and projections of the usability of touch and smell versus sight and sound. (b) An interesting direction of research could examine how well the sense of smell can leverage human working memory. It may be possible to increase a person's productivity by using scent feedback as a more passive form of I/O. (c) One could study the possible emergence of patterns of exploration utilized by the players to explore and locate objects in the virtual environment, and correlate those to previous experiences with 3-D environments.

ACKNOWLEDGEMENTS

We thank Ms. Bonnie Snyder, Technology Consultant for the Blind and Visually Impaired, LLC, for her insightful comments.

REFERENCES

[1] Ademoye, O.A., & Ghinea, G. (2007). Olfactory Enhanced Multimedia Applications: Perspectives from an Empirical Study. SPIE 6504, 65040A.
[2] Baptiste-Jessel, N., Tornil, B., & Encelle, B. (2004). Using SVG and a force feedback mouse to enable blind people to access "graphical" Web-based documents. In Proceedings of the Ninth International Conference on Computers Helping People with Special Needs (ICCHP'04). Lecture Notes in Computer Science, 3118.
[3] Bodnar, A., Corbett, R., & Nekrasovski, D. (2004). AROMA: Ambient awareness through Olfaction in a Messaging Application. Proceedings of the 6th International Conference on Multimodal Interfaces.
[4] Brewster, S.A., McGookin, D.K., & Miller, C.A.
(2006). Olfoto: Designing a Smell-Based Interaction. Proceedings of the Conference on Human Factors in Computing Systems.
[5] Cann, A. & Ross, D. (1989). Olfactory stimuli as context cues in human memory. American Journal of Psychology, 102(1).
[6] Colwell, C., Petrie, H., Kornbrot, D., Hardwick, A., & Furner, S. (1998). Haptic virtual reality for blind computer users. In Proceedings of the Third International ACM Conference on Assistive Technologies (Assets '98), Marina del Rey, California. ACM, New York, NY.
[7] Czerny, M., Mayer, F., & Grosch, W. (1999). Sensory study on the character impact odorants of roasted Arabica coffee. Journal of Agricultural and Food Chemistry, 47.
[8] Engen, T. & Pfaffman, C. (1960). Absolute judgement of odor quality. Journal of Experimental Psychology, 59.
[9] Eriksson, Y. & Gardenfors, D. (2006). Computer games for partially sighted and blind children. Project report. (Active as of 4 Sep 09).
[10] Frauenberger, C. & Noisternig, M. (2003). 3D Audio Interfaces for the Blind. Workshop on Nomadic Data Services and Mobility, Graz, Austria, March.
[11] Hill, E., Rieser, J., Hill, M., Halpin, J., & Halpin, R. (1993). How persons with visual impairments explore novel spaces: Strategies of good and poor performers. Journal of Visual Impairment and Blindness, October.
[12] Iwata, H. (1990). Artificial reality with force-feedback: development of desktop virtual space with compact master manipulator. In Proceedings of the 17th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '90), Dallas, TX. ACM, New York, NY.
[13] Jansson, G. & Billberger, K. The PHANToM Used without Visual Guidance. In The First PHANToM Users Research Symposium (PURS 99).
[14] First author aaaaaa, Advisor: Second Author, Department of Computer Science, University of xxxxxx.
[15] Lahav, O., & Mioduser, D. (2000). Multisensory virtual environment for supporting blind persons' acquisition of spatial cognitive mapping, orientation, and mobility skills.
Proceedings of the Third International Conference on Disability, Virtual Reality and Associated Technologies (ICDVRAT 2000).
[16] Lécuyer, A., Mobuchon, P., Mégard, C., Perret, J., Andriot, C., & Colinot, J. (2003). HOMERE: a Multimodal System for Visually Impaired People to Explore Virtual Environments. In Proceedings of IEEE Virtual Reality 2003 (March 22-26, 2003). IEEE Computer Society, Washington, DC, 251.
[17] Michael, G.A., Jacquot, L., Millot, J.-L., & Brand, G. (2003). Ambient odors modulate visual attentional capture. Neuroscience Letters, 352.
[18] Mochizuki, A., Amada, T., Sawa, S., Takeda, T., Motoyashiki, S., Kohyama, K., Imura, M., & Chihara, K. (2004). Fragra: a visual-olfactory VR game. In ACM SIGGRAPH 2004 Sketches, Los Angeles, California. R. Barzel, Ed. ACM, New York, NY, 123.
[19] Morley, S., Petrie, H., O'Neill, A., & McNally, P. (1998). Auditory navigation in hyperspace: design and evaluation of a non-visual hypermedia system for blind users. In Proceedings of the Third International ACM Conference on Assistive Technologies (Assets '98), Marina del Rey, California. ACM, New York, NY.
[20] OpenAL. (Active as of 4 Sep 09).
[21] Raisamo, R., Patomäki, S., Hasu, M., & Pasto, V. (2007). Design and evaluation of a tactile memory game for visually impaired children. Interacting with Computers, 19(2).
[22] Sjostrom, C. (2001). Designing haptic computer interfaces for blind people. In Proceedings of the Sixth International Symposium on Signal Processing and its Applications, 1.
[23] Semwal, S.K. & Evans-Kamp, D.L. (2000). Virtual Environments for Visually Impaired. In Proceedings of the Second International Conference on Virtual Worlds, July. J. Heudin, Ed. Lecture Notes in Computer Science. Springer-Verlag, London.
[24] Tiresias.org. Guidelines for the Design of Accessible Information and Communication Technology Systems.
[25] Turin, L.
(1996). A spectroscopic mechanism for primary olfactory reception. Chemical Senses, 21.
[26] Tzovaras, D., Nikolakis, G., Fergadis, G., Malasiotis, S., & Stavrakis, M. (2002). Design and implementation of virtual environments for training of the visually impaired. In Proceedings of the Fifth International ACM Conference on Assistive Technologies (Assets '02), Edinburgh, Scotland. ACM, New York, NY.
[27] Ungar, S., Blades, M., & Spencer, C. (1996). The Construction of Cognitive Maps by Children with Visual Impairments. In J. Portugali (Ed.), The Construction of Cognitive Maps. Springer Netherlands, 32.
[28] Moll, J. & Pysander, E.-L.S. A haptic tool for group work on geometrical concepts engaging blind and sighted pupils. ACM Transactions on Accessible Computing, 4(4), Article 14:1-37, July.
[29] Rastogi, R. & Pawluk, D.T.V. (2013). Intuitive tactile zooming for graphics accessed by individuals who are blind and visually impaired. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 21(4), July 2013.
This is the Pre-Published Version. Integrating PhysX and Opens: Efficient Force Feedback Generation Using Physics Engine and Devices 1 Leon Sze-Ho Chan 1, Kup-Sze Choi 1 School of Nursing, Hong Kong Polytechnic
More informationMADE EASY a step-by-step guide
Perspective MADE EASY a step-by-step guide Coming soon! June 2015 ROBBIE LEE One-Point Perspective Let s start with one of the simplest, yet most useful approaches to perspective drawing: one-point perspective.
More informationINTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT
INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT TAYSHENG JENG, CHIA-HSUN LEE, CHI CHEN, YU-PIN MA Department of Architecture, National Cheng Kung University No. 1, University Road,
More informationThe use of gestures in computer aided design
Loughborough University Institutional Repository The use of gestures in computer aided design This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation: CASE,
More informationWelcome. My name is Jason Jerald, Co-Founder & Principal Consultant at Next Gen Interactions I m here today to talk about the human side of VR
Welcome. My name is Jason Jerald, Co-Founder & Principal Consultant at Next Gen Interactions I m here today to talk about the human side of VR Interactions. For the technology is only part of the equationwith
More informationToward an Augmented Reality System for Violin Learning Support
Toward an Augmented Reality System for Violin Learning Support Hiroyuki Shiino, François de Sorbier, and Hideo Saito Graduate School of Science and Technology, Keio University, Yokohama, Japan {shiino,fdesorbi,saito}@hvrl.ics.keio.ac.jp
More informationInterface Design V: Beyond the Desktop
Interface Design V: Beyond the Desktop Rob Procter Further Reading Dix et al., chapter 4, p. 153-161 and chapter 15. Norman, The Invisible Computer, MIT Press, 1998, chapters 4 and 15. 11/25/01 CS4: HCI
More information