Designing Audio and Tactile Crossmodal Icons for Mobile Devices

Eve Hoggan and Stephen Brewster
Glasgow Interactive Systems Group, Department of Computing Science
University of Glasgow, Glasgow, G12 8QQ, UK
{eve,

ABSTRACT
This paper reports an experiment into the design of crossmodal icons, which can provide an alternative form of output for mobile devices, using the audio and tactile modalities to communicate information. A complete set of crossmodal icons was created by encoding three dimensions of information in three crossmodal auditory/tactile parameters. Earcons were used for the audio crossmodal icons and Tactons for the tactile ones. The experiment investigated absolute identification of audio and tactile crossmodal icons when a user is trained in one modality and tested in the other (with no training in the second modality) to see if knowledge could be transferred between modalities. We also compared performance when users were static and mobile to see what effects mobility might have on recognition of the cues. The results showed that if participants were trained with Tactons and then tested with the same messages presented via Earcons they could recognize 85% of messages when stationary and 78% when mobile. When trained with Earcons and tested with Tactons, participants could accurately recognize 76.5% of messages when stationary and 79% of messages when mobile. These results suggest that participants can recognize and understand a message in a different modality very effectively. They will aid designers of mobile displays in creating effective crossmodal cues which require minimal training for users and can provide alternative presentation modalities through which information may be presented if the context requires.

Categories and Subject Descriptors
H.5.2 [User Interfaces]: Haptic I/O, Auditory (non-speech) feedback.

General Terms
Human Factors.

Keywords
Tactons (tactile icons), Earcons, crossmodal interaction, mobile interaction.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. ICMI '07, November 12-15, 2007, Nagoya, Aichi, Japan. Copyright 2007 ACM.

1. INTRODUCTION
Providing non-visual information to mobile device users is becoming an important area of research in multimodal and crossmodal interaction. We spend a great deal of our lives using our mobile devices. Whether the device is in a bag, or we are in a meeting, at a party, or listening to music, we still want to be able to interact with it. In these situations, visual feedback is not always appropriate. Although a user's eyes may be busy focusing on the primary task, many activities do not otherwise restrict users from attending to information using their remaining available senses. This is when multimodal interaction is of benefit, so that, for instance, messages can be presented through the audio modality and alerts through the tactile modality. The manufacturers of mobile devices already include audio and vibrotactile feedback in products like PDAs and mobile phones, allowing feedback to be designed for our senses of touch and hearing. Unfortunately, when the device is in a bag or pocket, tactile feedback can go unnoticed.
When a user is in a noisy environment, such as a party, or is listening to music, audio feedback can be ineffective. For example, Sam is on her way to a business meeting, walking along a busy street with her mobile phone in her bag, when she receives an important calendar reminder. As her phone is not in contact with her body, a tactile alert would probably go unnoticed, so the reminder would be best presented in audio. Next, Sam gets on a train to continue her journey with her phone in her pocket. As the train leaves the station, Sam starts downloading some music for her phone. Given that the train is noisy and she has placed her phone back in her pocket so she can read the newspaper, audio alerts alone would be insufficient to inform her of her completed download. At the same time, tactile alerts would be slightly masked as the phone is in her pocket. Here, a combination of audio and tactile feedback could let her know when her song has been downloaded. Finally, Sam arrives at her business meeting. As the boss makes a presentation, Sam receives an urgent email from her husband. Everyone in the meeting room is listening to the presentation and it would be rude for Sam to disrupt the meeting with audio feedback informing her of the incoming email. In this case, a tactile cue would be much more subtle and socially acceptable.

This scenario is an example of the need for mobile devices to provide alternative presentation modalities through which information may be presented if the context requires. As the context changes, so should the feedback modality. As mentioned, multimodal feedback is often used to reduce the visual load on mobile device users. There has been a large body of research into mobile multimodal interaction with each individual modality [8, 16, 17]. However, as this scenario has demonstrated, users need to be able to switch effortlessly between different modalities depending on the situation. They also need the option of several different modalities. Much of the research so far does not give the user a choice of modalities but simply provides one modality, resulting in unimodal interaction.

The approach used in this research to combat the problems mentioned above involves crossmodal audio and tactile feedback. Unlike multimodal interaction, crossmodal interaction uses the different senses to provide the same information. This is much like sensory substitution, where one sensory modality is used to supply information normally gathered by another [12]. Sensory substitution systems have proven to be an effective means of communicating information to people with sensory impairments [12], so they could provide an alternative method through which information can be presented to mobile device users. By employing concepts from sensory substitution, mobile devices could translate information into an auditory or tactile form so that it can be presented in the most appropriate modality to suit the context. For example, alerts providing information to the user about incoming messages (e.g. SMS, MMS, or phone call) could be crossmodally encoded in both the audio and tactile modalities. By making this information available to both the auditory and tactile senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. The research presented here investigates the design of crossmodal auditory and tactile messages, called crossmodal icons [9], which are abstract icons that can be instantiated in one of two equivalent forms (auditory or tactile). These can be used in interfaces as a means of non-visual output.

2. CROSSMODAL ICONS
Crossmodal icons enable mobile devices to output the same information interchangeably via different modalities. They can be automatically instantiated as either an Earcon or a Tacton, such that the resultant Earcons or Tactons are equivalent and can be compared as such [9]. The auditory cues used in crossmodal icons are Earcons, a common type of non-speech auditory display, which Blattner et al. define as "non-verbal audio messages that are used in the computer/user interface to provide information to the user about some computer object, operation or interaction" [2]. Brewster [4] has conducted detailed investigations of Earcons, which have shown that they are an effective means of communicating information in sound.

Tactons are used as the vibrotactile counterparts of Earcons in the design of crossmodal icons. These are structured vibrotactile messages which can be used to communicate information non-visually [3]. They are the tactile equivalent of Earcons and visual icons, and could be used for communication in situations where vision is overloaded, restricted or unavailable [3]. Tactons are created by manipulating the parameters of cutaneous perception to encode information. For example, Brown et al. [5] encoded three pieces of information into a Tacton using rhythm, roughness and spatial location to create messages for mobile telephones. Any attribute that can specify similar information across modalities is considered to be amodal in nature [13].
Thus, the crossmodal parameters used in auditory and tactile icons to encode the same information are the amodal attributes available in those two senses. Auditory and tactile displays were chosen because they are ideal candidates for crossmodal combination, given that both modalities share temporal and spatial properties. The amodal attributes shared by our senses of hearing and touch include intensity, rate, rhythmic structure and spatial location [13]. Several dimensions of information can be represented in crossmodal icons by encoding each dimension in a different amodal parameter (i.e. a parameter available in both modalities).

To develop a set of Earcons/Tactons as crossmodal icons, the information represented must be able to be encoded in both modalities. In other words, an alert encoded using a specific melody in the audio domain could not be crossmodal, as there is no tactile equivalent to melody. In contrast, an alert encoded using a particular spatial location in audio (e.g. using the cardinal points in a 3D audio soundscape) could be used as a crossmodal alert, as there is a tactile equivalent available (e.g. presenting the tactile cue with vibrotactile transducers placed in a circle on the body).

Previous research has identified rhythm, texture and spatial location as suitable crossmodal parameters, or amodal attributes, for use with auditory and vibrotactile cues [9]. Furthermore, it has shown that roughness can be mapped between modalities using amplitude modulation in the vibrotactile cues and differing timbres in the audio domain [9]. Spatial location can be perceived as equivalent in the audio and tactile modalities when tactile body positions around the waist are mapped to audio positions in a 3D soundscape around the head [10]. Although there are now three possible parameters which allow easy mappings between the auditory and tactile modalities, no complete set of crossmodal icons using a combination of these parameters had been created to test whether the concept works and whether users can transfer knowledge of messages between senses. Therefore, in this research we develop a complete set of crossmodal icons and assess learning, and the extent to which this learning transfers between the two modalities, by testing recognition rates in absolute identification and absolute matching experiments with the resulting crossmodal icons.

3. DESIGN OF 3-DIMENSIONAL ICONS
In this study, crossmodal icons were created to represent alerts which might occur on a mobile phone to inform the user of incoming messages. Three pieces of information were encoded in each crossmodal icon using the parameters identified earlier: the type of message was encoded in the rhythm, the urgency of the message was encoded in the roughness, and the sender of the message was encoded in the spatial location. These types of information were chosen as they are common alerts provided through the visual modality on current mobile devices and would be familiar to participants. The type of message had three possible values: text, email, or voicemail; the urgency of the message had two possible values: urgent or not urgent; and the sender of the message had three possible values: work, personal, or junk. This resulted in a set of 18 crossmodal icons: 18 Earcons representing the message alerts, and 18 Tactons representing the same message alerts.
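To make this 3 x 2 x 3 encoding concrete, the short sketch below (illustrative Python, not part of the experimental software; all names are ours) enumerates the full icon set by crossing the three dimensions:

    from itertools import product

    # The three information dimensions and the amodal parameter each is
    # encoded in (hypothetical names; mappings as described above).
    MESSAGE_TYPES = ("text", "email", "voicemail")   # encoded as rhythm
    URGENCY_LEVELS = ("urgent", "not urgent")        # encoded as roughness
    SENDERS = ("work", "personal", "junk")           # encoded as spatial location

    # Crossing the dimensions yields the complete 3 x 2 x 3 = 18 icon set.
    # Each abstract icon can be instantiated as an Earcon or a Tacton
    # carrying the same (rhythm, roughness, location) triple.
    crossmodal_icons = list(product(MESSAGE_TYPES, URGENCY_LEVELS, SENDERS))
    assert len(crossmodal_icons) == 18

    for icon in crossmodal_icons:
        print(icon)   # e.g. ('email', 'urgent', 'work')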

3.1 Type of Message
Three different rhythms were used to represent the three types of message: text, email, and voicemail. These rhythms have already been used successfully in tactile experiments [6]. Each rhythm was made up of a different number of beats, with the text rhythm consisting of one short beat and one long beat, the email rhythm consisting of two long beats and two short beats, and the voicemail rhythm consisting of one long beat, three short beats, and two long beats. Using a different number of beats in each rhythm helps to make the rhythms distinguishable [6]. These rhythms are presented in Figure 1 using standard musical notation.

Figure 1. Text rhythm, email rhythm, and voicemail rhythm (from [6]).

3.2 Urgency of Message
Two levels of roughness were used to represent urgent (very rough) and not urgent (smooth) messages. Brown et al. used amplitude modulation to create different levels of roughness [6]; the ones used here were based on those: an unmodulated 250Hz sine wave (smooth) and a 250Hz sine wave modulated by a 30Hz sine wave (rough). The Earcons used differing timbres as levels of roughness, based on previous experiments on crossmodal parameters [9]: a piano was used for smooth whilst a vibraphone was used for rough.

3.3 Message Sender
Three locations on the user's waist were used to encode information about the sender in the tactile crossmodal icons: three vibrotactile actuators were placed on a Velcro belt on the left hand side, the front centre, and the right hand side of the waist (Figure 4). A previous study showed that these body locations can be effectively mapped to 3D audio locations in both mobile and stationary environments, and that the waist was the most effective location [10]. The audio crossmodal icons used three locations in a 3D audio soundscape to encode the information about the sender of the message: sounds were placed on a horizontal plane around the user's head. A vibration or sound on the left hand side indicated that the message was from work, the centre indicated that the message was personal, and the right hand side represented junk (Figure 2). As an example, an urgent email from work in tactile form would be the email rhythm with a rough texture on the left hand side of the user's waist, and the audio version would present the email rhythm played by a vibraphone on the left hand side of the 3D audio soundscape.

Figure 2. Junk message indicated by audio panned to the right (Earcon) and tactile pulse on the right of the waist.
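The sketch below illustrates how a tactile instantiation of one of these icons could be rendered under the parameter values just described (250Hz carrier, 30Hz amplitude modulation for roughness, the three rhythms). It is a minimal sketch: the exact beat and gap durations and all function names are our own assumptions, not those of the system used in the experiments.

    import numpy as np

    SAMPLE_RATE = 44100  # Hz

    # Rhythms as lists of beat durations (s); the lengths are assumptions,
    # chosen only to reflect the short/long structure described above.
    RHYTHMS = {
        "text":      [0.2, 0.6],                      # one short, one long
        "email":     [0.6, 0.6, 0.2, 0.2],            # two long, two short
        "voicemail": [0.6, 0.2, 0.2, 0.2, 0.6, 0.6],  # long, 3 short, 2 long
    }
    GAP = 0.1  # silence between beats (s), an assumption

    # Sender -> actuator on the waist belt / pan position in the soundscape.
    SENDER_LOCATION = {"work": "left", "personal": "centre", "junk": "right"}

    def tacton_waveform(msg_type: str, urgent: bool) -> np.ndarray:
        """Render the vibrotactile signal: a 250Hz carrier shaped by the
        message rhythm, amplitude-modulated at 30Hz when urgent (rough)."""
        chunks = []
        for beat in RHYTHMS[msg_type]:
            t = np.arange(int(beat * SAMPLE_RATE)) / SAMPLE_RATE
            carrier = np.sin(2 * np.pi * 250 * t)            # smooth 250Hz sine
            if urgent:                                       # rough: 30Hz AM
                carrier *= 0.5 * (1 + np.sin(2 * np.pi * 30 * t))
            chunks.append(carrier)
            chunks.append(np.zeros(int(GAP * SAMPLE_RATE)))  # inter-beat gap
        return np.concatenate(chunks)

    def render_tactile_icon(msg_type: str, urgent: bool, sender: str):
        """Tactile instantiation: (waveform, which belt actuator to drive)."""
        return tacton_waveform(msg_type, urgent), SENDER_LOCATION[sender]

    wave, location = render_tactile_icon("email", urgent=True, sender="work")
    print(f"{len(wave) / SAMPLE_RATE:.2f}s Tacton on the {location} actuator")

The audio instantiation would follow the same structure, substituting a piano or vibraphone timbre for the smooth or rough texture and panning the sound to the matching position in the 3D soundscape.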
4. EXPERIMENT 1: LAB-BASED STUDY OF CROSSMODAL DISPLAYS
An experiment was conducted to investigate absolute identification of crossmodal icons encoding three dimensions of information, to see if users would be able to use them and transfer knowledge of messages learned in one modality to the other. Half of the participants were trained and tested in different modalities: one quarter of the participants was trained to identify the crossmodal Earcons and then tested with crossmodal Tactons; another quarter was trained with Tactons and tested with Earcons. As a control, the other half of the participants were trained and tested in the same modality (Table 1). Data were recorded on the identification of the three parameters: type, urgency, and sender. In addition, participants were informally interviewed about their experiences after the experiment.

Participant Group    Training    Testing
1                    Audio       Tactile
2                    Tactile     Audio
3                    Audio       Audio
4                    Tactile     Tactile

Table 1. Experiment conditions

4.1 Aim and Hypotheses
The aim of this experiment was to investigate whether, if users are trained to understand alerts in one modality, they can then identify them in the other. The hypotheses were as follows:

1. If trained to identify the information encoded in audio crossmodal icons, participants will be able to identify the same information in the corresponding tactile crossmodal icons.
2. If trained to identify the information encoded in tactile crossmodal icons, participants will be able to identify the same information in the corresponding audio crossmodal icons.
3. The rate of identification after crossmodal training will be the same as that for participants trained and tested in the same modality.

4.2 Experiment Set Up
The C2 Tactor from EAI (Figure 3) is a small wearable linear vibrotactile actuator, designed specifically to provide a lightweight equivalent to large laboratory-based linear actuators [15]. The contactor in the C2 is the moving mass itself, which is mounted above the housing and pre-loaded against the skin. This helps to provide localized feedback, as only the contact point vibrates instead of the whole surrounding area. The C2 is resonant at 250Hz but is also designed to produce a wide range of frequencies, unlike many current mobile phone actuators which have limited frequency ranges [15].

Figure 3. A C2 Tactor from Engineering Acoustics Inc.

When being tested or trained in the tactile modality, three C2 EAI Tactors were attached to the participant's waist using a belt lined with Velcro (Figure 4). The participant also wore headphones to eliminate any inadvertent audio feedback from the actuators. Tactile sensitivity can vary across the waist, so the vibrations could feel very different in intensity at different points on the waist [7]. To counteract this, each participant was asked to set the levels of the transducers so that they all felt of equivalent intensity at the start of the experiment.

The application is a purpose-built experimentation system that can present audio and tactile cues of different types in multiple locations. The system presents the participant with either a tactile or audio cue at the beginning of each task. The participant can then press the replay button to have the cue presented again. Once participants have identified the information in the cue, they can select the corresponding button and submit their answer (using the tick button). After submitting the answer, a button appears which the participants press when they are ready to move on to the next task. The system records the participant's responses, the time taken to respond, and the number of times a cue was replayed. Participants were allowed to play each cue up to 4 times per task. Replaying the cues was allowed because the expected usage of these icons is in mobile devices, where standard cues such as ringtones for incoming calls are commonly presented several times.

When being tested or trained in the audio modality, the participants again wore headphones attached to a soundcard on a PC through which the audio alerts were played. The audio cues used in this experiment were created using the AM:3D audio engine and were placed on a plane around the user's head at the height of the ears to avoid problems related to elevation perception. The sounds were located in front of the nose (0°) and at ±90° to the left and right at each ear. Participants were asked to set the volume levels of the audio to a comfortable level at the start of the experiment.

Figure 4. Belt lined with Velcro used in the experiment with 3 C2 Tactors attached.

Figure 5. Screenshot of training and testing application.

4.3 Methodology
Sixteen people (9 female, 7 male) took part in the experiment, all members of staff or students at the University. The experimental method used a between-groups design where each participant was trained in either audio or tactile and tested in either audio or tactile (see Table 1). At the beginning of the session participants were presented with a tutorial to introduce them to the concept of crossmodal icons, roughness, rhythm, etc.; they were then allowed to experiment with either the crossmodal Earcons or Tactons (depending on the group to which they belonged). After familiarizing themselves with either the Earcons or Tactons, the participants began training using a custom training/testing application we developed (Figure 5).

4.4 Training
For training and testing, the standard Absolute Identification (AI) paradigm with trial-by-trial correct-answer feedback was employed.
The AI paradigm involves a set of k stimuli, a set of k responses, and a one-to-one mapping between the stimuli and responses. The stimuli are presented one at a time in random order and the subject is instructed to respond to each stimulus presentation with the response defined by the one-to-one mapping, i.e., to identify which of the k stimuli was presented.

Originally, for the purposes of this experiment, training was used purely to ensure that all participants reached an appropriate level of understanding. However, we also became interested in how long it would take the participants to learn the sets of Earcons and Tactons, as there is little data on how long it takes to learn such cues and on whether the learning required differs between the modalities. This would also allow us to compare the results of crossmodal training to training within the same modality to see if there were differences.

The set of stimuli used to train the participants was identical to the set on which they would later be tested, except that during the training phase the stimuli were presented in the training modality. The application shown in Figure 5 was used to record participants' answers: participants had to identify the information in the cue they heard or felt and then choose the appropriate button on the display. Each stimulus alternative was presented twice during each training run, resulting in a total of 36 tasks per run. During training the participants were required to repeat experimental runs (in audio or tactile) until a run with >= 90% correct identification was achieved, so that we could measure how long it took for them to reach a good level of performance. If a participant did not reach 90% at the end of a training run, he/she received further training before being given another training run.
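This training procedure can be summarized in a short sketch (illustrative Python; present, get_response and give_feedback are hypothetical stand-ins for the experiment application and the participant, and icons is the 18-element list from the earlier sketch):

    import random

    def training_run(icons, present, get_response, give_feedback):
        """One AI training run: all 18 stimuli presented twice in random
        order (36 tasks), with trial-by-trial correct-answer feedback."""
        tasks = icons * 2
        random.shuffle(tasks)
        correct = 0
        for stimulus in tasks:
            present(stimulus)          # play the Earcon or Tacton (<= 4 replays)
            response = get_response()  # participant names type/urgency/sender
            if response == stimulus:
                correct += 1
            give_feedback(stimulus)    # correct answer shown after every trial
        return correct / len(tasks)

    def train_to_criterion(icons, present, get_response, give_feedback,
                           criterion=0.90):
        """Repeat training runs until one reaches >= 90% correct
        identification; returns how many runs the participant needed."""
        runs = 0
        while True:  # loops until the criterion run is achieved
            runs += 1
            score = training_run(icons, present, get_response, give_feedback)
            if score >= criterion:
                return runs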
4.4.1 Training Results
During the training and the experiment itself, data were collected on the number of correct responses to the complete crossmodal icons. The learning curves for each participant and each stimulus set during training are shown in Figures 6 and 7. The amount of time needed to reach the performance criterion varied across participants. These results show that, on average, it takes 2 training sessions for participants to be able to identify Earcons with recognition rates of 90% or higher, and 3 training sessions for participants to identify Tactons with recognition rates of 90% or above.

Figure 6. Learning curve for audio training.

Figure 7. Learning curve for tactile training.

There have been no other such studies into the training and learning of Earcons and Tactons. These results are promising for using audio and tactile interchangeably, and would seem to indicate that there is no significant difference in the time taken to learn these crossmodal cues in either modality. Further studies will look at the effectiveness of explicit versus implicit learning in crossmodal interaction to reduce the amount of training time needed.

4.5 Testing in Alternative Modality
Once the participants in Groups 1 and 2 in Table 1 had achieved the correct level of training, they completed the absolute identification test using the same online system and tasks but with cues presented in the other modality. Participants in the control groups (Groups 3 and 4) continued through the absolute identification test using the same tasks in the same modality after training. In total there were 36 tasks in the experiment, with all 18 crossmodal icons (either audio or tactile) presented twice. The order in which the crossmodal icons were presented was random for each participant. In each task the participant was presented with a crossmodal icon which he/she could replay up to 4 times. The participants had to identify the corresponding alert and then select the corresponding button in the dialogue box (Figure 5).

4.6 Results
The results from the control groups in comparison to the crossmodal testing groups are shown in Figure 8.

Figure 8. Average percentage correct responses during testing.

The results for overall Earcon recognition when trained with Tactons showed an average recognition rate of 85.1%. The alert "personal urgent text" achieved the highest recognition rate of 94%, while the alert "work not urgent voicemail" resulted in the lowest recognition rate of 61%. The results for overall Tacton recognition when trained with Earcons showed an average recognition rate of 76.5%. The alert "personal not urgent text" achieved the highest recognition rate of 83% and, once again, the alert "work not urgent voicemail" resulted in the lowest recognition rate of 56%. Thus hypothesis 2 can be accepted.

Having examined the data in depth, there does not seem to be any clear reason for the low scores produced by the "work not urgent voicemail" cue. All of the individual parameters performed well in general (Figure 12) and there was no apparent misunderstanding by the participants. Further analysis will be done in the future to investigate this and to ensure that it is an anomaly and not an issue with the design of the cues.

An ANOVA showed that there was no significant difference in the recognition rates between the results of the four different groups (trained in audio / tested in tactile, trained in tactile / tested in audio, trained and tested in tactile, trained and tested in audio) (F(3,60) = 2.1, p = 0.1). With the standard deviations in each condition varying only slightly, from 8.9 to 9.9, and the mean scores very close, the analysis suggests that information learnt in one modality can be recovered in the alternative modality in a way which is comparable with recognition of the same information in the trained modality. Thus hypothesis 1 can be accepted.

The results suggest that if a user is taught to understand alerts provided by crossmodal Tactons, they could be expected to understand crossmodal Earcons with no audio training with approximately 85% accuracy, and if a user is taught to understand alerts provided by crossmodal Earcons, they could be expected to understand crossmodal Tactons with no tactile training with approximately 76.5% accuracy. These results are comparable to previous research on 3-dimensional Earcons, where McGookin's results [14] showed recognition rates of around 70% for identification of complete 3-dimensional messages in audio. They are also comparable with previous Tactons research, which produced recognition rates of 81% for identification of complete 3-dimensional messages in tactile icons [5].
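For readers who wish to reproduce this style of analysis, a one-way ANOVA over the four groups takes only a few lines. The sketch below is purely illustrative: the scores are synthetic stand-ins, not the experimental data, and only the structure of the test mirrors the analysis above.

    from scipy.stats import f_oneway

    # Illustrative only: synthetic per-participant recognition scores (%)
    # for the four groups; the real data are not reproduced here.
    audio_to_tactile   = [75.0, 78.0, 72.0, 81.0]
    tactile_to_audio   = [86.0, 83.0, 88.0, 84.0]
    audio_to_audio     = [84.0, 79.0, 90.0, 82.0]
    tactile_to_tactile = [80.0, 85.0, 77.0, 83.0]

    # One-way ANOVA across the four training/testing groups; a large
    # p-value indicates no significant difference between groups.
    f_stat, p_value = f_oneway(audio_to_tactile, tactile_to_audio,
                               audio_to_audio, tactile_to_tactile)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")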
5. EXPERIMENT 2: MOBILE STUDY OF CROSSMODAL DISPLAYS
As discussed at the start of the paper, crossmodal icons are being developed for users of mobile devices. Such users are often in motion when they use their devices, so any alerts provided by the mobile device must be designed to be discernible in these situations too, not just when the user is stationary. There are many ways in which motion could affect perception of crossmodal output: mobile environments tend to change frequently, with light, volume and vibration levels changing often. Consequently, another experiment in crossmodal identification was conducted which investigated the effects of motion on the results and assessed whether the good results observed in the laboratory would carry over to a more real-world situation.

The experiment involved 16 new participants who were either trained in audio or in tactile and then tested in audio or tactile whilst walking. Both the methodology and the crossmodal icons used in the experiment were the same as before to allow comparison of results. The setup of this experiment was identical to the stationary one above in every respect except that participants were asked to walk on a treadmill during the experiment as opposed to sitting in a chair (Figure 9).

Figure 9. Mobile condition experimental set up.

This mobile experiment used a treadmill set up in a usability lab to simulate mobility because the actuators used to present the tactile cues were controlled from a PC, so we could not test in a real mobile environment. Studies show that using treadmills to simulate motion is good for mimicking workload [11] when performance measures are of key interest, and provides a more controllable environment [1]. Furthermore, using a treadmill permitted us to set a standard walking speed for all participants (in this case, all participants walked at a constant speed of 5 km/hr during the experiment). The hypothesis in this experiment was:

4. Being mobile will increase errors produced during crossmodal icon identification and matching between modalities as compared to being stationary.

5.1 Results
The average number of errors for audio and tactile identification is shown in Figure 10. As before, the average recognition rate for both the audio and tactile groups was calculated, but this time for the mobile condition as well.

Figure 10. Average correct responses in stationary and mobile conditions.

The results for overall Earcon recognition when mobile and trained with Tactons showed an average recognition rate of 78%. The results for overall Tacton recognition when mobile and trained with Earcons showed an average recognition rate of 79%. To establish whether there is a significant difference between the mobile and stationary results, a 2-factor ANOVA was applied using training condition (audio or tactile) and stationary/mobile as the two factors. The ANOVA showed that there was no significant difference in the recognition rates between the mobile and stationary conditions (F(1,15) = 3.4, p > 0.01), or between training in audio and training in tactile (F(1,30) = 0.7, p > 0.01). These results show that training with crossmodal Tactons achieves slightly better immediate recognition of crossmodal Earcons when mobile than vice versa, but the difference is not statistically significant. Therefore, if a user is taught to understand alerts provided by crossmodal Tactons, they could be expected to understand crossmodal Earcons with no training when mobile with about 78% accuracy, and if a user is taught to understand alerts provided by crossmodal Earcons, they could be expected to understand crossmodal Tactons with no training when mobile with approximately 79% accuracy.

5.1.1 Individual Parameter Results and Discussion
To establish the performance of each of the crossmodal parameters used, further analysis was performed on the data produced by both the audio and tactile versions of each parameter. The average percentages of correct responses for each audio parameter and each tactile parameter are shown in Figures 11 and 12.

Figure 11. Average percentage of correct responses in each audio condition.

Figure 12. Average percentage of correct responses in each tactile condition.

An ANOVA showed no significant differences between audio rhythm and tactile rhythm (stationary or mobile), or between audio spatial location and tactile spatial location (stationary or mobile); however, there was a significant difference for audio roughness (F(5,18) = 4.01, p = 0.09) and tactile roughness (F(5,18) = 6.76, p = 0.04), with both producing significantly poorer results than the other parameters in stationary and mobile environments. These results suggest two different issues: firstly, overall the crossmodal roughness parameter is not as effective as rhythm and spatial location, indicating that a different parameter may need to be used; secondly, when trained to identify roughness in one modality, participants struggle to then identify it in the other modality. Although the results of the stationary and mobile experiments show no significant difference in performance with crossmodal icons using rhythm and spatial location, audio and tactile roughness recognition rates are significantly lower when mobile.

Overall, the mobile results are comparable to the results of the stationary conditions. They indicate that, if a user is trained in one modality, the accuracy achieved when they are asked to identify the same information in the other modality is comparable even when they are placed in a mobile situation. These results indicate that crossmodal icons could be effective in mobile displays. Although the mobile environment used in this experiment was much more controlled than a real-world environment, these results give an indication of the sorts of effects that may be seen when a user is in motion. Future experiments will be conducted in real-world situations, such as walking and traveling on a train or bus.

6. CONCLUSIONS
This paper presented an experiment which investigated the crossmodal transfer of information between the auditory and tactile modalities.
Previous research had investigated identification of information in Tactons [5] and Earcons [4], showing that both could effectively encode information in three dimensions. Previous research had also established that information encoded in single parameters in the auditory and tactile modalities can be perceived as equivalent [9, 10] if the appropriate crossmodal parameters are used: rhythm; spatial location (locations in a 3D audio soundscape matched with body locations around the waist); and texture (roughness levels created with amplitude modulation and differing audio timbres, e.g. smooth piano, rough vibraphone). This research investigated whether, if trained to understand multidimensional audio alerts, a user can then also understand the corresponding tactile alerts with no additional training, and vice versa. Our results suggest that this is possible. The experiments described here are the first studies investigating training and the transfer of training to other modalities in multimodal interaction. The overall findings from the experiments can be summarized as follows:

- Users in a stationary environment can accurately recognize 85% of messages presented by Earcons, if they have been trained to recognize the same alerts presented by Tactons.
- Users in a mobile environment can accurately recognize 78% of messages presented by Earcons, if they have been trained to recognize the same alerts presented by Tactons.
- Users in a stationary environment can accurately recognize 76.5% of messages presented by Tactons, if they have been trained to recognize the same alerts presented by Earcons.
- Users in a mobile environment can accurately recognize 79% of messages presented by Tactons, if they have been trained to recognize the same alerts presented by Earcons.

The results of this research indicate that it may not be necessary to train users to understand icons in all the modalities a system might use. If crossmodal icons are used to present information, training is only required in one modality, as the results show that users will then be able to understand the same messages in the other modality. Using crossmodal icons to communicate information to mobile device users could therefore reduce the learning time for the user and also increase the number of modalities through which this information may be transmitted.

The crossmodal icons described in this paper were designed for a mobile phone notification application. Based on the positive results gained so far, there are many other potential applications that could benefit from the inclusion of crossmodal icons, such as context-aware navigation applications for the visually impaired, and outdoor mobile games where it is dangerous for players to concentrate visually on the device instead of their environment. Also, as mentioned earlier, mobile devices often have cluttered displays due to the lack of screen space. Crossmodal features could be added to buttons, scrollbars, menus, etc. on touchscreen mobile devices so that information about those widgets can be presented non-visually. This would allow the widget size to be reduced (or the widget even removed from the screen) and allow more information to be presented on the display. Different tactile spatial locations do not necessarily have to be on the body but could be on the device itself. For instance, localized tactile feedback on touchscreen mobile devices can provide spatial information. Furthermore, spatial 3D audio without headphones is now becoming available as mobile device manufacturers begin to incorporate stereo audio output.

Mobile technology incorporating audio and tactile output has now become widely available, and our research has shown that feedback can be created which exploits users' abilities to transfer knowledge from one modality to another. By taking this into account and designing mobile applications with adaptive crossmodal icons, users will have the ability to interact with their devices even when their situation and surroundings are changing.

7. ACKNOWLEDGEMENTS
This work was supported by EPSRC Advanced Research Fellowship GR/S. Hoggan is joint funded by Nokia and EPSRC.

REFERENCES
[1] Barnard, L., Yi, J. S., Jacko, J. A. and Sears, A., An Empirical Comparison of Use-in-Motion Evaluation Scenarios for Mobile Computing Devices, International Journal of Human-Computer Studies 62 (2005).
[2] Blattner, M. M., Sumikawa, D. A. and Greenberg, R. M., Earcons and Icons: Their Structure and Common Design Principles, Human Computer Interaction 4(1) (1989).
[3] Brewster, S. A. and Brown, L. M., Tactons: Structured Tactile Messages for Non-Visual Information Display, in Proc AUI Conference 2004, ACS (2004).
[4] Brewster, S. A., Wright, P. C. and Edwards, A. D. N., An Evaluation of Earcons for Use in Auditory Human-Computer Interfaces, in Proc InterCHI'93, Amsterdam, ACM Press (1993).
[5] Brown, L. M. and Brewster, S. A., Multidimensional Tactons for Non-Visual Information Display in Mobile Devices, in Proc MobileHCI 2006, ACM Press (2006).
[6] Brown, L. M., Brewster, S. A. and Purchase, H. C., A First Investigation into the Effectiveness of Tactons, in Proc WorldHaptics 2005, IEEE (2005).
[7] Cholewiak, R. W. and Craig, J. C., Vibrotactile Pattern Recognition and Discrimination at Several Body Sites, Perception and Psychophysics 35 (1984).
[8] van Erp, J. B. F., Tactile Navigation Display, in First International Workshop on Haptic Human-Computer Interaction, Lecture Notes in Computer Science 2058 (2001).
[9] Hoggan, E. and Brewster, S. A., Crossmodal Icons for Information Display, in Proc ACM CHI '06 Extended Abstracts, ACM Press (2006).
[10] Hoggan, E. and Brewster, S. A., Crossmodal Spatial Location: Initial Experiments, in Proc NordiCHI '06, Norway, ACM Press (2006).
[11] Kjeldskov, J. and Stage, J., New Techniques for Usability Evaluation of Mobile Systems, International Journal of Human-Computer Studies 60 (2004).
[12] Lenay, C., Canu, S. and Villon, P., Technology and Perception: The Contribution of Sensory Substitution Systems, in Proc ICCT, IEEE (1997).
[13] Lewkowicz, D. J., The Development of Intersensory Temporal Perception: An Epigenetic Systems/Limitations View, Psychological Bulletin 126 (2000).
[14] McGookin, D. and Brewster, S. A., Understanding Concurrent Earcons: Applying Auditory Scene Analysis Principles to Concurrent Earcon Recognition, ACM Transactions on Applied Perception 1(2) (2004).
[15] Mortimer, B., Zets, G. and Cholewiak, R. W., Vibrotactile Transduction, submitted to the Journal of the Acoustical Society of America (2006).
[16] Sawhney, N. and Schmandt, C., Nomadic Radio: Speech and Audio Interaction for Contextual Messaging in Nomadic Environments, ACM Transactions on Computer-Human Interaction (2000).
[17] Tan, H. Z. and Pentland, A., Tactual Displays for Wearable Computing, in Proc 1st IEEE International Symposium on Wearable Computers, IEEE (1997).


More information

Enhanced Collision Perception Using Tactile Feedback

Enhanced Collision Perception Using Tactile Feedback Department of Computer & Information Science Technical Reports (CIS) University of Pennsylvania Year 2003 Enhanced Collision Perception Using Tactile Feedback Aaron Bloomfield Norman I. Badler University

More information

A Design Study for the Haptic Vest as a Navigation System

A Design Study for the Haptic Vest as a Navigation System Received January 7, 2013; Accepted March 19, 2013 A Design Study for the Haptic Vest as a Navigation System LI Yan 1, OBATA Yuki 2, KUMAGAI Miyuki 3, ISHIKAWA Marina 4, OWAKI Moeki 5, FUKAMI Natsuki 6,

More information

Exploration of Tactile Feedback in BI&A Dashboards

Exploration of Tactile Feedback in BI&A Dashboards Exploration of Tactile Feedback in BI&A Dashboards Erik Pescara Xueying Yuan Karlsruhe Institute of Technology Karlsruhe Institute of Technology erik.pescara@kit.edu uxdxd@student.kit.edu Maximilian Iberl

More information

Dimensional Design; Explorations of the Auditory and Haptic Correlate for the Mobile Device

Dimensional Design; Explorations of the Auditory and Haptic Correlate for the Mobile Device Dimensional Design; Explorations of the Auditory and Haptic Correlate for the Mobile Device Conor O Sullivan Motorola, Inc. 600 North U.S. Highway 45, DS-175, Libertyville, IL 60048, USA conor.o sullivan@motorola.com

More information

Virtual Chromatic Percussions Simulated by Pseudo-Haptic and Vibrotactile Feedback

Virtual Chromatic Percussions Simulated by Pseudo-Haptic and Vibrotactile Feedback Virtual Chromatic Percussions Simulated by Pseudo-Haptic and Vibrotactile Feedback Taku Hachisu The University of Electro- Communications 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan +81 42 443 5363

More information

"From Dots To Shapes": an auditory haptic game platform for teaching geometry to blind pupils. Patrick Roth, Lori Petrucci, Thierry Pun

From Dots To Shapes: an auditory haptic game platform for teaching geometry to blind pupils. Patrick Roth, Lori Petrucci, Thierry Pun "From Dots To Shapes": an auditory haptic game platform for teaching geometry to blind pupils Patrick Roth, Lori Petrucci, Thierry Pun Computer Science Department CUI, University of Geneva CH - 1211 Geneva

More information

Feelable User Interfaces: An Exploration of Non-Visual Tangible User Interfaces

Feelable User Interfaces: An Exploration of Non-Visual Tangible User Interfaces Feelable User Interfaces: An Exploration of Non-Visual Tangible User Interfaces Katrin Wolf Telekom Innovation Laboratories TU Berlin, Germany katrin.wolf@acm.org Peter Bennett Interaction and Graphics

More information

Non-Visual Menu Navigation: the Effect of an Audio-Tactile Display

Non-Visual Menu Navigation: the Effect of an Audio-Tactile Display http://dx.doi.org/10.14236/ewic/hci2014.25 Non-Visual Menu Navigation: the Effect of an Audio-Tactile Display Oussama Metatla, Fiore Martin, Tony Stockman, Nick Bryan-Kinns School of Electronic Engineering

More information

Sound is the human ear s perceived effect of pressure changes in the ambient air. Sound can be modeled as a function of time.

Sound is the human ear s perceived effect of pressure changes in the ambient air. Sound can be modeled as a function of time. 2. Physical sound 2.1 What is sound? Sound is the human ear s perceived effect of pressure changes in the ambient air. Sound can be modeled as a function of time. Figure 2.1: A 0.56-second audio clip of

More information

EMA-Tactons: Vibrotactile External Memory Aids in an Auditory Display

EMA-Tactons: Vibrotactile External Memory Aids in an Auditory Display EMA-Tactons: Vibrotactile External Memory Aids in an Auditory Display Johan Kildal 1, Stephen A. Brewster 1 1 Glasgow Interactive Systems Group, Department of Computing Science University of Glasgow. Glasgow,

More information

Integrated Driving Aware System in the Real-World: Sensing, Computing and Feedback

Integrated Driving Aware System in the Real-World: Sensing, Computing and Feedback Integrated Driving Aware System in the Real-World: Sensing, Computing and Feedback Jung Wook Park HCI Institute Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA, USA, 15213 jungwoop@andrew.cmu.edu

More information

Reflections on a WYFIWIF Tool for Eliciting User Feedback

Reflections on a WYFIWIF Tool for Eliciting User Feedback Reflections on a WYFIWIF Tool for Eliciting User Feedback Oliver Schneider Dept. of Computer Science University of British Columbia Vancouver, Canada oschneid@cs.ubc.ca Karon MacLean Dept. of Computer

More information

An Example Cognitive Architecture: EPIC

An Example Cognitive Architecture: EPIC An Example Cognitive Architecture: EPIC David E. Kieras Collaborator on EPIC: David E. Meyer University of Michigan EPIC Development Sponsored by the Cognitive Science Program Office of Naval Research

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Engineering Acoustics Session 2pEAb: Controlling Sound Quality 2pEAb10.

More information

Expression of 2DOF Fingertip Traction with 1DOF Lateral Skin Stretch

Expression of 2DOF Fingertip Traction with 1DOF Lateral Skin Stretch Expression of 2DOF Fingertip Traction with 1DOF Lateral Skin Stretch Vibol Yem 1, Mai Shibahara 2, Katsunari Sato 2, Hiroyuki Kajimoto 1 1 The University of Electro-Communications, Tokyo, Japan 2 Nara

More information

Spatial auditory interface for an embedded communication device in a car

Spatial auditory interface for an embedded communication device in a car First International Conference on Advances in Computer-Human Interaction Spatial auditory interface for an embedded communication device in a car Jaka Sodnik, Saso Tomazic University of Ljubljana, Slovenia

More information

Human Factors. We take a closer look at the human factors that affect how people interact with computers and software:

Human Factors. We take a closer look at the human factors that affect how people interact with computers and software: Human Factors We take a closer look at the human factors that affect how people interact with computers and software: Physiology physical make-up, capabilities Cognition thinking, reasoning, problem-solving,

More information

Comparison of Three Eye Tracking Devices in Psychology of Programming Research

Comparison of Three Eye Tracking Devices in Psychology of Programming Research In E. Dunican & T.R.G. Green (Eds). Proc. PPIG 16 Pages 151-158 Comparison of Three Eye Tracking Devices in Psychology of Programming Research Seppo Nevalainen and Jorma Sajaniemi University of Joensuu,

More information

Sweep-Shake: Finding Digital Resources in Physical Environments

Sweep-Shake: Finding Digital Resources in Physical Environments Sweep-Shake: Finding Digital Resources in Physical Environments Simon Robinson, Parisa Eslambolchilar, Matt Jones Future Interaction Technology Lab Computer Science Department Swansea University Swansea,

More information

Graphical User Interfaces for Blind Users: An Overview of Haptic Devices

Graphical User Interfaces for Blind Users: An Overview of Haptic Devices Graphical User Interfaces for Blind Users: An Overview of Haptic Devices Hasti Seifi, CPSC554m: Assignment 1 Abstract Graphical user interfaces greatly enhanced usability of computer systems over older

More information

The psychoacoustics of reverberation

The psychoacoustics of reverberation The psychoacoustics of reverberation Steven van de Par Steven.van.de.Par@uni-oldenburg.de July 19, 2016 Thanks to Julian Grosse and Andreas Häußler 2016 AES International Conference on Sound Field Control

More information

Effect of the number of loudspeakers on sense of presence in 3D audio system based on multiple vertical panning

Effect of the number of loudspeakers on sense of presence in 3D audio system based on multiple vertical panning Effect of the number of loudspeakers on sense of presence in 3D audio system based on multiple vertical panning Toshiyuki Kimura and Hiroshi Ando Universal Communication Research Institute, National Institute

More information

COM325 Computer Speech and Hearing

COM325 Computer Speech and Hearing COM325 Computer Speech and Hearing Part III : Theories and Models of Pitch Perception Dr. Guy Brown Room 145 Regent Court Department of Computer Science University of Sheffield Email: g.brown@dcs.shef.ac.uk

More information

Haptic Cues: Texture as a Guide for Non-Visual Tangible Interaction.

Haptic Cues: Texture as a Guide for Non-Visual Tangible Interaction. Haptic Cues: Texture as a Guide for Non-Visual Tangible Interaction. Figure 1. Setup for exploring texture perception using a (1) black box (2) consisting of changeable top with laser-cut haptic cues,

More information

Haptic control in a virtual environment

Haptic control in a virtual environment Haptic control in a virtual environment Gerard de Ruig (0555781) Lourens Visscher (0554498) Lydia van Well (0566644) September 10, 2010 Introduction With modern technological advancements it is entirely

More information

Perception of room size and the ability of self localization in a virtual environment. Loudspeaker experiment

Perception of room size and the ability of self localization in a virtual environment. Loudspeaker experiment Perception of room size and the ability of self localization in a virtual environment. Loudspeaker experiment Marko Horvat University of Zagreb Faculty of Electrical Engineering and Computing, Zagreb,

More information

ANALYSIS AND EVALUATION OF IRREGULARITY IN PITCH VIBRATO FOR STRING-INSTRUMENT TONES

ANALYSIS AND EVALUATION OF IRREGULARITY IN PITCH VIBRATO FOR STRING-INSTRUMENT TONES Abstract ANALYSIS AND EVALUATION OF IRREGULARITY IN PITCH VIBRATO FOR STRING-INSTRUMENT TONES William L. Martens Faculty of Architecture, Design and Planning University of Sydney, Sydney NSW 2006, Australia

More information

Tactile Actuators Using SMA Micro-wires and the Generation of Texture Sensation from Images

Tactile Actuators Using SMA Micro-wires and the Generation of Texture Sensation from Images IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) November -,. Tokyo, Japan Tactile Actuators Using SMA Micro-wires and the Generation of Texture Sensation from Images Yuto Takeda

More information

Drumtastic: Haptic Guidance for Polyrhythmic Drumming Practice

Drumtastic: Haptic Guidance for Polyrhythmic Drumming Practice Drumtastic: Haptic Guidance for Polyrhythmic Drumming Practice ABSTRACT W e present Drumtastic, an application where the user interacts with two Novint Falcon haptic devices to play virtual drums. The

More information

ModaDJ. Development and evaluation of a multimodal user interface. Institute of Computer Science University of Bern

ModaDJ. Development and evaluation of a multimodal user interface. Institute of Computer Science University of Bern ModaDJ Development and evaluation of a multimodal user interface Course Master of Computer Science Professor: Denis Lalanne Renato Corti1 Alina Petrescu2 1 Institute of Computer Science University of Bern

More information

MUS 302 ENGINEERING SECTION

MUS 302 ENGINEERING SECTION MUS 302 ENGINEERING SECTION Wiley Ross: Recording Studio Coordinator Email =>ross@email.arizona.edu Twitter=> https://twitter.com/ssor Web page => http://www.arts.arizona.edu/studio Youtube Channel=>http://www.youtube.com/user/wileyross

More information

Figure 2. Haptic human perception and display. 2.2 Pseudo-Haptic Feedback 2. RELATED WORKS 2.1 Haptic Simulation of Tapping an Object

Figure 2. Haptic human perception and display. 2.2 Pseudo-Haptic Feedback 2. RELATED WORKS 2.1 Haptic Simulation of Tapping an Object Virtual Chromatic Percussions Simulated by Pseudo-Haptic and Vibrotactile Feedback Taku Hachisu 1 Gabriel Cirio 2 Maud Marchal 2 Anatole Lécuyer 2 Hiroyuki Kajimoto 1,3 1 The University of Electro- Communications

More information

HAPTIC USER INTERFACES Final lecture

HAPTIC USER INTERFACES Final lecture HAPTIC USER INTERFACES Final lecture Roope Raisamo School of Information Sciences University of Tampere, Finland Content A little more about crossmodal interaction The next steps in the course 1 2 CROSSMODAL

More information

The effect of 3D audio and other audio techniques on virtual reality experience

The effect of 3D audio and other audio techniques on virtual reality experience The effect of 3D audio and other audio techniques on virtual reality experience Willem-Paul BRINKMAN a,1, Allart R.D. HOEKSTRA a, René van EGMOND a a Delft University of Technology, The Netherlands Abstract.

More information

Conversational Gestures For Direct Manipulation On The Audio Desktop

Conversational Gestures For Direct Manipulation On The Audio Desktop Conversational Gestures For Direct Manipulation On The Audio Desktop Abstract T. V. Raman Advanced Technology Group Adobe Systems E-mail: raman@adobe.com WWW: http://cs.cornell.edu/home/raman 1 Introduction

More information

Designing Tactile Vocabularies for Human-Computer Interaction

Designing Tactile Vocabularies for Human-Computer Interaction VICTOR ADRIEL DE JESUS OLIVEIRA Designing Tactile Vocabularies for Human-Computer Interaction Thesis presented in partial fulfillment of the requirements for the degree of Master of Computer Science Advisor:

More information

Tactile Feedback in Mobile: Consumer Attitudes About High-Definition Haptic Effects in Touch Screen Phones. August 2017

Tactile Feedback in Mobile: Consumer Attitudes About High-Definition Haptic Effects in Touch Screen Phones. August 2017 Consumer Attitudes About High-Definition Haptic Effects in Touch Screen Phones August 2017 Table of Contents 1. EXECUTIVE SUMMARY... 1 2. STUDY OVERVIEW... 2 3. METHODOLOGY... 3 3.1 THE SAMPLE SELECTION

More information

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES

MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL

More information

HapticArmrest: Remote Tactile Feedback on Touch Surfaces Using Combined Actuators

HapticArmrest: Remote Tactile Feedback on Touch Surfaces Using Combined Actuators HapticArmrest: Remote Tactile Feedback on Touch Surfaces Using Combined Actuators Hendrik Richter, Sebastian Löhmann, Alexander Wiethoff University of Munich, Germany {hendrik.richter, sebastian.loehmann,

More information

Preeti Rao 2 nd CompMusicWorkshop, Istanbul 2012

Preeti Rao 2 nd CompMusicWorkshop, Istanbul 2012 Preeti Rao 2 nd CompMusicWorkshop, Istanbul 2012 o Music signal characteristics o Perceptual attributes and acoustic properties o Signal representations for pitch detection o STFT o Sinusoidal model o

More information

Evaluating 3D Embodied Conversational Agents In Contrasting VRML Retail Applications

Evaluating 3D Embodied Conversational Agents In Contrasting VRML Retail Applications Evaluating 3D Embodied Conversational Agents In Contrasting VRML Retail Applications Helen McBreen, James Anderson, Mervyn Jack Centre for Communication Interface Research, University of Edinburgh, 80,

More information

Designing an Obstacle Game to Motivate Physical Activity among Teens. Shannon Parker Summer 2010 NSF Grant Award No. CNS

Designing an Obstacle Game to Motivate Physical Activity among Teens. Shannon Parker Summer 2010 NSF Grant Award No. CNS Designing an Obstacle Game to Motivate Physical Activity among Teens Shannon Parker Summer 2010 NSF Grant Award No. CNS-0852099 Abstract In this research we present an obstacle course game for the iphone

More information

Spatialization and Timbre for Effective Auditory Graphing

Spatialization and Timbre for Effective Auditory Graphing 18 Proceedings o1't11e 8th WSEAS Int. Conf. on Acoustics & Music: Theory & Applications, Vancouver, Canada. June 19-21, 2007 Spatialization and Timbre for Effective Auditory Graphing HONG JUN SONG and

More information

Ellen C. Haas, Ph.D.

Ellen C. Haas, Ph.D. INTEGRATING AUDITORY WARNINGS WITH TACTILE CUES IN MULTIMODAL DISPLAYS FOR CHALLENGING ENVIRONMENTS Ellen C. Haas, Ph.D. U.S. Army Research Laboratory Multimodal Controls and Displays Laboratory Aberdeen

More information