Effectiveness of Multi-modal Techniques in Human-Computer Interaction: Experimental Results with the Computer Chess Player Turk-2


Levente Sajó, Péter Váradi, Attila Fazekas
University of Debrecen, Faculty of Informatics, Image Processing Group of Debrecen
H-4010 Debrecen, P.O.Box 12, Hungary
inf.unideb.hu, inf.unideb.hu

Abstract. The development of techniques based on multi-modal human-computer communication contributes greatly to creating more user-friendly information systems. We have built an experimental system, the Multi-modal Chess Player Turk-2, which is able to play chess while also fulfilling the requirements of multi-modality. The first report on this system was published in [1]. Here we describe the human experiments conducted with this system and present the results of their evaluation. These results show that multi-modal interaction makes information systems more human-like and easier to use, and thus more acceptable to everyday people.

Keywords. Computer Chess Player, Turk-2, Multi-modal Human-Computer Interaction, human experiments.

1. Introduction

Progress in the computer industry has, step by step, brought computers into our everyday life. Computers have become faster and cheaper, which has led to the spread of computer-based information systems in many areas. Information systems include not only the general-purpose personal computers used in people's homes but also machines built for a special task, e.g. ATMs at banks or interactive schedules and ticket machines at train stations. Although these systems have become increasingly efficient and easier to use, a certain disfavor can be observed on the part of the users. One reason is the lack of knowledge needed to control the new technologies. Another is that every system has to be used differently, so a lot of new information must be learned day by day in order to use them. Furthermore, the time between releases of new system generations is getting shorter.
Using an information system is similar to using a language. If we know several languages, it is easier to learn a new one; likewise, if we are familiar with several information systems, using a new one causes fewer problems. The difficulty in communicating with these systems is that it is the user who has to learn the language of the system: how to give commands to it and how to understand its answers. The research field of multi-modal human-computer interaction addresses the problem of developing information systems that can communicate with users in a more human-like way. Since we only consider communication between humans and computers, the terms "Multi-modal Human-Computer Interaction" (MMHCI) and "Multi-modal Interaction" (MMI) will be used as synonyms.

2. Multi-modal Human-Computer Interaction

It is difficult to define what exactly makes communication human-like. Usually it means using input and output channels that are more natural for users. The appearance of graphical user interfaces at the beginning of the '80s was the first step towards user-friendliness. GUIs replaced the earlier text-based interfaces by providing a common look and feel: icons, drawings and images represented data visually and allowed direct manipulation of objects with the mouse and keyboard. Graphical operating systems such as Apple MacOS, Microsoft Windows and the Linux X Window System contributed greatly to the spread of personal computers. Yet even though GUIs became very popular among users, they still use only a subset of the human sensory systems. Multi-modal user interfaces are the candidates to continue the path towards better user interfaces started by the GUIs. A multi-modal application differs from a GUI application in that, beyond the traditional input devices, it uses alternative communication channels. Multi-modal human-computer interaction considers the relation between computers and users from the human side, focusing on perception and control. The term modality refers to human senses or input/output channels. Raisamo divided the human senses into seven groups from a neurobiological point of view: internal chemical, external chemical (taste, smell), somatic senses (touch, pressure, temperature, pain), muscle senses (stretch, tension, joint position), sense of balance, hearing and vision. The goal of multi-modal human-computer interaction is the synergetic use of as many of the abilities humans have as possible [6]. Our vision is of future information systems that can use multiple input and output modalities in parallel to interact with the user in a more natural and efficient way. Users can communicate using human language and gestures, and the systems can understand all parameters of the conversation. The systems should also be able to reply in a user-friendly way, by speaking and making different gestures. Of course, these requirements are quite general and we are still far from a complete information system based on multi-modal human-computer interaction techniques, but there are already results pointing in the directions described above. Analyzing human body gestures has been a popular topic among researchers; for example, gestures have been extracted in the field of martial arts games. In [4] computer-aided Tai Chi is presented: inexpensive wearable sensors were installed on the human body to capture Tai Chi movements, which are later analyzed by the computer, increasing the effectiveness of the human-made analysis. Audio channels have also been used as input modalities. Speech recognition tools like the Microsoft Speech API can be used to recognize a limited set of spoken commands.
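Such tools work best when constrained to a small vocabulary. The grammar-matching idea can be sketched as follows; the command phrases and names are purely illustrative and are not taken from the Turk-2 system:

```python
# Toy illustration of a limited spoken-command vocabulary: each recognizer
# hypothesis is matched against a small, fixed command set, and anything
# outside the grammar is rejected. All names here are hypothetical.
from typing import Optional

COMMANDS = {
    "new game": "NEW_GAME",
    "resign": "RESIGN",
    "knight to f three": "MOVE Nf3",
    "pawn to e four": "MOVE e4",
}

def interpret(hypothesis: str) -> Optional[str]:
    """Map a recognized utterance to a command, or None if out of grammar."""
    return COMMANDS.get(hypothesis.lower().strip())

print(interpret("Knight to F three"))  # -> MOVE Nf3
print(interpret("open the window"))    # -> None (outside the command set)
```

Restricting the recognizer to such a closed set is what makes limited-command speech input robust enough for interactive use.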
In [3] Igarashi and Hughes propose using non-verbal features of speech, such as pitch, volume and continuation, to directly control interactive applications, as an alternative to recognizing spoken commands. They give examples such as a television controller where, when the user says "Channel up, ta ta ta", the channel number increases by three. Some works address combinations of different input channels. In [2] a perceptual user interface is presented for controlling a flying cartoon-animated dragon in QuiQui's Giant Bounce, an interactive computer game for children aged 4 to 9. The dragon flies following the user's movements and breathes fire when the user shouts. Using multi-modal inputs and outputs in the same application has produced interesting solutions. In [7] an interactive poker game is presented in which one human user plays against two animated agents. The application combines several modalities, such as facial gesture and body animation, emotion modeling and speech synthesis, to drive the behavior of the virtual characters and thereby enhance the naturalness of interaction in the game. Building humanoid robots is a highly preferred topic in many studies. These "emotional" robots can move their bodies and hands and imitate human emotions on their faces. Such a robot, called iCat, is presented in [5]; it is the opponent of a human player in a chess match, and its emotive behavior is influenced by the game state. Our aim was to build a system that implements interfaces based on several modalities on both input and output channels, and to study whether the human-like interface provides added value. We chose chess as the kernel game application because it is a popular and widely known game, so learning the rules causes fewer problems for the users. We wanted to put our effort into the communication between the system and the users, not into the artificial intelligence part.
We implemented the system and performed user studies with two versions: with the multi-modal communication module (referred to in the following as TH) and without it (NoTH). By comparing reactions, we intended to gain insight into the role of multi-modal interfaces.

3. The Turk-2's architecture

In this chapter we briefly describe our multi-modal chess player system. The central part of the system is the controller, which consists of three modules: the turn manager, which is responsible for the game flow and orchestrates the communication between the user and the chess engine; the emotion monitor, which keeps track of the emotional state of the user and of Turk-2; and the chess engine, which acts as the "mind" of Turk-2, deciding on the chess moves. The different components can be grouped into two main modules. (See Fig. 1.)
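The controller's orchestrating role can be sketched roughly as follows. The interfaces below are hypothetical stand-ins for the modules named above, not the actual Turk-2 implementation:

```python
# A minimal sketch of the turn-manager loop, with stub components standing in
# for the real Turk-2 modules (which are described in [1]).

class StubEngine:                       # "mind" of the system
    def apply(self, move): pass
    def best_move(self): return "e7e5"

class StubHead:                         # multi-modal output channel
    def __init__(self): self.last_utterance = None
    def say(self, text, mood="neutral"):
        self.last_utterance = (text, mood)

class StubEmotionMonitor:               # tracks the user's emotional state
    def current_mood(self): return "happy"

class TurnManager:
    """Orchestrates one turn: user move in, engine move out, head announces."""
    def __init__(self, engine, head, emotions):
        self.engine, self.head, self.emotions = engine, head, emotions

    def on_user_move(self, move):
        self.engine.apply(move)              # update the game state
        reply = self.engine.best_move()      # ask the engine for its move
        mood = self.emotions.current_mood()  # color the output by emotion
        self.head.say(f"My move is {reply}.", mood=mood)
        return reply

tm = TurnManager(StubEngine(), StubHead(), StubEmotionMonitor())
print(tm.on_user_move("e2e4"))  # -> e7e5
```

The point of the design is that input components only feed the turn manager, which alone decides what the talking head outputs.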

Figure 1. The architecture of Turk-2.

The first module (marked with dots in Fig. 1) is the human-computer communication module, providing multi-modal input and output facilities. This module has two kinds of input: one from the speech recognizer and one from the player's face analyzer, which localizes faces and detects the facial expressions on the human player's face. The inputs from these components are processed by the turn manager, which generates the output and forwards it to the talking head. The second module (marked with dashes in Fig. 1) is the chess game module. It contains two sub-modules: the chess state detector and the robot arm with its controller. The chess state detector detects changes on the chessboard and, if a move was made, sends it to the chess engine, which is responsible for the chess game itself. The chess engine generates the next move, which is executed by the robot arm on the board. The reader can find more details about the different components of the system in [1]. In the following we focus on presenting the human tests and their evaluation.

4. Experimental study

The goal of this study was to assess the importance of multi-modal interfaces in human-computer interaction. We wanted to find out whether adding a multi-modal interface to a computer chess game results in a better game experience. The chess player Turk-2, sketched in the previous chapter, was used to conduct human tests, and the results of these tests were evaluated to confirm our observations.

The scenario

Sixteen people participated in the test. The participants were between 18 and 25 years old; eight of them were male and eight female. They were everyday computer users without much connection to computer science. They knew nothing about the Turk-2 system; they were only told that they would need to play chess against the computer.
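The chess state detector's job, recognizing a move as the difference between two successive board states, can be illustrated symbolically. The real detector works on camera images of the physical board; this sketch assumes the states have already been reduced to square-to-piece maps:

```python
# Simplified move detection by diffing two symbolic board states.
# Boards are {square: piece} dicts; this toy version handles ordinary moves
# and captures, while special moves (castling, promotion) would need more.

def detect_move(before, after):
    """Return (from_square, to_square), or None if the diff is ambiguous."""
    vacated = [sq for sq in before if sq not in after]
    changed = [sq for sq, piece in after.items() if before.get(sq) != piece]
    if len(vacated) == 1 and len(changed) == 1:
        return vacated[0], changed[0]
    return None

before = {"e2": "P", "e7": "p"}
after  = {"e4": "P", "e7": "p"}
print(detect_move(before, after))  # -> ('e2', 'e4')
```

A capture works the same way: the destination square changes piece instead of becoming newly occupied, so it still shows up in `changed`.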
The chess-playing skills of the participants varied from basic level (knowing only the rules) to that of an average chess player, because we wanted to test our system as a game and not as an exercise. Before starting a session, each participant received instructions from an assistant, who also described what the test was about and set up the system. During the game the assistant remained "invisible". The participants had to play two games per test: one with a simplified version of Turk-2 without the multi-modal communication part (NoTH), the other with the complete system (TH). The order of the games was decided randomly at the beginning. During the games a video was recorded of the participants, and the important (unexpected) events were collected into a report by the assistant. In 6 sessions a video of the talking head was recorded in parallel as well. The videos were processed later. At the end of the test the participants were asked to fill in a questionnaire. After the test, each video of the human faces or of the talking head's face was annotated with event and timestamp pairs. The questionnaire contains ten yes/no questions regarding the participants' impressions of the Turk-2 system, with each answer rated from 1 to 10. The questions covered three main topics: which game the participants enjoyed more, what their opinions were about the human-likeness provided by the multi-modal techniques, and whether they considered the talking head part of the game. The questionnaire also contains a part where participants could write a few sentences about their impressions.
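The annotated videos were later reduced to total event durations per game. That aggregation step can be sketched minimally, assuming annotation records of the form (event, start, end) in seconds; the record format here is illustrative, not the actual annotation file format:

```python
# Aggregate event annotations into total durations, expressed as a
# percentage of the whole game, as used for the statistics in Section 4.
from collections import defaultdict

def total_durations(annotations, game_length):
    """Sum per-event durations and return them as % of the game length."""
    totals = defaultdict(float)
    for event, start, end in annotations:
        totals[event] += end - start
    return {event: 100.0 * t / game_length for event, t in totals.items()}

annotations = [
    ("looking_at_head", 10.0, 25.0),
    ("smiling", 12.0, 14.0),
    ("looking_at_head", 40.0, 55.0),
]
print(total_durations(annotations, game_length=300.0))
```

Here 30 of 300 seconds were spent looking at the talking head, i.e. 10% of the game, which is the kind of summarized value compared between the TH and NoTH conditions.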

Figure 2. The distribution of looking directions as a percentage of the whole duration of the game: with TH when the player is on turn, with TH when the computer is on turn, with NoTH when the player is on turn, with NoTH when the computer is on turn.

Figure 3. Average laughing, smiling and speaking time as a percentage of the whole duration of the game. In games with TH the users laugh, smile and speak more than in games with NoTH; this happens mainly while they are looking at the talking head.

Evaluation of the tests

Both objective and subjective evaluations of the tests were used. For the objective evaluation, the data extracted from the annotation files and the questionnaires were submitted to statistical analysis, using SPSS v17.0. First, the normality of the statistical variables was checked (p varied between 0.4 and 0.9), and then Student's t-test was applied. In some cases bivariate correlations and the Friedman test for several related samples were applied. For the videos recorded of the players' faces, the total durations of the events were calculated, and these summarized values were used to define the hypotheses. The videos of the talking head were analyzed together with the other videos to determine the effects of the multi-modal components on the participants. The operator's notes and our own impressions of the videos also yielded many interesting observations. In the following, the research questions and their answers are discussed from five different aspects.

How did the players react to the talking head?

Because chess is played in turns, the human players changed their behavior depending on whose turn it was. When it was the human player's turn, most of the time they looked at the chessboard, thinking about their next move. During the computer's turn there was nothing to engross the players, and they usually looked away from the game scene.
If the robot arm started to move, they followed it with their eyes. The presence of the talking head was important for filling the emptiness while the computer was "thinking". Applying the t-test, it can be shown that the mean total durations of looking at the chessboard were statistically equal in both test cases (p=0.822). Furthermore, with TH, looking away and looking at the robot arm decreased (p=0.006 and p=0.002), and the participants looked at the talking head instead. Fig. 2 shows the distribution of mean looking directions in both cases (TH and NoTH). Applying bivariate correlation, it can be shown statistically that eye contact with the talking head happened when the players were in a passive state (p=0.01, corr=0.836), mainly after they had completed their move and the computer was on turn, mostly right after turn-taking. But many times they also glanced at the talking head while following the robot arm's moves with their eyes. It can be shown that the players smiled, laughed and talked more in games with TH (p=0.001), which is illustrated in Fig. 3. Furthermore, smiling, laughing and talking in games with NoTH occurred in the same amount as in games with TH when the players were not looking at the talking head (p=0.794). The presence of the talking head made the players produce these utterances more while they were looking at it.

Figure 4. Females smiled, laughed and talked more than males.

Is the talking head treated as a person?

The talking head's presence affected the players' reactions during the game: they treated Turk-2 as a person. Human players usually responded to the talking head's manifestations with smiles and laughter. Many times the participants replied to the talking head's questions and sentences. For example, when the computer finished its move and said "Check this out", the answer was "I'm not happy with this". Sometimes they praised the computer after a good move ("Clever!") or fretted ("You are glad to hit, aren't you?"). They also made various comments about the talking head's face: "You are so sly!", "Evil!", "She's actually grinning at me!?", or joked with it: "You are so ugly", "He has sleepy dust in his eyes". Some of the participants jested with the talking head by repeating its sentences: "Take your time", "Tough situation".
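The statistical machinery used throughout this section (normality check, paired Student's t-test, bivariate correlation, Friedman test) was run in SPSS v17.0, but it can be reproduced with standard tools. A hedged sketch using SciPy, on synthetic placeholder data rather than the experimental values:

```python
# Reproducing the kinds of tests used in the evaluation with scipy.stats.
# The numbers are synthetic per-subject totals, NOT the paper's data.
from scipy import stats

th   = [12.1, 9.8, 14.3, 11.0, 13.5, 10.2, 12.8, 11.9]  # e.g. totals with TH
noth = [ 8.4, 7.9, 10.1,  9.2, 10.8,  8.8,  9.5,  9.0]  # same subjects, NoTH

print(stats.shapiro(th))          # normality check before applying a t-test
print(stats.ttest_rel(th, noth))  # paired-samples Student's t-test
print(stats.pearsonr(th, noth))   # bivariate correlation
# Friedman test for several related samples (requires >= 3 conditions):
print(stats.friedmanchisquare(th, noth, [x + 1 for x in noth]))
```

With paired data like TH vs. NoTH for the same 16 participants, the paired t-test is the appropriate variant, which is why each subject appears in both lists at the same index.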
By applying the Friedman test to the questionnaire, it can be shown statistically that the answers connected to the human-likeness of Turk-2 are related to engagement, and that the participants considered the talking head a human-like partner in the game (p=0.06). This shows the importance of developing more human-like MMIs.

Effect of the talking head on game experience

Overall, it can be concluded that the participants enjoyed playing with TH more. In the questionnaire, 95% said that it was better to play with TH. They spoke highly of the talking head, saying that it made the game more interesting and entertaining. Their concentration during the game also confirms the positive effects of the talking head. The participants usually played better with TH: 80% of those games were longer, and only in one case was somebody checkmated in fewer turns when playing with TH. The engagement of the participants is also confirmed by analyzing the mean thinking time in the two test cases: using the t-test, it can be shown that the participants thought longer in games with TH (p=0.017). The importance of MMI could be observed particularly in those tests where participants played first with TH and then with NoTH. In these cases the absence of the talking head was more conspicuous, and the players voiced it: "The first game with the talking head was much better" or "I'm missing the talking head". Generating the next move and preparing to execute it with the robot arm usually took the computer a few seconds. During this time the participants did not understand what was happening, and the speed of the robot arm was also found slow in these cases; players could hardly wait until it finished. In the games with TH, it was easy to notify the players that it was their turn by changing the talking head's gaze from the chessboard towards the player, or in some cases by announcing it in words. They liked it when the talking head announced its next move, or a capture, check or checkmate. In those cases when the talking head skipped announcing its next move or its capture, the participants noticed it: "Oh, he forgot to tell his next step!"

Gender differences

Analyzing the test results separately for males and females, a few differences can be observed. Overall, the males were more interested in playing chess and defeating the computer. The females enjoyed playing with TH more, gave higher ratings in the questionnaires, and also had more interaction with the talking head. Applying the t-test, it can be shown statistically that the females smiled, laughed and talked more than the males (p=0.01), even though there is no difference in mean thinking time between the two genders (p=0.161). (See Fig. 4.)

Further observations

To the question of whether the talking head should talk more and be more emotional, or should stay silent with a "poker face", about 70 percent of the participants declared that it should be more emotional, because "playing with a silent and poker-faced talking head is like playing alone". Our talking head, prepared for this system, was somewhere in the middle between the two extremes. The participants' usual opinion was that it is better to play with a more talkative partner having a varied and large vocabulary. Interpretation of identical expressions: the same facial expression of the talking head was interpreted differently by the players. At the beginning of the game the smile of the talking head was only an "innocent" smile, but at the end of the game, when the computer was close to winning, the same smile was interpreted as "malevolent".

5. Conclusion

This work summarized the results of human tests in which the effect of multi-modal interfaces in human-computer interaction was studied. Despite the relatively small number of participants in these tests, the results show that a system implementing MMI solutions appears more human-like and provides a better game experience. Giving the computer a face makes users more likely to express their feelings; in some cases they even treated the computer as a person, talking to it and arguing with it. In the future it would be good to run more dynamic tests on this topic with a larger number of participants. For this we are going to use another game, rock-paper-scissors, to which different multi-modal interfaces will be attached. With a larger number of tests, statistical measures could give more precise results. But even from these results it can be concluded that developing image processing techniques leading to better multi-modal interfaces has a bright future.

6. References

[1] Fazekas A, Sajó L. Multi-modal Human-computer Chess Player: The Turk 2. In Proc. of ITI2007, June 2007, Cavtat, Croatia.
[2] Hämäläinen P, Höysniemi J. A Computer Vision and Hearing Based User Interface for a Computer Game for Children. 7th ERCIM Workshop "User Interfaces For All".
[3] Igarashi T, Hughes J F. Voice as Sound: Using Non-verbal Voice Input for Interactive Control. 14th Annual Symposium on User Interface Software and Technology.
[4] Kunze K, Barry M, Heinz E A, Lukowicz P, Majoe D, Gutknecht J. Towards Recognizing Tai Chi - An Initial Experiment Using Wearable Sensors. IFAWC2006, Mobile Research Center, TZI Universität Bremen, Germany.
[5] Leite J, Pereira A. iCat, the Affective Chess Player. 7th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, vol. 3, Estoril, Portugal.
[6] Raisamo R. Multimodal Human-Computer Interaction: a constructive and empirical study. Academic dissertation, University of Tampere.
[7] Schröder M, Gebhard P, Charfuelan M, Endres C, Kipp M, Pammi S, Rumpler M, Türk O. Enhancing Animated Agents in an Instrumented Poker Game. KI 2008, LNCS, vol. 5243, Springer, 2008.


More information

Detecting perceived quality of interaction with a robot using contextual features. Ginevra Castellano, Iolanda Leite & Ana Paiva.

Detecting perceived quality of interaction with a robot using contextual features. Ginevra Castellano, Iolanda Leite & Ana Paiva. Detecting perceived quality of interaction with a robot using contextual features Ginevra Castellano, Iolanda Leite & Ana Paiva Autonomous Robots ISSN 0929-5593 DOI 10.1007/s10514-016-9592-y 1 23 Your

More information

Introduction to Artificial Intelligence

Introduction to Artificial Intelligence Introduction to Artificial Intelligence By Budditha Hettige Sources: Based on An Introduction to Multi-agent Systems by Michael Wooldridge, John Wiley & Sons, 2002 Artificial Intelligence A Modern Approach,

More information

GESTURE RECOGNITION SOLUTION FOR PRESENTATION CONTROL

GESTURE RECOGNITION SOLUTION FOR PRESENTATION CONTROL GESTURE RECOGNITION SOLUTION FOR PRESENTATION CONTROL Darko Martinovikj Nevena Ackovska Faculty of Computer Science and Engineering Skopje, R. Macedonia ABSTRACT Despite the fact that there are different

More information

Enrichment chapter: ICT and computers. Objectives. Enrichment

Enrichment chapter: ICT and computers. Objectives. Enrichment Enrichment chapter: ICT and computers Objectives By the end of this chapter the student should be able to: List some of the uses of Information and Communications Technology (ICT) Use a computer to perform

More information

Interactive Coffee Tables: Interfacing TV within an Intuitive, Fun and Shared Experience

Interactive Coffee Tables: Interfacing TV within an Intuitive, Fun and Shared Experience Interactive Coffee Tables: Interfacing TV within an Intuitive, Fun and Shared Experience Radu-Daniel Vatavu and Stefan-Gheorghe Pentiuc University Stefan cel Mare of Suceava, Department of Computer Science,

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Lecture 01 - Introduction Edirlei Soares de Lima What is Artificial Intelligence? Artificial intelligence is about making computers able to perform the

More information

Autonomic gaze control of avatars using voice information in virtual space voice chat system

Autonomic gaze control of avatars using voice information in virtual space voice chat system Autonomic gaze control of avatars using voice information in virtual space voice chat system Kinya Fujita, Toshimitsu Miyajima and Takashi Shimoji Tokyo University of Agriculture and Technology 2-24-16

More information

Live Feeling on Movement of an Autonomous Robot Using a Biological Signal

Live Feeling on Movement of an Autonomous Robot Using a Biological Signal Live Feeling on Movement of an Autonomous Robot Using a Biological Signal Shigeru Sakurazawa, Keisuke Yanagihara, Yasuo Tsukahara, Hitoshi Matsubara Future University-Hakodate, System Information Science,

More information

Multimodal Metric Study for Human-Robot Collaboration

Multimodal Metric Study for Human-Robot Collaboration Multimodal Metric Study for Human-Robot Collaboration Scott A. Green s.a.green@lmco.com Scott M. Richardson scott.m.richardson@lmco.com Randy J. Stiles randy.stiles@lmco.com Lockheed Martin Space Systems

More information

Playing Tangram with a Humanoid Robot

Playing Tangram with a Humanoid Robot Playing Tangram with a Humanoid Robot Jochen Hirth, Norbert Schmitz, and Karsten Berns Robotics Research Lab, Dept. of Computer Science, University of Kaiserslautern, Germany j_hirth,nschmitz,berns@{informatik.uni-kl.de}

More information

On the GED essay, you ll need to write a short essay, about four

On the GED essay, you ll need to write a short essay, about four Write Smart 373 What Is the GED Essay Like? On the GED essay, you ll need to write a short essay, about four or five paragraphs long. The GED essay gives you a prompt that asks you to talk about your beliefs

More information

Leading the Agenda. Everyday technology: A focus group with children, young people and their carers

Leading the Agenda. Everyday technology: A focus group with children, young people and their carers Leading the Agenda Everyday technology: A focus group with children, young people and their carers March 2018 1 1.0 Introduction Assistive technology is an umbrella term that includes assistive, adaptive,

More information

Introduction to Talking Robots

Introduction to Talking Robots Introduction to Talking Robots Graham Wilcock Adjunct Professor, Docent Emeritus University of Helsinki 20.9.2016 1 Walking and Talking Graham Wilcock 20.9.2016 2 Choregraphe Box Libraries Animations Breath,

More information

Eye catchers in comics: Controlling eye movements in reading pictorial and textual media.

Eye catchers in comics: Controlling eye movements in reading pictorial and textual media. Eye catchers in comics: Controlling eye movements in reading pictorial and textual media. Takahide Omori Takeharu Igaki Faculty of Literature, Keio University Taku Ishii Centre for Integrated Research

More information

Understanding the Mechanism of Sonzai-Kan

Understanding the Mechanism of Sonzai-Kan Understanding the Mechanism of Sonzai-Kan ATR Intelligent Robotics and Communication Laboratories Where does the Sonzai-Kan, the feeling of one's presence, such as the atmosphere, the authority, come from?

More information

Haptic messaging. Katariina Tiitinen

Haptic messaging. Katariina Tiitinen Haptic messaging Katariina Tiitinen 13.12.2012 Contents Introduction User expectations for haptic mobile communication Hapticons Example: CheekTouch Introduction Multiple senses are used in face-to-face

More information

Virtual Reality RPG Spoken Dialog System

Virtual Reality RPG Spoken Dialog System Virtual Reality RPG Spoken Dialog System Project report Einir Einisson Gísli Böðvar Guðmundsson Steingrímur Arnar Jónsson Instructor Hannes Högni Vilhjálmsson Moderator David James Thue Abstract 1 In computer

More information

LEVEL 4 (8 weeks hours 16 hours exams) FALL

LEVEL 4 (8 weeks hours 16 hours exams) FALL LEVEL 4 (8 weeks - 176 hours 16 hours exams) FALL - 2016-2017 Week Units Book subjects Content Writing Exams 1 5-9 Dec, 2016 Unit 1 p. 7 11 (don t include p.11) Unit 1 p. 11-13 p.11) Ice Breakers Present

More information

Can the Success of Mobile Games Be Attributed to Following Mobile Game Heuristics?

Can the Success of Mobile Games Be Attributed to Following Mobile Game Heuristics? Can the Success of Mobile Games Be Attributed to Following Mobile Game Heuristics? Reham Alhaidary (&) and Shatha Altammami King Saud University, Riyadh, Saudi Arabia reham.alhaidary@gmail.com, Shaltammami@ksu.edu.sa

More information

Federico Forti, Erdi Izgi, Varalika Rathore, Francesco Forti

Federico Forti, Erdi Izgi, Varalika Rathore, Francesco Forti Basic Information Project Name Supervisor Kung-fu Plants Jakub Gemrot Annotation Kung-fu plants is a game where you can create your characters, train them and fight against the other chemical plants which

More information

How Many Pixels Do We Need to See Things?

How Many Pixels Do We Need to See Things? How Many Pixels Do We Need to See Things? Yang Cai Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA ycai@cmu.edu

More information

What will the robot do during the final demonstration?

What will the robot do during the final demonstration? SPENCER Questions & Answers What is project SPENCER about? SPENCER is a European Union-funded research project that advances technologies for intelligent robots that operate in human environments. Such

More information

Robot Personality from Perceptual Behavior Engine : An Experimental Study

Robot Personality from Perceptual Behavior Engine : An Experimental Study Robot Personality from Perceptual Behavior Engine : An Experimental Study Dongwook Shin, Jangwon Lee, Hun-Sue Lee and Sukhan Lee School of Information and Communication Engineering Sungkyunkwan University

More information

Drumtastic: Haptic Guidance for Polyrhythmic Drumming Practice

Drumtastic: Haptic Guidance for Polyrhythmic Drumming Practice Drumtastic: Haptic Guidance for Polyrhythmic Drumming Practice ABSTRACT W e present Drumtastic, an application where the user interacts with two Novint Falcon haptic devices to play virtual drums. The

More information

Cognitive robots and emotional intelligence Cloud robotics Ethical, legal and social issues of robotic Construction robots Human activities in many

Cognitive robots and emotional intelligence Cloud robotics Ethical, legal and social issues of robotic Construction robots Human activities in many Preface The jubilee 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 was held in the conference centre of the Best Western Hotel M, Belgrade, Serbia, from 30 June to 2 July

More information

WORKSHOP JOURNAL AND HANDOUTS The Motivation Equation: Designing Motivation into Deeper Learning COSEBOC conference, April 25, 2013

WORKSHOP JOURNAL AND HANDOUTS The Motivation Equation: Designing Motivation into Deeper Learning COSEBOC conference, April 25, 2013 WORKSHOP JOURNAL AND HANDOUTS The Motivation Equation: Designing Motivation into Deeper Learning COSEBOC conference, April 25, 2013 Presented by Kathleen Cushman, co-founder of What Kids Can Do For more

More information

Personalized short-term multi-modal interaction for social robots assisting users in shopping malls

Personalized short-term multi-modal interaction for social robots assisting users in shopping malls Personalized short-term multi-modal interaction for social robots assisting users in shopping malls Luca Iocchi 1, Maria Teresa Lázaro 1, Laurent Jeanpierre 2, Abdel-Illah Mouaddib 2 1 Dept. of Computer,

More information

Human Factors. We take a closer look at the human factors that affect how people interact with computers and software:

Human Factors. We take a closer look at the human factors that affect how people interact with computers and software: Human Factors We take a closer look at the human factors that affect how people interact with computers and software: Physiology physical make-up, capabilities Cognition thinking, reasoning, problem-solving,

More information

How Do You Make a Program Wait?

How Do You Make a Program Wait? How Do You Make a Program Wait? How Do You Make a Program Wait? Pre-Quiz 1. What is an algorithm? 2. Can you think of a reason why it might be inconvenient to program your robot to always go a precise

More information

Introduction Installation Switch Skills 1 Windows Auto-run CDs My Computer Setup.exe Apple Macintosh Switch Skills 1

Introduction Installation Switch Skills 1 Windows Auto-run CDs My Computer Setup.exe Apple Macintosh Switch Skills 1 Introduction This collection of easy switch timing activities is fun for all ages. The activities have traditional video game themes, to motivate students who understand cause and effect to learn to press

More information

Emotion Secrets Webinar Text

Emotion Secrets Webinar Text Emotion Secrets Webinar Text Hello everyone. Welcome to the webinar. This one is for our European members. Of course, anybody is welcome. But I tried to choose a time that was good for all of you members

More information

Socially Assistive Robots: Using Narrative to Improve Nutrition Intervention. Barry Lumpkin

Socially Assistive Robots: Using Narrative to Improve Nutrition Intervention. Barry Lumpkin Socially Assistive Robots: Using Narrative to Improve Nutrition Intervention Barry Lumpkin Introduction The rate of obesity is on the rise Various health risks are associated with being overweight Nutrition

More information

Active Agent Oriented Multimodal Interface System

Active Agent Oriented Multimodal Interface System Active Agent Oriented Multimodal Interface System Osamu HASEGAWA; Katsunobu ITOU, Takio KURITA, Satoru HAYAMIZU, Kazuyo TANAKA, Kazuhiko YAMAMOTO, and Nobuyuki OTSU Electrotechnical Laboratory 1-1-4 Umezono,

More information

Learning Progression for Narrative Writing

Learning Progression for Narrative Writing Learning Progression for Narrative Writing STRUCTURE Overall The writer told a story with pictures and some writing. The writer told, drew, and wrote a whole story. The writer wrote about when she did

More information

II. Pertinent self-concepts and their possible application

II. Pertinent self-concepts and their possible application Thoughts on Creating Better MMORPGs By: Thomas Mainville Paper 2: Application of Self-concepts I. Introduction The application of self-concepts to MMORPG systems is a concept that appears not to have been

More information

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces

Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces Huidong Bai The HIT Lab NZ, University of Canterbury, Christchurch, 8041 New Zealand huidong.bai@pg.canterbury.ac.nz Lei

More information

Designing the user experience of a multi-bot conversational system

Designing the user experience of a multi-bot conversational system Designing the user experience of a multi-bot conversational system Heloisa Candello IBM Research São Paulo Brazil hcandello@br.ibm.com Claudio Pinhanez IBM Research São Paulo, Brazil csantosp@br.ibm.com

More information

User Interface Agents

User Interface Agents User Interface Agents Roope Raisamo (rr@cs.uta.fi) Department of Computer Sciences University of Tampere http://www.cs.uta.fi/sat/ User Interface Agents Schiaffino and Amandi [2004]: Interface agents are

More information

Speech Controlled Mobile Games

Speech Controlled Mobile Games METU Computer Engineering SE542 Human Computer Interaction Speech Controlled Mobile Games PROJECT REPORT Fall 2014-2015 1708668 - Cankat Aykurt 1502210 - Murat Ezgi Bingöl 1679588 - Zeliha Şentürk Description

More information

Immersion in Multimodal Gaming

Immersion in Multimodal Gaming Immersion in Multimodal Gaming Playing World of Warcraft with Voice Controls Tony Ricciardi and Jae min John In a Sentence... The goal of our study was to determine how the use of a multimodal control

More information

Get ready for your interview!

Get ready for your interview! Get ready for your interview! Step 1: Do your homework on the department or facility Do research to answer the following questions: What is the culture like if it s a new department or facility? What are

More information

A Virtual Human Agent for Training Clinical Interviewing Skills to Novice Therapists

A Virtual Human Agent for Training Clinical Interviewing Skills to Novice Therapists A Virtual Human Agent for Training Clinical Interviewing Skills to Novice Therapists CyberTherapy 2007 Patrick Kenny (kenny@ict.usc.edu) Albert Skip Rizzo, Thomas Parsons, Jonathan Gratch, William Swartout

More information

Topic Paper HRI Theory and Evaluation

Topic Paper HRI Theory and Evaluation Topic Paper HRI Theory and Evaluation Sree Ram Akula (sreerama@mtu.edu) Abstract: Human-robot interaction(hri) is the study of interactions between humans and robots. HRI Theory and evaluation deals with

More information

Live Hand Gesture Recognition using an Android Device

Live Hand Gesture Recognition using an Android Device Live Hand Gesture Recognition using an Android Device Mr. Yogesh B. Dongare Department of Computer Engineering. G.H.Raisoni College of Engineering and Management, Ahmednagar. Email- yogesh.dongare05@gmail.com

More information

Lecturers. Alessandro Vinciarelli

Lecturers. Alessandro Vinciarelli Lecturers Alessandro Vinciarelli Alessandro Vinciarelli, lecturer at the University of Glasgow (Department of Computing Science) and senior researcher of the Idiap Research Institute (Martigny, Switzerland.

More information

Non Verbal Communication of Emotions in Social Robots

Non Verbal Communication of Emotions in Social Robots Non Verbal Communication of Emotions in Social Robots Aryel Beck Supervisor: Prof. Nadia Thalmann BeingThere Centre, Institute for Media Innovation, Nanyang Technological University, Singapore INTRODUCTION

More information

KI-SUNG SUH USING NAO INTRODUCTION TO INTERACTIVE HUMANOID ROBOTS

KI-SUNG SUH USING NAO INTRODUCTION TO INTERACTIVE HUMANOID ROBOTS KI-SUNG SUH USING NAO INTRODUCTION TO INTERACTIVE HUMANOID ROBOTS 2 WORDS FROM THE AUTHOR Robots are both replacing and assisting people in various fields including manufacturing, extreme jobs, and service

More information

Outline. Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types

Outline. Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types Intelligent Agents Outline Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types Agents An agent is anything that can be viewed as

More information

Towards Strategic Kriegspiel Play with Opponent Modeling

Towards Strategic Kriegspiel Play with Opponent Modeling Towards Strategic Kriegspiel Play with Opponent Modeling Antonio Del Giudice and Piotr Gmytrasiewicz Department of Computer Science, University of Illinois at Chicago Chicago, IL, 60607-7053, USA E-mail:

More information

Implementing Physical Capabilities for an Existing Chatbot by Using a Repurposed Animatronic to Synchronize Motor Positioning with Speech

Implementing Physical Capabilities for an Existing Chatbot by Using a Repurposed Animatronic to Synchronize Motor Positioning with Speech Implementing Physical Capabilities for an Existing Chatbot by Using a Repurposed Animatronic to Synchronize Motor Positioning with Speech Alex Johnson, Tyler Roush, Mitchell Fulton, Anthony Reese Kent

More information

Beats Down: Using Heart Rate for Game Interaction in Mobile Settings

Beats Down: Using Heart Rate for Game Interaction in Mobile Settings Beats Down: Using Heart Rate for Game Interaction in Mobile Settings Claudia Stockhausen, Justine Smyzek, and Detlef Krömker Goethe University, Robert-Mayer-Str.10, 60054 Frankfurt, Germany {stockhausen,smyzek,kroemker}@gdv.cs.uni-frankfurt.de

More information

SIGVerse - A Simulation Platform for Human-Robot Interaction Jeffrey Too Chuan TAN and Tetsunari INAMURA National Institute of Informatics, Japan The

SIGVerse - A Simulation Platform for Human-Robot Interaction Jeffrey Too Chuan TAN and Tetsunari INAMURA National Institute of Informatics, Japan The SIGVerse - A Simulation Platform for Human-Robot Interaction Jeffrey Too Chuan TAN and Tetsunari INAMURA National Institute of Informatics, Japan The 29 th Annual Conference of The Robotics Society of

More information

Development of Video Chat System Based on Space Sharing and Haptic Communication

Development of Video Chat System Based on Space Sharing and Haptic Communication Sensors and Materials, Vol. 30, No. 7 (2018) 1427 1435 MYU Tokyo 1427 S & M 1597 Development of Video Chat System Based on Space Sharing and Haptic Communication Takahiro Hayashi 1* and Keisuke Suzuki

More information

Guide for lived experience speakers: preparing for an interview or speech

Guide for lived experience speakers: preparing for an interview or speech Guide for lived experience speakers: preparing for an interview or speech How do speakers decide whether or not to do an interview? Many people feel they should do an interview if they are asked. Before

More information

ModaDJ. Development and evaluation of a multimodal user interface. Institute of Computer Science University of Bern

ModaDJ. Development and evaluation of a multimodal user interface. Institute of Computer Science University of Bern ModaDJ Development and evaluation of a multimodal user interface Course Master of Computer Science Professor: Denis Lalanne Renato Corti1 Alina Petrescu2 1 Institute of Computer Science University of Bern

More information

Android Speech Interface to a Home Robot July 2012

Android Speech Interface to a Home Robot July 2012 Android Speech Interface to a Home Robot July 2012 Deya Banisakher Undergraduate, Computer Engineering dmbxt4@mail.missouri.edu Tatiana Alexenko Graduate Mentor ta7cf@mail.missouri.edu Megan Biondo Undergraduate,

More information

Roleplay Technologies: The Art of Conversation Transformed into the Science of Simulation

Roleplay Technologies: The Art of Conversation Transformed into the Science of Simulation The Art of Conversation Transformed into the Science of Simulation Making Games Come Alive with Interactive Conversation Mark Grundland What is our story? Communication skills training by virtual roleplay.

More information

HUMAN COMPUTER INTERFACE

HUMAN COMPUTER INTERFACE HUMAN COMPUTER INTERFACE TARUNIM SHARMA Department of Computer Science Maharaja Surajmal Institute C-4, Janakpuri, New Delhi, India ABSTRACT-- The intention of this paper is to provide an overview on the

More information

Public Displays of Affect: Deploying Relational Agents in Public Spaces

Public Displays of Affect: Deploying Relational Agents in Public Spaces Public Displays of Affect: Deploying Relational Agents in Public Spaces Timothy Bickmore Laura Pfeifer Daniel Schulman Sepalika Perera Chaamari Senanayake Ishraque Nazmi Northeastern University College

More information

Interface Design V: Beyond the Desktop

Interface Design V: Beyond the Desktop Interface Design V: Beyond the Desktop Rob Procter Further Reading Dix et al., chapter 4, p. 153-161 and chapter 15. Norman, The Invisible Computer, MIT Press, 1998, chapters 4 and 15. 11/25/01 CS4: HCI

More information

Featured Photographer #12. September, Interview with. Amira Issmail. Photographer, Hamburg, Germany

Featured Photographer #12. September, Interview with. Amira Issmail. Photographer, Hamburg, Germany Featured Photographer #12 September, 2015 Interview with Amira Issmail Photographer, Hamburg, Germany Dear Friends and Readers! Our twelfth issue takes us to Hamburg, Germany. Photographer and artist Amira

More information

REALIZATION OF TAI-CHI MOTION USING A HUMANOID ROBOT Physical interactions with humanoid robot

REALIZATION OF TAI-CHI MOTION USING A HUMANOID ROBOT Physical interactions with humanoid robot REALIZATION OF TAI-CHI MOTION USING A HUMANOID ROBOT Physical interactions with humanoid robot Takenori Wama 1, Masayuki Higuchi 1, Hajime Sakamoto 2, Ryohei Nakatsu 1 1 Kwansei Gakuin University, School

More information

FP7 ICT Call 6: Cognitive Systems and Robotics

FP7 ICT Call 6: Cognitive Systems and Robotics FP7 ICT Call 6: Cognitive Systems and Robotics Information day Luxembourg, January 14, 2010 Libor Král, Head of Unit Unit E5 - Cognitive Systems, Interaction, Robotics DG Information Society and Media

More information

The University of Algarve Informatics Laboratory

The University of Algarve Informatics Laboratory arxiv:0709.1056v2 [cs.hc] 13 Sep 2007 The University of Algarve Informatics Laboratory UALG-ILAB September, 2007 A Sudoku Game for People with Motor Impairments Stéphane Norte, and Fernando G. Lobo Department

More information

Simple Poker Game Design, Simulation, and Probability

Simple Poker Game Design, Simulation, and Probability Simple Poker Game Design, Simulation, and Probability Nanxiang Wang Foothill High School Pleasanton, CA 94588 nanxiang.wang309@gmail.com Mason Chen Stanford Online High School Stanford, CA, 94301, USA

More information

Evaluation of an Enhanced Human-Robot Interface

Evaluation of an Enhanced Human-Robot Interface Evaluation of an Enhanced Human-Robot Carlotta A. Johnson Julie A. Adams Kazuhiko Kawamura Center for Intelligent Systems Center for Intelligent Systems Center for Intelligent Systems Vanderbilt University

More information

Cognitive Science: What Is It, and How Can I Study It at RPI?

Cognitive Science: What Is It, and How Can I Study It at RPI? Cognitive Science: What Is It, and How Can I Study It at RPI? What is Cognitive Science? Cognitive Science: Aspects of Cognition Cognitive science is the science of cognition, which includes such things

More information

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor

Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Joan De Boeck, Karin Coninx Expertise Center for Digital Media Limburgs Universitair Centrum Wetenschapspark 2, B-3590 Diepenbeek, Belgium

More information