
Unimodal and Multimodal Human Computer Interaction: A Modern Overview

Pratibha Adkar
Modern College of Engineering, MCA Department, Pune-5, India

Abstract. In this paper I review the major approaches to unimodal human computer interaction from a computer vision perspective. In particular, I focus on three categories of unimodal human computer interaction and discuss selected research areas within each category in detail. I then discuss the limitations of unimodal interaction and the resulting need for multimodal interaction, and close with the benefits and architecture of multimodal HCI.

Keywords: Unimodal, Multimodal, Interaction, Human Computer Interaction, Audio, Visual, Sensor

I. INTRODUCTION

Human Computer Interaction (HCI) is required to optimize the performance of human and computer together as a system. Researchers in HCI are interested in developing new design methodologies, experimenting with new hardware devices, prototyping new software systems, exploring new paradigms for interaction, and developing models and theories of interaction. This paper provides an overview of unimodal and multimodal human computer interaction. It first discusses unimodal HCI and its three categories, with examples of each category treated in detail in the following sections. It then discusses the limitations of unimodal HCI, the need for multimodal HCI, and the benefits, applications and architecture of multimodal HCI.

II. UNIMODAL HCI

An interface depends on the number and variety of its inputs and outputs, which are the communication channels that enable users to interact with a computer. Each of these independent single channels is called a modality. A system that supports only one modality, i.e. is based on a single channel of input and restricted to one mode of human-computer interaction, is called unimodal. Examples are text-based user interfaces, graphical user interfaces, pointer-based interfaces and touch-based interfaces. The three categories of unimodal systems are:

I. Visual-Based
II. Audio-Based
III. Sensor-Based

II.1 VISUAL BASED

Vision-based human computer interaction provides a broad range of input capabilities by employing computer vision techniques to process sensor data from one or more cameras in real time, in order to reliably estimate relevant visual information about the user. Vision-based interaction can carry rich information in a non-intrusive manner, and it is probably the most widespread area of HCI research. Given the extent of applications and the variety of open problems and approaches, researchers have tried to tackle the different aspects of human responses that can be recognized as visual signals. The examples discussed in this section are:

Facial Expression Analysis (Emotion Recognition)
Body Movement Tracking (Large-Scale)
Gesture Recognition
Gaze Detection (Eye Movement Tracking)

1. FACIAL EXPRESSION ANALYSIS (EMOTION RECOGNITION):

Facial expressions are the facial changes in response to a person's internal emotional states, intentions, or social communications.

Basic Structure of Facial Expression Analysis Systems

Facial expression analysis includes both measurement of facial motion and recognition of expression. The general approach to automatic facial expression analysis (AFEA) consists of three steps (Fig. 1): face acquisition, facial data extraction and representation, and facial expression recognition.

Fig. 1. Basic structure of facial expression analysis systems.
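As a minimal sketch of the first AFEA step, the following code locates a face region in a webcam frame. The choice of OpenCV's bundled Haar cascade detector is an illustrative assumption; the paper does not prescribe a particular detector.

```python
# Face acquisition sketch using OpenCV's bundled Haar cascade (illustrative choice).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def acquire_faces(frame):
    """Return bounding boxes (x, y, w, h) of faces in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

cap = cv2.VideoCapture(0)          # default webcam
ok, frame = cap.read()
if ok:
    for (x, y, w, h) in acquire_faces(frame):
        face_region = frame[y:y+h, x:x+w]   # handed on to feature extraction
cap.release()
```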

Face acquisition is a processing stage that automatically finds the face region in the input images or sequences. It can use a detector on every frame, or detect the face only in the first frame and then track it through the remainder of the video sequence. To handle large head motion, a head finder, head tracking and pose estimation can be added to a facial expression analysis system.

After the face is located, the next step is to extract and represent the facial changes caused by facial expressions. There are two main types of approach to facial feature extraction for expression analysis: geometric feature-based methods and appearance-based methods. Geometric facial features represent the shape and locations of facial components (including the mouth, eyes, brows and nose); the facial components or facial feature points are extracted to form a feature vector that represents the face geometry. In appearance-based methods, image filters such as Gabor wavelets are applied either to the whole face or to specific regions of the face image to extract a feature vector. Depending on the feature extraction method, the effects of in-plane head rotation and differing face scales can be eliminated by face normalization before feature extraction, or by the feature representation before the expression recognition step.

Facial expression recognition is the last stage of an AFEA system. The facial changes can be identified as facial action units or as prototypic emotional expressions. Depending on whether temporal information is used, recognition approaches can be classified as frame-based or sequence-based.

2. BODY MOVEMENT TRACKING (LARGE SCALE):

Tracking of large-scale body movements (head, arms, torso and legs) is necessary to interpret pose and motion in many multimodal HCI applications. Three important issues arise in articulated motion analysis: representation (joint angles versus motion of all the sub-parts), computational paradigm (deterministic versus probabilistic), and computation reduction.

Human movement tracking systems generate, in real time, data that represent the measured human movement. In general such systems consist of the following items, some of which can be omitted depending on the technology involved (Fig. 2):

The human
Sensor(s) and/or marker(s) or source(s) + interface electronics (on the body)
Source(s) or marker(s) and/or sensor(s) + interface electronics (external)
Computer interface electronics
Computer
Data representing the human movement

Fig. 2. Human movement tracking system.

Movement of an object is always relative to a point of reference. To measure such movement, a sensor attached to the object (e.g. a body part) can sense (measure the distance to, or the orientation or position of) a source attached to the reference; or the sensor, attached to the reference, senses a source attached to the object. The human body consists of many moving parts, so, taking one body part as a reference for another, the source and sensor can both be attached to the body. The source can be either natural (e.g. the earth's gravitational field) or artificial (e.g. light emitted by an LED).
Thus a taxonomy of human movement tracking systems is possible that extends an existing taxonomy (e.g. Meyer et al 1992):

Inside-in systems: sensor(s) and source(s) are both worn on the body.
Inside-out systems: sensor(s) on the body sense(s) external artificial or natural source(s).
Outside-in systems: external sensor(s) sense(s) artificial or natural source(s) on the body.

In outside-in systems, artificial sources are usually called markers or beacons, which can be either passive (e.g. light reflectors) or active (e.g. IR LEDs). Other taxonomies have been proposed. The medium of the sensing technology (acoustical, optical, electromagnetic or mechanical) is often used as a classifier (Ferrin 1991, Meyer et al 1992, and many others); however, inside-in tracking systems are generally not included in that taxonomy. The body part involved (head, hand, finger, leg, knee, ankle, foot, spine, face, eye, arm, elbow, chest, pelvis, etc.) is also frequently used as a classifier.
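To make the three-way distinction concrete, here is a tiny illustrative classifier; the function name and the example systems in the comments are mine, not from the taxonomy itself.

```python
# Illustrative encoding of the Meyer et al.-style tracking taxonomy.
def classify_tracker(sensor_on_body: bool, source_on_body: bool) -> str:
    if sensor_on_body and source_on_body:
        return "inside-in"    # e.g. a glove whose flex sensors sense the hand itself
    if sensor_on_body:
        return "inside-out"   # e.g. a head-worn camera sensing room landmarks
    if source_on_body:
        return "outside-in"   # e.g. external cameras tracking IR LED markers
    raise ValueError("at least one of sensor or source must be on the body")
```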

3. GESTURE RECOGNITION:

Gestures are usually understood as hand and body movements that can pass information from one person to another.

Gesture: a gesture is a sequence of postures connected by motions over a short time span.
Posture: a posture is a specific configuration of hand flexion observed at some time instant. Usually a gesture consists of one or more postures occurring sequentially on the time axis.

Bobick and Wilson define gestures as motions of the body that are intended to communicate with other agents. For successful communication, sender and receiver must share the same set of information for a particular gesture.

Fig. 3. Block diagram of a hand gesture recognition system.

Vision-based analysis is based on the way human beings perceive information about their surroundings, yet it is probably the most difficult to implement in a satisfactory way. Several approaches have been tested so far. One is to build a three-dimensional model of the human hand: the model is matched to images of the hand from one or more cameras, parameters corresponding to palm orientation and joint angles are estimated, and these parameters are then used to perform gesture classification. A second approach is to capture the image with a camera, extract features from it, and use those features as input to a classification algorithm.

Preprocessing is an essential task in a hand gesture recognition system. It consists of two steps: segmentation and morphological filtering. Segmentation converts the grayscale image into a binary image so that there are only two objects in the image, the hand and the background. After this conversion, morphological filtering is used to ensure the image contains no noise. Morphological techniques comprise four operations: dilation, erosion, opening and closing; a sequence of dilations and erosions is applied to obtain a smooth, closed and complete contour of the gesture. Feature extraction is very important, since it provides the input to the classifier; it begins by finding the edges of the segmented and morphologically filtered image.

Gesture recognition can be used to determine where a person is pointing, which is useful for identifying the context of statements or instructions. Immersive game technology is another important example, where gestures can be used to control interactions within video games, making the player's experience more interactive and immersive.
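The following is a minimal sketch of the preprocessing stage just described, using OpenCV; the threshold method (Otsu) and the 5x5 kernel size are illustrative assumptions, not values from the paper.

```python
# Segmentation + morphological filtering sketch for hand gesture preprocessing.
import cv2
import numpy as np

def preprocess_hand(gray_image: np.ndarray) -> np.ndarray:
    """Segment a grayscale frame into a binary hand/background image
    and smooth it with morphological filtering."""
    # Segmentation: grayscale -> binary (hand vs. background);
    # Otsu's method picks the threshold automatically.
    _, binary = cv2.threshold(
        gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological filtering: opening (erosion then dilation) removes
    # speckle noise, closing (dilation then erosion) fills small holes,
    # yielding a smooth, closed contour.
    kernel = np.ones((5, 5), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

def extract_edges(binary: np.ndarray) -> np.ndarray:
    """Feature extraction starts from the edges of the filtered image."""
    return cv2.Canny(binary, 100, 200)
```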
4. GAZE DETECTION:

Gaze, defined as the direction in which the eyes are pointing in space, is mostly an indirect form of interaction between user and machine, used chiefly to better understand the user's attention, intent or focus in context-sensitive situations. It has been studied since as early as 1879 in psychology, and more recently in neuroscience and in computing applications. Eye tracking systems are widely used in assisting people with disabilities, where eye tracking plays a central role in command-and-action scenarios; for example, blinking can be mapped to clicking. It is worth noting that visual approaches are used almost everywhere to assist other types of interaction, as when lip movement tracking is used as an aid to speech recognition error correction.

Gaze tracking systems can be intrusive or non-intrusive. Intrusive systems constrain the user with head fixation or head-mounted equipment; non-intrusive systems, by contrast, provide much more comfort and mobility. Intrusive HCI systems are now more or less historical: a modern, practically usable HCI system must be non-intrusive. Understanding the methods and principles used by these systems requires basic knowledge of the anatomy and physiology of the human eye.

Tracking of Eye Movement

The locations of the face and eyes must be known before eye movements can be tracked; we assume this location information has already been obtained through existing techniques. This discussion concentrates on tracking the eye movement itself: a rough eye position is not sufficient for tracking gaze accurately. Measuring the direction of visual attention requires more precise data from the eye image, so a distinctive feature of the eye image must be measured in any arrangement. The pupil of people with dark or dark-brown eyes can hardly be differentiated from the iris in captured images, although if the image is captured from close range the pupil can be detected even under ordinary lighting conditions. It was therefore decided to track the iris. Because the sclera is light and the iris is dark, this boundary can easily be detected and tracked optically, which is particularly appropriate for people with darker iris color (for instance, Asians). Young has addressed the iris tracking problem using a head-mounted camera. Some issues have to be emphasized, however, arising from the following:

1. Coverage of the top and bottom of the limbus by the eyelids.
2. Excessive coverage of the eyes by the eyelids (in some cases).
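As an illustrative sketch of exploiting the dark-iris/light-sclera boundary, the code below looks for the dominant circular boundary in a cropped eye image. The Hough-circle approach and all parameter values are assumptions for a close-range image, not the method of the cited systems.

```python
# Iris localization sketch via the circular dark/light boundary.
import cv2
import numpy as np

def locate_iris(eye_gray: np.ndarray):
    """Return (x, y, r) of the most prominent circular boundary in a
    cropped grayscale eye image, or None if nothing circular is found."""
    blurred = cv2.medianBlur(eye_gray, 5)     # suppress specular highlights
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=eye_gray.shape[0],
        param1=100, param2=20, minRadius=8, maxRadius=40)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    # Drift of (x, y) across frames approximates the gaze direction.
    return x, y, r
```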

II.2 AUDIO BASED:

Audio-based interaction between a computer and a human is another important area of HCI systems. It deals with information acquired from various audio signals. This information may not be as variable as visual signals, but in many cases it is a more trustworthy, helpful and unique provider of information.

1. SPEECH RECOGNITION:

A speech recognition system performs two fundamental operations: signal modeling and pattern matching. Signal modeling is the process of converting the speech signal into a set of parameters; it involves four basic operations: spectral shaping, feature extraction, parametric transformation and statistical modeling.

Spectral shaping is the process of converting the speech signal from a sound pressure wave into a digital signal and emphasizing important frequency components in it. It involves two basic operations: digitization, i.e. conversion of the analog speech signal from a sound pressure wave to a digital signal; and digital filtering, i.e. emphasizing the important frequency components in the signal.

Fig. 4. Basic operations in spectral shaping.

Feature extraction is the process of obtaining features such as power, pitch and vocal tract configuration from the speech signal. In speaker-independent speech recognition, a premium is placed on extracting features that are somewhat invariant to changes of speaker, so feature extraction involves analysis of the speech signal. Broadly, feature extraction techniques are classified into temporal analysis, in which the speech waveform itself is used for analysis, and spectral analysis, in which a spectral representation of the speech signal is used.

Parametric transformation is the process of converting these features into signal parameters through differentiation and concatenation. Statistical modeling involves converting the parameters into signal observation vectors. Pattern matching is then the task of finding, in memory, the parameter set that most closely matches the parameter set obtained from the input speech signal.
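A minimal sketch of the digital-filtering step of spectral shaping follows, assuming digitization has already produced an array of samples. Pre-emphasis is a common choice for this filter; the 0.97 coefficient and the 25 ms/10 ms framing are conventional assumptions, not values given in the paper.

```python
# Pre-emphasis filtering and framing, the front end of signal modeling.
import numpy as np

def spectral_shaping(samples: np.ndarray, alpha: float = 0.97) -> np.ndarray:
    """Emphasize high-frequency components: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(samples[0], samples[1:] - alpha * samples[:-1])

def frame_signal(samples: np.ndarray, frame_len: int = 400,
                 hop: int = 160) -> np.ndarray:
    """Split the shaped signal into overlapping frames for feature
    extraction (25 ms frames with a 10 ms hop at 16 kHz)."""
    n = 1 + max(0, (len(samples) - frame_len) // hop)
    return np.stack([samples[i*hop : i*hop + frame_len] for i in range(n)])
```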
2. SPEAKER RECOGNITION:

Speaker recognition, also called voice recognition, is the identification of the person who is speaking from characteristics of their voice. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice, and it can be used to authenticate or verify the identity of a speaker as part of a security process.

Every speaker recognition system has two phases: enrollment and verification. During enrollment, the speaker's voice is recorded and, typically, a number of features are extracted to form a voice print, template or model. In the verification phase, a speech sample or "utterance" is compared against a previously created voice print. In identification systems, the utterance is compared against multiple voice prints to determine the best match(es), whereas verification systems compare an utterance against a single voice print. Because of the process involved, verification is faster than identification.

Speaker recognition systems fall into two categories: text-dependent and text-independent.

Text-dependent: If the text must be the same for enrollment and verification, the recognition is text-dependent. In a text-dependent system, prompts can either be common across all speakers (e.g. a common pass phrase) or unique. In addition, shared secrets (e.g. passwords and PINs) or knowledge-based information can be employed to create a multi-factor authentication scenario.

Text-independent: Text-independent systems are most often used for speaker identification, as they require little if any cooperation from the speaker. In this case the text at enrollment and at test is different; in fact, enrollment may happen without the user's knowledge, as in many forensic applications. Since text-independent technologies do not compare what was said at enrollment and verification, verification applications tend also to employ speech recognition to determine what the user is saying at the point of authentication. Text-independent systems use both acoustic and speech analysis techniques.
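A toy sketch of the enrollment/verification split follows. Real systems use richer features and statistical models (e.g. Gaussian mixtures, as in reference [11]); here a mean feature vector stands in for the voice print and cosine similarity for the matching step, both of which are simplifying assumptions.

```python
# Toy enrollment/verification sketch for speaker recognition.
import numpy as np

def enroll(feature_frames: np.ndarray) -> np.ndarray:
    """Average per-frame features into a single voice-print template."""
    return feature_frames.mean(axis=0)

def verify(utterance_frames: np.ndarray, voice_print: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Compare one utterance against one template: verification, not
    identification, so only a single comparison is needed."""
    probe = utterance_frames.mean(axis=0)
    score = np.dot(probe, voice_print) / (
        np.linalg.norm(probe) * np.linalg.norm(voice_print))
    return score >= threshold
```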

3. AUDITORY EMOTION ANALYSIS:

Recent endeavors to integrate human emotions into intelligent human computer interaction have initiated efforts to analyze emotions in audio signals. Speech consists of words spoken in a particular way, and the information about emotion resides in that way of speaking. Emotions affect the open/close ratio of the vocal chords, and hence the quality of the voice. Sadness, for example, influences voice quality so that a creaky voice may be produced; in this case the speech has low pitch and low intensity.

4. MUSICAL INTERACTION:

Consuming music through interactive devices has great cultural, social and commercial significance. Music technology is an inevitable part of public and commercial space, and also a way to control experiences in that space. Music interaction is something many people engage in every day, and music moves a considerable amount of money in the technology and content business, making it an important domain of innovation. Music generation and interaction is a very new area of HCI that finds its most interesting applications in the art industry.

II.3 SENSOR BASED

Sensor-based interactions are increasingly becoming an essential part of the design of user experiences. This type of interaction ranges from the activation of controls to context-aware delivery of relevant information to people at appropriate times. Sensor-based interaction can be considered a combination of a variety of areas with a wide range of applications, in which at least one physical sensor is placed between the user and the machine to provide better interaction. These sensors can be very primitive or very sophisticated.

1. PEN-BASED INTERACTION

The pen is an appropriate input device for intuitive and direct interaction. Instead of pointing at the screen directly, a small pen-like plastic stick is used to point and draw on the screen. The light pen is connected to the screen by a cable and, in operation, is held to the screen, where it detects a burst of light from the screen phosphor during the display scan. The light pen can therefore address individual pixels, so it is much more accurate than the touch screen and can be used for fine selection and drawing. It is very direct, in that the relationship between the device and the thing selected is immediate.

Using a stylus in combination with an electronic display closely resembles the familiar pen-and-paper situation. Drawings, writing and commands can be produced with the pen directly on the display tablet; the intentions of the user need not be mediated by a command language or by a sequence of actions for selecting icons, positions or keys. Comparisons between the pen and other input devices show specific advantages for the pen over more conventional input devices, specifically for tasks such as text or graphics editing. Pen-based interaction indeed offers advantages that other input devices cannot provide. However, the problems that arise when the familiar pen-and-paper situation is transferred to a computer environment have moderated the enthusiasm for pen-based interfaces. On the other hand, facilities that are not possible with a simple pen and paper can be provided by adding a pen-input device to a computer that is able to react to the user's input and to accept several interaction styles.

2. KEYBOARD AND MOUSE

Entering text is one of our main activities when using the computer. The most obvious means of text entry is the plain keyboard, but there are several variations on this due to different keyboard layouts.

1. The alphanumeric keyboard. The keyboard is still one of the most common input devices in use today. It is used for entering textual data and commands. The vast majority of keyboards have a standardized layout, known by the first six letters of the top row of alphabetical keys: QWERTY. The layout of the digits and letters on a QWERTY keyboard is fixed, but the non-alphanumeric keys vary between keyboards.

2. Chord keyboards. These are significantly different from normal alphanumeric keyboards. Only a few keys, four or five, are used, and letters are produced by pressing one or more of the keys at once. Keyboards work by a keypress closing a connection, causing a character code to be sent to the computer. The connection is usually via a lead, but wireless systems also exist.
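To make the chord idea concrete, the sketch below maps sets of simultaneously pressed keys to letters. The mapping itself is invented for the example; real chord keyboards, such as the Microwriter, each define their own codes.

```python
# Illustrative chord-to-letter table for a five-key chord keyboard.
CHORDS = {
    frozenset({0}): "a",
    frozenset({1}): "e",
    frozenset({0, 1}): "t",
    frozenset({0, 2, 4}): "s",
}

def decode_chord(pressed_keys: set) -> str:
    """Return the letter for the set of keys pressed at once."""
    return CHORDS.get(frozenset(pressed_keys), "?")

print(decode_chord({0, 1}))   # -> "t"
```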
The mouse has become a major component of the majority of desktop computer systems sold today; it is the little box with the tail connecting it to the machine in our basic computer-system picture. It is a small, palm-sized box housing a weighted ball: as the box is moved over the tabletop, the ball is rolled by the table and rotates inside the housing. This rotation is detected by small rollers in contact with the ball, which adjust the values of potentiometers. The potentiometers are aligned in different directions so that they can detect both horizontal and vertical motion. The relative motion information is passed to the computer via a wire attached to the box (or, in some cases, via a wireless or infrared link) and moves a pointer on the screen called the cursor. The whole arrangement tends to look rodent-like, with the box as the body and the wire as the tail; hence the term mouse. The mouse operates in a planar fashion, moving around the desktop, and is an indirect input device, since a transformation is required to map from the horizontal plane of the desktop to the vertical plane of the screen. A major advantage of the mouse is that the cursor itself is small and can be manipulated easily without obscuring the display.
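The indirect, relative nature of the mouse can be sketched as below: relative motion counts accumulate into an absolute cursor position, clamped to the screen. Screen size and gain are illustrative assumptions.

```python
# Mapping relative mouse motion to an absolute, clamped cursor position.
SCREEN_W, SCREEN_H, GAIN = 1920, 1080, 2.0
cursor = [SCREEN_W // 2, SCREEN_H // 2]

def on_mouse_moved(dx: int, dy: int) -> tuple:
    """dx/dy are relative counts from the rollers/potentiometers; the
    desk's horizontal plane is remapped onto the screen's vertical plane."""
    cursor[0] = min(max(cursor[0] + int(GAIN * dx), 0), SCREEN_W - 1)
    cursor[1] = min(max(cursor[1] + int(GAIN * dy), 0), SCREEN_H - 1)
    return tuple(cursor)
```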

3. JOYSTICKS

The joystick is an indirect input device that takes up very little space. It consists of a small palm-sized box with a stick or shaped grip sticking up from it; movements of the stick cause corresponding movements of the screen cursor. There are two types of joystick: absolute and isometric. In the absolute joystick, movement is the important characteristic: the position of the joystick in its base corresponds to the position of the cursor on the screen. In the isometric joystick, the pressure on the stick corresponds to the velocity of the cursor, and when released the stick returns to its upright centered position; this type is also called the velocity-controlled joystick. The joystick, which competed with the mouse at SRI and much later became the standard input device for the video games of the 1980s, is no less military: it is still used today for the remote control of guided missiles. Joysticks are inexpensive and fairly robust, and for this reason they are often found in computer games; they are also used on many laptop computers to control the cursor.

4. MOTION TRACKING SENSORS AND DIGITIZERS

Motion tracking sensors and digitizers are state-of-the-art technology that has revolutionized the movie, animation, art and video-game industries. They come in the form of wearable clothing or joint sensors, and they have made computers far more able to interact with reality and humans able to create their worlds virtually. A motion capture session records the movements of an actor as animation data, which are later mapped onto a 3D model (a human, a giant robot, or any other model created by a computer artist) so that the model moves in the same way as was recorded. This is comparable to the older technique in which the visual appearance of an actor's motion was filmed and used as a guide for the frame-by-frame motion of a hand-drawn animated character.

Fig. 5. Wearable motion capture clothing for the making of video games (taken from Operation Sports).

5. HAPTIC SENSORS

Haptic is a generic term relating to touch, but it can be roughly divided into two areas: (1) cutaneous/tactile perception, which is concerned with tactile sensations arising from stimuli to the skin, e.g. heat, pressure, vibration, slip and pain; and (2) kinesthesis/proprioception, which is the perception of movement and position: limb position, motion and force. Both are useful in interaction. Haptic and pressure sensors are of special interest for applications in robotics and virtual reality. New humanoid robots include hundreds of haptic sensors that make them sensitive to touch; these types of sensors are also used in medical surgery applications.

Fig. 6. Samsung haptic cell phone.

6. PRESSURE SENSORS

A pressure control mechanism allows the user to iterate through a list of available pressure levels. In most pressure-based interactions, pressure input is better controlled in one direction, i.e. going upward from 0 to the highest value, than in the reverse direction. As a result, in a uni-pressure augmented mouse the pressure control mechanism is basic, consisting simply of pressing down on one sensor to iterate through a limited number of pressure levels. It would, however, be beneficial to devise a pressure control mechanism that facilitates controlling input in both directions; this can be provided by specialized hardware or by augmenting the mouse with more than one sensor. Many types of interaction, such as mode switching and menu selection, can benefit from a larger number of pressure levels than has typically been reported, and increasing the number of accessible pressure levels may be possible with two sensors (a toy sketch of such a two-sensor scheme appears at the end of this sensor-based section).

7. TASTE/SMELL SENSORS

Smell is one of the strongest cues to memory. Various historical recreations, such as the Jorvik centre in York, England, use smells to create a feeling of immersion in their static displays of past life; this example uses a fixed smell in a particular location. Smell is a complex, multi-dimensional sense with a peculiar ability to trigger memory, but it cannot be changed rapidly. These qualities may prove valuable in areas where a general sense of location and awareness is desirable.
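Here is the toy two-sensor pressure sketch referred to in the pressure-sensor subsection above. The sensor range, number of levels and update rule are all assumptions made for illustration, not a reported design.

```python
# Toy pressure control: one sensor moves the level up, a second moves it
# down, giving the bidirectional control a single sensor handles poorly.
MAX_RAW, N_LEVELS = 1023, 8     # e.g. a 10-bit sensor quantized to 8 levels

def to_level(raw: int) -> int:
    """Quantize a raw sensor reading into one of N_LEVELS pressure levels."""
    return min(raw * N_LEVELS // (MAX_RAW + 1), N_LEVELS - 1)

def update_level(level: int, raw_up: int, raw_down: int) -> int:
    """Move up using the first sensor, down using the second, clamped."""
    step = to_level(raw_up) - to_level(raw_down)
    return min(max(level + step, 0), N_LEVELS - 1)
```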

Limitations of Unimodal Interaction

Not a natural way of human interaction.
Usually designed for the average user.
Fails to cater for the needs of diverse categories of people.
Difficult to use for disabled, illiterate and untrained people.
Cannot provide a universal interface.
More error prone.

III. MULTIMODAL HCI

III.1 Need for Multimodal HCI Systems

To enhance error avoidance and ease of error resolution.
To accommodate a wide range of users, tasks and environmental situations.
To cater for individual differences, such as permanent or temporary handicaps.
To prevent overuse of any individual mode during extended computer usage.
To permit the flexible and improved use of input modes, including alternation and integrated use.

Multimodal systems are systems that combine two or more modalities, where the modalities refer to the ways in which the system responds to inputs, i.e. the communication channels. Because of the problems faced by unimodal systems, a combination of different modalities helps solve them: when one modality is inaccessible, a task can be completed with the others. Modalities can be combined as redundant or complementary, depending on the number and type of modalities integrated. An interesting aspect of multimodality is the collaboration of different modalities to assist recognition: for example, lip movement tracking (visual-based) can help speech recognition methods (audio-based), and speech recognition methods (audio-based) can assist command acquisition in gesture recognition (visual-based).

Wikipedia definition: Multimodal interaction provides the user with multiple modes of interfacing with a system beyond the traditional keyboard and mouse input/output. The most common such interface combines a visual modality (e.g. a display, keyboard and mouse) with a voice modality (speech recognition for input; speech synthesis and recorded audio for output). However, other modalities, such as pen-based input or haptic input/output, may be used. Multimodal user interfaces are a research area in human computer interaction.

Fig. 7. Architecture of multimodal HCI.

In the work of Frangeskides and Lanitis, a system that uses speech and visual input to perform HCI tasks was proposed. In their system, visual input is used for cursor control through face tracking, which is achieved by tracking the eyes and nose: the regions generated around the eyes and nose are used to calculate horizontal and vertical projections for tracking, enabling cursor control. The difference of the face location from its original position is mapped into cursor movement towards the direction of the movement. In their work, cursor control is enabled when the "Voice command" mode is activated; in that case cursor control is done by uttering commands from one of the five groups ("Move cursor") they designed. Although cursor control using voice commands alone is possible, it lacks precision and hence needs to be integrated with face tracking in order to reach a widget precisely. For text entry, an on-screen keyboard (a Windows operating system utility) was used: once the keyboard was activated, keys were entered by using head tracking to move the cursor over the keys, and speech was then used to invoke mouse click events when the system was in "Sound click" mode (or cursor movement was used in "Voice command" mode).
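A minimal late-fusion sketch in the spirit of such speech-plus-vision systems follows: each modality proposes commands with confidences, and the fused decision favors agreement between them. The weights and the dictionary structure are illustrative assumptions, not the cited system's design.

```python
# Toy late fusion of two modality recognizers.
def fuse(speech: dict, vision: dict,
         w_speech: float = 0.6, w_vision: float = 0.4) -> str:
    """speech/vision map candidate commands to confidences in [0, 1];
    return the command with the highest weighted combined score."""
    commands = set(speech) | set(vision)
    scores = {c: w_speech * speech.get(c, 0.0) + w_vision * vision.get(c, 0.0)
              for c in commands}
    return max(scores, key=scores.get)

# An imprecise voice command refined by face tracking (direction):
print(fuse({"move_left": 0.5, "click": 0.4}, {"move_left": 0.9}))  # move_left
```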
III.2 Benefits of Multimodal Interfaces

Efficiency follows from using each modality for the task it is best suited to.
Redundancy increases the likelihood that communication proceeds smoothly, because there are many simultaneous references to the same issue.
Perceptibility increases when tasks are facilitated in a spatial context.
Naturalness follows from the free choice of modalities and may result in human-computer communication that is close to human-human communication.
Accuracy increases when another modality can indicate an object more accurately than the main modality.
Synergy occurs when one channel of communication can help refine imprecision, modify the meaning, or resolve ambiguities in another channel.

III.3 Applications of Multimodal HCI

T-Com access point
Mobile telecommunication
Hands-free devices for computers
Use in a car
Interactive information panels
Smart video conferencing
Intelligent homes/offices
Driver monitoring
Intelligent games
E-commerce
Helping people with disabilities

IV. CONCLUSION

In this paper I gave an overview of the discipline of unimodal and multimodal human computer interaction. An important part of this survey is the detailed treatment of audio-, vision- and sensor-based unimodal HCI, with different examples for each category. Finally, multimodal HCI was explained in detail, giving an idea of the need for, and benefits of, multimodal interaction.

REFERENCES

[1] J. A. A. Saleh, Integrated Framework Design for Intelligent Human Machine Interaction, Master of Applied Science in Electrical and Computer Engineering, Waterloo, Ontario, Canada.
[2] M. P. Kesarkar, Feature Extraction for Speech Recognition, M.Tech. Credit Seminar Report, Electronic Systems Group, EE Dept., IIT Bombay, November 2003.
[3] K. Bonsor and R. Johnson, How Face Recognition Systems Work, web article.
[4] A. Jaimes and N. Sebe, Multimodal Human Computer Interaction: A Survey, IEEE International Workshop on Human Computer Interaction in conjunction with ICCV 2005, Beijing, China, Oct. 21.
[5] A. Jaimes and N. Sebe, Multimodal Human Computer Interaction: A Survey, Computer Vision and Image Understanding 108 (2007).
[6] R. Gupta, Human Computer Interaction: A Modern Overview, Int. J. Computer Technology & Applications, Vol. 3 (5).
[7] K.-N. Kim and R. S. Ramakrishna, Vision-Based Eye-Gaze Tracking for Human Computer Interface.
[8] M. Černý, Gaze Tracking Systems for Human-Computer Interface, Number 5, Volume VI, December.
[9] R. Liang and M. Ouhyoung, A Real-Time Continuous Gesture Recognition System for Sign Language, IEEE International Conference on Automatic Face and Gesture Recognition, Japan.
[10] W. Prevezer and R. Fidler, An Overview of the Approach at Sutherland House School, document written and updated May.
[11] D. Reynolds, Automatic Speaker Recognition Using Gaussian Mixture Speaker Models.
[12] M. Simner, Pen-Based Human-Computer Interaction, in Handwriting and Drawing Research: Basic and Applied Issues, A. Thomassen (Ed.), IOS Press.
[13] R. Raisamo, Multimodal Human-Computer Interaction: A Constructive and Empirical Study, academic dissertation.
[14] S. C. Mehrotra and R. S. Deore, Past, Present and Future of Human Computer Interaction, International Journal of Engineering Research & Technology (IJERT), Vol. 1, Issue 6, August.
[15] A. Dix and J. Finlay, Human Computer Interaction, Third Edition, Chapter 3.


More information

CSE 165: 3D User Interaction. Lecture #14: 3D UI Design

CSE 165: 3D User Interaction. Lecture #14: 3D UI Design CSE 165: 3D User Interaction Lecture #14: 3D UI Design 2 Announcements Homework 3 due tomorrow 2pm Monday: midterm discussion Next Thursday: midterm exam 3D UI Design Strategies 3 4 Thus far 3DUI hardware

More information

Air Marshalling with the Kinect

Air Marshalling with the Kinect Air Marshalling with the Kinect Stephen Witherden, Senior Software Developer Beca Applied Technologies stephen.witherden@beca.com Abstract. The Kinect sensor from Microsoft presents a uniquely affordable

More information

Face Detection: A Literature Review

Face Detection: A Literature Review Face Detection: A Literature Review Dr.Vipulsangram.K.Kadam 1, Deepali G. Ganakwar 2 Professor, Department of Electronics Engineering, P.E.S. College of Engineering, Nagsenvana Aurangabad, Maharashtra,

More information

The Mixed Reality Book: A New Multimedia Reading Experience

The Mixed Reality Book: A New Multimedia Reading Experience The Mixed Reality Book: A New Multimedia Reading Experience Raphaël Grasset raphael.grasset@hitlabnz.org Andreas Dünser andreas.duenser@hitlabnz.org Mark Billinghurst mark.billinghurst@hitlabnz.org Hartmut

More information

International Journal of Scientific & Engineering Research, Volume 7, Issue 12, December ISSN IJSER

International Journal of Scientific & Engineering Research, Volume 7, Issue 12, December ISSN IJSER International Journal of Scientific & Engineering Research, Volume 7, Issue 12, December-2016 192 A Novel Approach For Face Liveness Detection To Avoid Face Spoofing Attacks Meenakshi Research Scholar,

More information

AUTOMATIC SPEECH RECOGNITION FOR NUMERIC DIGITS USING TIME NORMALIZATION AND ENERGY ENVELOPES

AUTOMATIC SPEECH RECOGNITION FOR NUMERIC DIGITS USING TIME NORMALIZATION AND ENERGY ENVELOPES AUTOMATIC SPEECH RECOGNITION FOR NUMERIC DIGITS USING TIME NORMALIZATION AND ENERGY ENVELOPES N. Sunil 1, K. Sahithya Reddy 2, U.N.D.L.mounika 3 1 ECE, Gurunanak Institute of Technology, (India) 2 ECE,

More information

SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY

SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY Sidhesh Badrinarayan 1, Saurabh Abhale 2 1,2 Department of Information Technology, Pune Institute of Computer Technology, Pune, India ABSTRACT: Gestures

More information

BIOMETRICS BY- VARTIKA PAUL 4IT55

BIOMETRICS BY- VARTIKA PAUL 4IT55 BIOMETRICS BY- VARTIKA PAUL 4IT55 BIOMETRICS Definition Biometrics is the identification or verification of human identity through the measurement of repeatable physiological and behavioral characteristics

More information

DATA GLOVES USING VIRTUAL REALITY

DATA GLOVES USING VIRTUAL REALITY DATA GLOVES USING VIRTUAL REALITY Raghavendra S.N 1 1 Assistant Professor, Information science and engineering, sri venkateshwara college of engineering, Bangalore, raghavendraewit@gmail.com ABSTRACT This

More information

Image Processing and Particle Analysis for Road Traffic Detection

Image Processing and Particle Analysis for Road Traffic Detection Image Processing and Particle Analysis for Road Traffic Detection ABSTRACT Aditya Kamath Manipal Institute of Technology Manipal, India This article presents a system developed using graphic programming

More information

Application Areas of AI Artificial intelligence is divided into different branches which are mentioned below:

Application Areas of AI   Artificial intelligence is divided into different branches which are mentioned below: Week 2 - o Expert Systems o Natural Language Processing (NLP) o Computer Vision o Speech Recognition And Generation o Robotics o Neural Network o Virtual Reality APPLICATION AREAS OF ARTIFICIAL INTELLIGENCE

More information

Benefits of using haptic devices in textile architecture

Benefits of using haptic devices in textile architecture 28 September 2 October 2009, Universidad Politecnica de Valencia, Spain Alberto DOMINGO and Carlos LAZARO (eds.) Benefits of using haptic devices in textile architecture Javier SANCHEZ *, Joan SAVALL a

More information

Intelligent interaction

Intelligent interaction BionicWorkplace: autonomously learning workstation for human-machine collaboration Intelligent interaction Face to face, hand in hand. The BionicWorkplace shows the extent to which human-machine collaboration

More information

- Basics of informatics - Computer network - Software engineering - Intelligent media processing - Human interface. Professor. Professor.

- Basics of informatics - Computer network - Software engineering - Intelligent media processing - Human interface. Professor. Professor. - Basics of informatics - Computer network - Software engineering - Intelligent media processing - Human interface Computer-Aided Engineering Research of power/signal integrity analysis and EMC design

More information

Interactive Coffee Tables: Interfacing TV within an Intuitive, Fun and Shared Experience

Interactive Coffee Tables: Interfacing TV within an Intuitive, Fun and Shared Experience Interactive Coffee Tables: Interfacing TV within an Intuitive, Fun and Shared Experience Radu-Daniel Vatavu and Stefan-Gheorghe Pentiuc University Stefan cel Mare of Suceava, Department of Computer Science,

More information

Human-Computer Interaction: Preamble and Future in 2020.

Human-Computer Interaction: Preamble and Future in 2020. Human-Computer Interaction: Preamble and Future in 2020. Ashish Palsingh J. G. Institute of Computer Applications Gujarat University, Ahmedabad, India. ranvirsingh1069@rediffmail.com Ekta Palsingh Shree

More information

Combined Approach for Face Detection, Eye Region Detection and Eye State Analysis- Extended Paper

Combined Approach for Face Detection, Eye Region Detection and Eye State Analysis- Extended Paper International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 10, Issue 9 (September 2014), PP.57-68 Combined Approach for Face Detection, Eye

More information

Session 2: 10 Year Vision session (11:00-12:20) - Tuesday. Session 3: Poster Highlights A (14:00-15:00) - Tuesday 20 posters (3minutes per poster)

Session 2: 10 Year Vision session (11:00-12:20) - Tuesday. Session 3: Poster Highlights A (14:00-15:00) - Tuesday 20 posters (3minutes per poster) Lessons from Collecting a Million Biometric Samples 109 Expression Robust 3D Face Recognition by Matching Multi-component Local Shape Descriptors on the Nasal and Adjoining Cheek Regions 177 Shared Representation

More information

Robust Hand Gesture Recognition for Robotic Hand Control

Robust Hand Gesture Recognition for Robotic Hand Control Robust Hand Gesture Recognition for Robotic Hand Control Ankit Chaudhary Robust Hand Gesture Recognition for Robotic Hand Control 123 Ankit Chaudhary Department of Computer Science Northwest Missouri State

More information

Virtual Tactile Maps

Virtual Tactile Maps In: H.-J. Bullinger, J. Ziegler, (Eds.). Human-Computer Interaction: Ergonomics and User Interfaces. Proc. HCI International 99 (the 8 th International Conference on Human-Computer Interaction), Munich,

More information

SPY ROBOT CONTROLLING THROUGH ZIGBEE USING MATLAB

SPY ROBOT CONTROLLING THROUGH ZIGBEE USING MATLAB SPY ROBOT CONTROLLING THROUGH ZIGBEE USING MATLAB MD.SHABEENA BEGUM, P.KOTESWARA RAO Assistant Professor, SRKIT, Enikepadu, Vijayawada ABSTRACT In today s world, in almost all sectors, most of the work

More information

The User Activity Reasoning Model Based on Context-Awareness in a Virtual Living Space

The User Activity Reasoning Model Based on Context-Awareness in a Virtual Living Space , pp.62-67 http://dx.doi.org/10.14257/astl.2015.86.13 The User Activity Reasoning Model Based on Context-Awareness in a Virtual Living Space Bokyoung Park, HyeonGyu Min, Green Bang and Ilju Ko Department

More information

ART 269 3D Animation The 12 Principles of Animation. 1. Squash and Stretch

ART 269 3D Animation The 12 Principles of Animation. 1. Squash and Stretch ART 269 3D Animation The 12 Principles of Animation 1. Squash and Stretch Animated sequence of a racehorse galloping. Photograph by Eadweard Muybridge. The horse's body demonstrates squash and stretch

More information

A SURVEY ON HAND GESTURE RECOGNITION

A SURVEY ON HAND GESTURE RECOGNITION A SURVEY ON HAND GESTURE RECOGNITION U.K. Jaliya 1, Dr. Darshak Thakore 2, Deepali Kawdiya 3 1 Assistant Professor, Department of Computer Engineering, B.V.M, Gujarat, India 2 Assistant Professor, Department

More information

Pinch-the-Sky Dome: Freehand Multi-Point Interactions with Immersive Omni-Directional Data

Pinch-the-Sky Dome: Freehand Multi-Point Interactions with Immersive Omni-Directional Data Pinch-the-Sky Dome: Freehand Multi-Point Interactions with Immersive Omni-Directional Data Hrvoje Benko Microsoft Research One Microsoft Way Redmond, WA 98052 USA benko@microsoft.com Andrew D. Wilson Microsoft

More information

The Control of Avatar Motion Using Hand Gesture

The Control of Avatar Motion Using Hand Gesture The Control of Avatar Motion Using Hand Gesture ChanSu Lee, SangWon Ghyme, ChanJong Park Human Computing Dept. VR Team Electronics and Telecommunications Research Institute 305-350, 161 Kajang-dong, Yusong-gu,

More information