Flexible Gesture Recognition for Immersive Virtual Environments


Matthias Deller, Achim Ebert, Michael Bender, and Hans Hagen
German Research Center for Artificial Intelligence, Kaiserslautern, Germany

Abstract

With powerful graphics hardware becoming affordable for everyone, there is an increasing tendency towards a new generation of user interfaces, with the focus shifting from traditional two-dimensional desktops to three-dimensional virtual environments. Therefore, there is a growing need for applicable immersive interaction metaphors to manipulate these environments. In our paper we propose a gesture recognition engine using an inexpensive data glove with integrated 6 DOF tracking. Despite noisy input data from the glove, we are able to achieve reliable and flexible gesture recognition. New gestures can be trained easily, and existing gestures can be individually adapted for different users.

1 Introduction

In recent years, a new direction has been emerging in the development of computer applications. With powerful graphics hardware becoming available to everyone at reasonable prices, there is an increasing tendency to enhance traditional desktops and applications by making use of all three dimensions, thereby replacing the common desktop metaphor with a virtual environment. One of the main advantages of these virtual environments is described by the term immersion, meaning the lowering of barriers between human and computer. The user gets the impression of being part of the virtual scene and, ideally, is able to manipulate it as he would his real surroundings, without devoting conscious attention to the usage of an interface.

One reason why virtual environments are not yet as common as their advantages would suggest might be the lack of adequate interfaces for interacting with immersive environments, as well as of methods and paradigms for intuitively manipulating three-dimensional settings. More recently, some devices especially designed for controlling three-dimensional environments have appeared, but for the most part they are not very intuitive, and in most cases they still demand additional steering [1].

The most natural way for humans to manipulate their surroundings is, of course, by simply using their hands. Hands are used to grab and move objects, or to manipulate them in other ways. They are used to point at, indicate, or mark objects of interest. Finally, hands can be used to communicate with others and to state intentions by making postures or gestures. In most cases, this is done without having to consciously think about it, and thus without interrupting other tasks the person may be involved with at the same time. Therefore, the most promising approach to minimizing the cognitive load required for learning and using a user interface in a virtual environment is to employ a gesture recognition engine that lets the user interact with the application in a natural way, simply by using his hands in ways he is already accustomed to.

2 Related Work

At the moment, research on gesture recognition is mainly focused on visual capturing and interpretation of gestures. Either the user or his hands are captured by cameras so that their position or the posture of the hands can be determined with appropriate methods. There are several different strategies to achieve this goal.

In some cases, these techniques are non-invasive, so the user is not required to wear any special equipment or clothing. Of course, this makes it hard to determine which parts of the picture belong to the background and which parts are the hands. Some approaches aim to solve this segmentation problem by imposing requirements on the user's surroundings, such as a special uniform background against which the user's hand can be distinguished [2][3]. Others do not need a specially prepared, but still a static, background [4]. Another possibility is the use of the infrared spectrum to better distinguish the hand from its surroundings [5]. Newer approaches use a combination of these methods to enhance the segmentation process and find the user's fingers in front of varying backgrounds [6]. Still other authors aim to simplify the segmentation process by introducing restrictions, often by requiring the user to wear marked gloves [7][8], or by restricting the capturing process to a single, accordingly prepared setting [9].

Although promising, all of these approaches share the drawback that they place special demands on the surroundings in which they are used. They require uniform, steady lighting conditions and high contrast in the captured pictures, and they have difficulties when the user's motions are so fast that his hands appear blurred in the captured frames. Apart from that, these procedures demand a lot of computing power, as well as special and often costly hardware. In addition, the cameras for capturing the user have to be firmly installed and adjusted, so these devices are bound to one place, and the user has to stay within a predefined area to allow reliable gesture recognition. Often, a separate room has to be used to enable the recognition of the user's gestures.

Another possibility to capture gestures is the use of special interface devices called data gloves [10][11]. The handicap of professional data gloves, however, is that they are not per se equipped with positioning sensors. This limits the range of detectable gestures to static postures, unless further hardware is applied. The user has to wear additional gear to enable determination of the position and orientation of his hand, often with electromagnetic tracking devices like the Ascension Flock of Birds [12]. These devices allow a relatively exact determination of the hand's position as well as its orientation, if mounted at an appropriate location. The problem with electromagnetic tracking, however, is that it requires the user to wear at least one extra sensor attached to the system by cable. Additionally, electromagnetic tracking devices have to be firmly installed and calibrated, and they are very prone to errors if there are metallic objects in the vicinity of the tracking system.

So, although there are several promising approaches to using gestures to enhance interaction with (mobile) computers, these possibilities are not yet serviceable for real-time gesture interaction. They demand very specialized and therefore expensive hardware, require the user to wear special clothing or stand in front of a fixed background, and use a lot of computing power to determine the performed gestures. Further, almost all of these techniques are restricted to one especially prepared setting, because the setup has to be installed in and calibrated to a designated surrounding.
Thus, these solutions are not feasible for use in a normal working environment, especially if they are to be integrated into more complex applications to allow real-time interaction on the spot. Consequently, there is a need for a gesture recognition engine that is flexible enough to be adapted to varying conditions, like alternating users or different hardware, possibly even portable devices, yet fast and powerful enough to enable reliable recognition of a variety of gestures without hampering the performance of the actual application. Similar to the introduction of the mouse as an adequate interaction device for graphical user interfaces, gesture recognition interfaces should be easy to define and integrate, either for interaction in three-dimensional settings or as a means of interacting with the computer in a more natural way, without having to use an abstract interface.

3 Applied Hardware

The glove hardware we used to realize and test our gesture recognition engine was a P5 Glove from Essential Reality [13]. The P5 is a consumer data glove originally designed as a game controller. It features five bend sensors to track the flexion of the wearer's fingers, as well as an infrared-based optical tracking system, allowing computation of the glove's position and orientation without the need for additional hardware. The system consists of a stationary base station housing the infrared receptors that enable the spatial tracking.

The glove itself is connected to the base station by a cable and consists of a plastic housing strapped to the back of the user's hand, with five bendable strips attached to the fingers to determine the bend of each individual finger. In addition, the top of the housing carries four buttons that can be used to provide additional functionality. Position and orientation data are obtained with the help of reflectors mounted at prominent positions on the glove housing. Depending on how many of these reflectors are visible to the base station and at which positions the visible reflectors are registered, the glove's driver is able to calculate the orientation and position of the glove. The tracking is cut short if the glove is aligned such that the back of the user's hand is angled away from the receptor, so that too many of the reflectors are concealed. Yet, since the P5 is intended to be used sitting in front of a desktop computer, most gestures can be adequately recognized using this hardware.

Figure 1: Our demonstration setup: 2D display, stereoscopic display and data glove.

During our work with the P5, we learned that the calculated values for the flexion of the fingers were quite accurate, while the spatial tracking data was much less reliable. The position information was fairly dependable, whereas the orientation values of the glove were, depending on lighting conditions, sometimes very unstable. Because of this, additional filtering mechanisms had to be applied to obtain sufficiently reliable values. The low price of about 50 Euros was one reason we chose the P5 for our gesture recognition, because it shows that serviceable interaction hardware for virtual environments can be realized at a cost that makes it an option for the normal consumer market. The other reason for our choice was to show that our recognition engine is powerful and flexible enough to enable reliable gesture recognition even when used with inexpensive gamer hardware.

4 Posture and Gesture Recognition

A major problem for the recognition of gestures, especially when using visual tracking, is the high amount of computational power required to determine the most likely match to the gesture carried out by the user. Especially when gesture recognition is to be integrated into running applications that at the same time have to render a virtual environment and manipulate it according to the recognized gestures, this is a task that cannot be accomplished on a single average consumer PC. We aim to achieve reliable real-time recognition that is capable of running on any fairly up-to-date workplace PC and can easily be integrated into normal applications. Like Bimber's fuzzy-logic approach [14], we use a set of gestures that have been learned by performing them to determine the most likely match. However, unlike the aforementioned method, our system does not define gestures as motion over a certain period of time, but as a sequence of postures made at specific positions and with specific orientations of the user's hand. Thus, the relevant data for each posture is mainly given by the flexion of the individual fingers. However, for some postures the orientation of the hand may be more or less significant. While some gestures mean the same regardless of the hand's orientation, for others the orientation data is much more relevant; for example, the meaning of a fist with outstretched thumb can differ significantly depending on whether the thumb points upward or downward.
Due to this fact, the postures for our recognition engine are composed of the flexion values of the fingers, the orientation data of the hand, and an additional value indicating the relevance of the orientation for the posture. As mentioned before, the required postures are taught to the system by simply performing them and then associating an identifier with the posture. This approach makes it extremely easy to teach the system new postures that may be required for specific applications. Alternatively, existing postures can be adapted to specific users. To do so, the posture in question is selected and performed several times by the user.
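To make this representation concrete, the following minimal Python sketch shows one way such posture records, the training-by-averaging step described below, and a loadable posture library could look. All names (Posture, PostureLibrary, train_posture) and the JSON file format are our own illustration; the paper does not specify its data structures or file format.

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class Posture:
        name: str                  # identifier associated with the posture
        flexion: list[float]       # five finger-bend values, one per finger
        orientation: list[float]   # hand orientation (e.g. yaw, pitch, roll)
        orientation_weight: float  # 0 = orientation irrelevant, 1 = decisive

    def train_posture(name, samples, orientation_weight):
        """Average several recorded performances into one posture definition."""
        n = len(samples)
        flexion = [sum(s[0][i] for s in samples) / n for i in range(5)]
        orientation = [sum(s[1][i] for s in samples) / n for i in range(3)]
        return Posture(name, flexion, orientation, orientation_weight)

    class PostureLibrary:
        def __init__(self):
            self.postures = {}     # maps identifiers to Posture records

        def add(self, posture):
            self.postures[posture.name] = posture

        def save(self, path):
            # persist the library as a gesture definition file (JSON assumed)
            with open(path, "w") as f:
                json.dump([asdict(p) for p in self.postures.values()], f, indent=2)

        @classmethod
        def load(cls, path):
            lib = cls()
            with open(path) as f:
                for entry in json.load(f):
                    lib.add(Posture(**entry))
            return lib

Saving and loading such a library corresponds to the gesture definition files mentioned below, which allow different posture definitions per user.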

The system captures the different variations of the posture and determines the resulting averaged posture definition. In this manner, it is possible to create a flexible collection of different postures, termed a posture library, with little expenditure of time. This library can be saved and loaded in the form of a gesture definition file, making it possible for the same application to have different posture definitions for different users and allowing an on-line change of the user context.

4.1 Recognition Process

Our recognition engine is subdivided into two components: the data acquisition and the gesture manager. The data acquisition runs as a separate thread and constantly checks the data received from the glove for possible matches from the gesture manager. As mentioned before, the position and especially the orientation data received from the P5 can be very noisy, so they have to be appropriately filtered and smoothed to enable a sufficiently reliable matching to the known postures. First, the tracking data is piped through a deadband filter to reduce the chance of jumping error values in the tracked data: alterations in the position or orientation data that exceed a given limit are discarded as improbable and replaced with the previous values. The resulting data is then smoothed by a dynamically adjusting average filter. The result is correct enough to provide a good basis for the matching process of the gesture manager.

If the gesture manager finds a likely match to the provided data in its posture library, this posture is marked as a candidate. To lower the possibility of misrecognitions and false positives, a posture is only accredited as recognized when it is held for an adjustable minimum time span. Our tests showed that values between 300 and 600 milliseconds allow reliable recognition without forcing the user to hold the posture for too long. Once a posture is recognized, a PostureChanged event is sent to the application that started the acquisition thread. To enable the application to use the recognized posture for further processing, the identifier of the posture as well as the identifier of the previous posture is provided, facilitating the sequencing of postures into a more complex gesture. Furthermore, the position and orientation of the glove are provided.

The acquisition thread also keeps track of the glove's movement. If the changes in the position or orientation data of the glove exceed an adjustable threshold, a GloveMove event is fired. This event is similar to common MouseMove events, providing both the start and end values of the position and orientation data of the movement. Finally, to take into account hardware that possesses additional buttons, like the P5, the data acquisition thread also monitors the state of these buttons and generates corresponding ButtonPressed and ButtonReleased events, providing the designated number of the button.
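As an illustration of the filtering stage described above, here is a minimal Python sketch of a deadband filter followed by a dynamically adjusting average. The class name, the window-size heuristic, and all parameter values are our own assumptions; the paper gives no implementation details.

    import numpy as np

    class TrackingFilter:
        """Two-stage filter: deadband rejection, then an adjusting average."""

        def __init__(self, max_jump=20.0, max_window=8):
            self.max_jump = max_jump      # largest plausible change per sample
            self.max_window = max_window  # upper bound on the averaging window
            self.history = []

        def update(self, sample):
            sample = np.asarray(sample, dtype=float)
            if self.history:
                # deadband stage: a jump beyond the limit is discarded as an
                # improbable error value and replaced with the previous value
                if np.linalg.norm(sample - self.history[-1]) > self.max_jump:
                    sample = self.history[-1]
            self.history.append(sample)

            # averaging stage: (assumption) use a smaller window while the
            # values change quickly, a larger one while the hand is still
            recent = np.array(self.history[-self.max_window:])
            spread = float(np.ptp(recent, axis=0).max()) if len(recent) > 1 else 0.0
            window = max(2, int(self.max_window / (1.0 + spread / self.max_jump)))
            return np.mean(np.array(self.history[-window:]), axis=0)

In the acquisition thread, one such filter would be fed the position samples and a second one the orientation samples, once per glove update.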
It is important to note that although the data acquisition we implemented was fitted to the Essential Reality P5, it can easily be adapted to any other data glove, either for mere posture recognition or in combination with an additional 6-degrees-of-freedom tracking device like the Ascension Flock of Birds [12] to achieve full gestural interaction. To test this, we adapted our gesture recognition to a professional data glove from Fifth Dimension Technologies [15], although without any tracking, so only static postures were supported. Nevertheless, the recognition of these postures was fast and, because of the more sophisticated sensors of the 5DT product, very reliable.

4.2 The Gesture Manager

The gesture manager is the principal part of the recognition engine, maintaining the list of known postures as well as providing multiple functions to manage the posture library. As soon as the first posture is added to the library, or an existing library is loaded, the gesture manager begins matching the data received from the data acquisition thread against the stored datasets. This is done by first looking for the best matching finger constellation: the bend values of the fingers are interpreted as five-dimensional vectors, and for each posture definition the distance to the current data is calculated. If this distance fails to stay within an adjustable minimum recognition distance, the posture is discarded. If a posture matches the data to a relevant degree, the orientation data is compared to the current values in a likewise manner. Depending on whether this distance exceeds another adjustable limit, the likelihood of a match is lowered or raised according to the orientation quota associated with the corresponding posture dataset.
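The following sketch illustrates this two-stage matching against the posture records from the earlier sketch. The scoring formula and the threshold values are our own assumptions, not taken from the paper.

    import numpy as np

    def match_posture(postures, flexion, orientation,
                      max_flex_dist=0.5, max_orient_dist=45.0):
        """Return the name of the best-matching posture, or None.

        `postures` is an iterable of Posture records as sketched earlier;
        the thresholds are illustrative defaults, not values from the paper.
        """
        best_name, best_score = None, 0.0
        for posture in postures:
            # stage 1: distance between the five-dimensional bend vectors
            flex_dist = np.linalg.norm(np.subtract(flexion, posture.flexion))
            if flex_dist > max_flex_dist:
                continue  # outside the minimum recognition distance
            score = 1.0 - flex_dist / max_flex_dist

            # stage 2: compare the orientation in a likewise manner and
            # raise or lower the likelihood according to the posture's
            # orientation relevance (the "orientation quota")
            orient_dist = np.linalg.norm(np.subtract(orientation, posture.orientation))
            fit = 1.0 if orient_dist <= max_orient_dist else -1.0
            score += 0.5 * posture.orientation_weight * fit

            if score > best_score:
                best_name, best_score = posture.name, score
        return best_name

A call such as match_posture(library.postures.values(), flexion, orientation) would then be made for each filtered glove sample, with the hold-time check applied on top of the returned candidate.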

The gesture manager also provides several means to adjust parameters at run time. The recognition sensitivity can be changed, new postures can be added, existing ones can be adapted, and new posture libraries can be loaded.

4.3 Recognition of Gestures

As mentioned before, we regard gestures as sequences of successive postures. With the help of the PostureChanged events, our recognition engine provides an extremely flexible way to track gestures performed by the user. The recognition of single postures, like letters of the American Sign Language, is as easily possible as the recognition of more complex, dynamic gestures, which is done by tracking the sequence of performed postures. For example, consider the detection of a "click" gesture. Tests with different users indicated that an intuitive gesture for this task is pointing at the object and then tapping at it with the index finger. To detect this gesture, one would define a pointing posture with outstretched index finger and thumb and the other fingers flexed, and a tapping posture with a half-bent index finger. All that remains to do in the application is to check for two successive PostureChanged events indicating a change from the pointing to the tapping posture, and then back to pointing (a sketch of such a sequence check is given below). In this manner, almost any desired gesture can quickly be implemented and recognized.

5 Implementation and Results

We have evaluated our gesture recognition engine by enhancing a demo application representing a virtual document space with gesture interaction. In the implemented virtual environment, the user can manipulate various objects representing documents and trigger specific actions by performing a corresponding gesture. In order to enhance the degree of immersion, we used a particular demonstration setup, shown in Figure 1. To allow the user a stereoscopic view of the scene, we used a special 3D display device, the SeeReal C-I [16]. To compensate for the loss in resolution on the stereoscopic monitor, we used an additional TFT display to also show a higher-resolution view of the scene. A testament to the speed of our recognition engine is the fact that we were able to realize the application logic, including the rendering of three different perspectives (one for each eye, another one for the non-stereoscopic display) and the tracking and recognition of gestures, on a normal consumer-grade computer in real time.

Our demo scene, shown in Figure 3, consists of a virtual desk on which different documents are arranged randomly. In the background of the scene, a wall containing a pin board and a calendar can be seen. Additionally, the user's hand is represented by a hand avatar, showing its location in the scene as well as the hand's orientation and the flexion of the fingers.

Figure 3: Our demonstration application: a virtual desktop with several gestural interaction possibilities.

The user was given multiple means to interact with this environment. First, he could rearrange the documents on the table by simply moving his hand avatar over a chosen document, then grabbing it by making a fist. He could then move the selected document around and drop it at the desired location by opening his fist, releasing his grip on the document. Another possibility was to have a closer look at the calendar or the pin board by moving his hand in front of the object and pointing at it.
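The click gesture from Section 4.3 can be detected with a small state machine fed by PostureChanged events. The sketch below shows how the tap selection used in this demo could be wired up; the handler signature and the posture identifiers are hypothetical, not the paper's API.

    # Minimal sketch of the click detection from Section 4.3: a pointing
    # posture, a tap (half-bent index finger), and a return to pointing.
    # Event and posture names are illustrative, not the paper's API.

    CLICK_SEQUENCE = ["point", "tap", "point"]

    class ClickDetector:
        def __init__(self, on_click):
            self.on_click = on_click   # callback run when the gesture completes
            self.progress = 0          # how much of CLICK_SEQUENCE has been seen

        def on_posture_changed(self, previous, current, position):
            # advance through the expected posture sequence
            if current == CLICK_SEQUENCE[self.progress]:
                self.progress += 1
                if self.progress == len(CLICK_SEQUENCE):
                    self.on_click(position)   # point -> tap -> point completed
                    self.progress = 1         # final "point" can start a new click
            else:
                # unexpected posture: restart, counting a fresh "point" if present
                self.progress = 1 if current == CLICK_SEQUENCE[0] else 0

    detector = ClickDetector(on_click=lambda pos: print("click at", pos))
    # wired to the engine's PostureChanged events, for example:
    detector.on_posture_changed("open", "point", (0, 0, 0))
    detector.on_posture_changed("point", "tap", (0, 0, 0))
    detector.on_posture_changed("tap", "point", (0, 0, 0))   # triggers the click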
Additionally, there were several possibilities to interact with specific documents. To select a document, the user had to move his hand over it and then tap on it in the way described earlier. Once a document was selected, it moved to the front of the scene, allowing a closer look at the cover page. The user then had the choice between putting the document back in its place by performing a dropping gesture (closing, then opening his hand), or opening the document.

To open it, he had to grab it in the same way (by making a fist), then turn his hand around and open it, spreading his fingers with his palm facing upward. Next, the user was able to browse through the document by making a pointing posture and tilting his hand to the right or left to browse forward or backward. We had several users test the demonstration environment, moving documents and browsing through them. Apart from initial difficulties due to unfamiliarity with the glove hardware, after a short while most users were able to use the different gestures in a natural way, with only few adaptations of the posture definitions to the individual users.

6 Future Work

One of our next endeavours will be to integrate artificial intelligence methods to allow automatic adaptation of the generic posture libraries to individual users, allowing a smoother recognition of their gestures while they interact with the demonstration environment. Furthermore, we plan to verify our gesture recognition engine with different types of hardware, for instance using a professional data glove with additional finger sensors in combination with position and orientation data acquired from an electromagnetic tracking system. Concerning the system itself, we plan to add the possibility of using individual recognition boundaries for each posture definition, as well as an automatic adjustment of these boundaries during the training of the posture, dependent on the accuracy with which the posture is repeated.

7 Conclusions

In this paper we presented our prototype of a flexible and powerful gesture recognition engine, allowing gesture interaction with a variety of possible hardware devices and combinations thereof. Gestures can rapidly and easily be defined as sequences of successive postures. These postures are trained to the system by simply performing them while wearing the designated glove hardware. Our engine can easily be integrated into any desired application and is capable of providing a fast and reliable gesture recognition interface on standard consumer computers, with the possibility of on-line changes of user contexts and gesture collections.

References

[1] Ciger J., Gutierrez M., Vexo F., Thalmann D.: The Magic Wand. Proceedings of the 19th Spring Conference on Computer Graphics.
[2] Quek F., Mysliwiec T., Zhao M.: Finger Mouse: A Freehand Pointing Interface. Proceedings of the International Conference on Automatic Face and Gesture Recognition, Zürich.
[3] Lien C., Huang C.: Model-Based Articulated Hand Motion Tracking for Gesture Recognition. Image and Vision Computing, vol. 16, February.
[4] Appenzeller G., Lee J., Hashimoto H.: Building Topological Maps by Looking at People: An Example of Cooperation between Intelligent Spaces and Robots. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.
[5] Rehg J., Kanade T.: DigitEyes: Vision-Based Human Hand Tracking. Technical Report CMU-CS, School of Computer Science, Carnegie Mellon University.
[6] von Hardenberg C., Bérard F.: Bare-Hand Human-Computer Interaction. Proceedings of the ACM Workshop on Perceptive User Interfaces, Orlando.
[7] Starner T., Weaver J., Pentland A.: A Wearable Computer Based American Sign Language Recognizer. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[8] Hienz H., Groebel K., Offner G.: Real-Time Hand-Arm Motion Analysis Using a Single Video Camera. Proceedings of the International Conference on Automatic Face and Gesture Recognition, Killington.
[9] Crowley J., Bérard F., Coutaz J.: Finger Tracking as an Input Device for Augmented Reality. Proceedings of the International Conference on Automatic Face and Gesture Recognition, Zürich.
[10] Takahashi T., Kishino F.: Hand Gesture Coding Based on Experiments Using a Hand Gesture Interface Device. ACM SIGCHI Bulletin, April.
[11] Huang T. S., Pavlovic V. I.: Hand Gesture Modeling, Analysis, and Synthesis. Proceedings of the International Conference on Automatic Face and Gesture Recognition, Zürich.
[12] Ascension Products, Flock of Birds. URL: tech.com/products/flockofbirds.php
[13] The P5 Glove Homepage. URL:
[14] Bimber O.: Continuous 6DOF Gesture Recognition: A Fuzzy-Logic Approach. Proceedings of the 7th International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media (WSCG '99).
[15] Fifth Dimension Technologies Homepage. URL: 5dt.com/index.html
[16] SeeReal Technologies Homepage. URL:
