Adding Gestures to Ordinary Mouse Use: a New Input Modality for Improved Human-Computer Interaction

Luca Lombardi and Marco Porta
Dipartimento di Informatica e Sistemistica, Università di Pavia
Via Ferrata, Pavia (Italy)
luca.lombardi@unipv.it, marco.porta@unipv.it

Abstract

Although the way we interact with computers has remained substantially the same for twenty years (based on the keyboard, the mouse and the window metaphor), machine perception could be usefully exploited to enhance the human-computer communication process. In this paper we present a vision-based user interface where plain, static hand gestures performed near the mouse are interpreted as specific input commands. Our tests demonstrate that this new input modality does not interfere with ordinary mouse use and can speed up task execution, while not requiring too much attention from the user.

1. Introduction

Perceptive interfaces (also called perceptual when combined with multimedia features and other possible multimodal kinds of input [1]) provide the computer with perceptive capabilities. Through perception, the machine becomes able to sense its environment, acquiring implicit and/or explicit information about users and what happens around them. Vision, in particular, is a powerful non-invasive input channel for interfaces, and can considerably improve the quality of the interaction. Many prototypes of vision-based interfaces (VBIs) have been developed so far [2], definitely laying the foundations for vision technology as a new input paradigm.

In this paper we propose a VBI where plain, static hand gestures performed near the mouse are interpreted as specific input commands; in fact, we think that vision-based interfaces are best exploited when used in addition to the usual input devices, not instead of them. Hand gesture recognition has so far been exploited for PC input in several applications; among these, we may cite mouse replacement (e.g. [3]), drawing (e.g. [4]) and navigation in virtual environments (e.g. [5]). However, the near-the-mouse approach we are proposing, which we call GEM (Gesture-Enhanced Mouse), is a novelty in the multimodal interaction area.

To be really useful, a VBI should comply with some general rules. Firstly, it should speed up task execution compared to traditional input modalities (or, at worst, require the same time). Secondly, it should not demand too much attention from the user, who should be able to trigger commands without taking his/her mind off the task being carried out at that moment. Thirdly, gesture execution should not be too tiring. The gestures we have chosen for our vision-based interface meet the previous rules, but are also characterized by two further qualities: hand-mouse contact, so that almost no hand shifts are necessary to switch between normal mouse activity and gesture performance; and comfortable postures, which allow the user to perform gestures with very little effort.

2. System description

As an input device, we exploit a standard and very cheap webcam placed in front of the mouse (Figure 1a); the image size is 320x240 pixels. The gestures recognized by GEM are the following:

G1. Fingers close together: hand covering the mouse, with adjacent fingers touching each other (Figure 1b). This hand posture, while easy to achieve, does not interfere with normal mouse use; in fact, there are always at least small gaps between the fingers, which can be eliminated only intentionally.

G2. Horizontal hand: hand on the mouse, kept roughly parallel to the horizontal plane (Figure 1c). Since the hand usually wraps the mouse when using it, in this case too there is no risk of mistaking ordinary mouse use for the gesture.

G3. Horizontal fist: fist horizontally placed on the mouse (Figure 1d). The hand can switch to such a position very quickly, even while moving the mouse.

G4. Vertical fist: fist vertically placed on the mouse (Figure 1e). This is a slight variant of gesture G3, which can be achieved with almost no additional effort.

G5. Vertical hand: vertical open hand on the mouse (Figure 1f). Unlike gesture G4, here the hand is roughly open.

Figure 1. (a) Webcam position; (b-f) gestures recognized by GEM.

Each gesture has a specific meaning associated with it: once recognized, the particular hand posture triggers the execution of a corresponding command. According to user preferences or the context of use, several actions can be chosen to speed up or ease computer input. For example, in a Web browsing setting, gesture G1 could be used for an operation performed very often, such as continuous downward page scrolling; gesture G3 could be exploited for the opposite task, i.e. upward page scrolling; gesture G2 might trigger a page update; gesture G4 could be used to go to the home page; and gesture G5, at last, may be the equivalent of pressing the browser's back button. Command associations, however, can be modified according to one's preferences or needs.

It is to be noted that the action triggered by a gesture may be performed continually or only once, depending on the kind of command. Referring to the previous example, page scrolling is repeated as long as the hand holds the proper posture; conversely, a "go back" action is executed just one time, even if the hand stays in position: to trigger the command again, the hand must be moved in some way (so that another gesture, or no gesture at all, is recognized). Such behavior prevents one-shot commands from being unintentionally repeated.

3. Gesture recognition

To get a fast recognition algorithm, we have opted for a two-dimensional appearance-based approach. While simple, the chosen method assures good performance of the gesture identification subsystem. After a preliminary phase in which application parameters are initialized, gesture recognition occurs in two steps: firstly, a color test is performed to identify skin pixels; then, potential indicators of each gesture are searched for.

3.1. Stage 1: skin color detection

Skin colors tend to concentrate within narrow areas of certain chromatic spaces. In particular, we work in the YCbCr color space, which separates luminance (Y) from chrominance (Cb and Cr) and has the advantage of allowing bad illumination effects to be reduced (among others, see [6] for a description of different color-based skin detection techniques). Moreover, this model can be employed (almost) regardless of the user's skin, since differences in skin colors are mainly due to different concentrations of melanin, which affect the Y component only. Before using the system, the application work environment must be initialized according to light conditions and user skin color, by modifying the Cb and Cr values (as well as their ranges) through graphical sliders. Color calibration data are stored in a file, so that they may be subsequently reused (under similar illumination conditions), thus becoming a sort of identification card of the user. The color test phase produces a binary image, where white pixels correspond to skin-identified areas.
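As a minimal sketch of this stage (not the authors' actual code), the test below classifies each pixel of an RGB frame by converting it to CbCr and checking both components against the calibrated ranges; the conversion coefficients are the standard ITU-R BT.601 ones, and the class and parameter names are our own.

```java
/** Minimal sketch of the stage-1 color test; class and parameter names are ours. */
public class SkinDetector {
    // Calibrated chrominance ranges, normally set through the GUI sliders.
    private final int cbMin, cbMax, crMin, crMax;

    public SkinDetector(int cbMin, int cbMax, int crMin, int crMax) {
        this.cbMin = cbMin; this.cbMax = cbMax;
        this.crMin = crMin; this.crMax = crMax;
    }

    /** Returns a binary mask (true = skin pixel) for a 320x240 ARGB frame. */
    public boolean[][] detect(int[][] argb) {
        boolean[][] skin = new boolean[argb.length][argb[0].length];
        for (int y = 0; y < argb.length; y++) {
            for (int x = 0; x < argb[0].length; x++) {
                int r = (argb[y][x] >> 16) & 0xFF;
                int g = (argb[y][x] >> 8) & 0xFF;
                int b = argb[y][x] & 0xFF;
                // RGB -> CbCr (ITU-R BT.601); luminance Y is deliberately ignored,
                // since melanin differences mostly affect the Y component.
                double cb = 128 - 0.169 * r - 0.331 * g + 0.500 * b;
                double cr = 128 + 0.500 * r - 0.419 * g - 0.081 * b;
                skin[y][x] = cb >= cbMin && cb <= cbMax && cr >= crMin && cr <= crMax;
            }
        }
        return skin;
    }
}
```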
Since shadows may affect the color of some portions of the hand skin, we reduce the problem by filling horizontal gaps. In practice, if within a row there is a sequence of black pixels included between two white pixels, and such pixels are surrounded by a number of white pixels greater than a threshold (e.g. 5), then the black pixels are turned into white ones: in Figure 2, the gaps filled within the hand are indicated in grey. Such a simple and fast solution works well in most cases, especially for gesture G5, where shadows may deceive the recognition algorithm.

Figure 2. Skin color identification: (a) original image; (b) skin image.

Also, as clothes or background elements with colors similar to skin may decrease the recognition accuracy, we adopt solutions that help reduce the problem. For example, through sliders, the user can exclude upper/bottom/left/right image strips from the recognition process, so that they are not considered in the color test phase. In addition, it is possible to click on pixels of the original image to explicitly state that their colors (and similar ones, according to a defined tolerance) do not pertain to skin regions.
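The gap-filling rule can be written as a single pass over each row of the binary mask. The sketch below is our reading of the description above, not the original code; the maxGap cap is an assumption we add (the paper bounds gap width only in the G1 test), and all names are ours.

```java
/** Fills horizontal runs of non-skin pixels flanked by sufficiently wide skin runs. */
public class GapFiller {
    /**
     * Turns black (non-skin) gaps into white when the skin runs on both sides are
     * at least minFlank pixels long (e.g. 5). maxGap is our own safety cap.
     */
    public static void fillRows(boolean[][] skin, int minFlank, int maxGap) {
        for (boolean[] row : skin) {
            int x = 0;
            while (x < row.length) {
                if (!row[x]) { x++; continue; }
                int leftRun = runLength(row, x, true);          // skin run on the left
                int gapStart = x + leftRun;
                int gap = runLength(row, gapStart, false);      // candidate gap
                int rightRun = runLength(row, gapStart + gap, true);
                if (gap > 0 && gap <= maxGap && leftRun >= minFlank && rightRun >= minFlank) {
                    for (int i = gapStart; i < gapStart + gap; i++) row[i] = true;
                }
                x = gapStart;  // continue from the gap (now possibly filled)
            }
        }
    }

    private static int runLength(boolean[] row, int from, boolean value) {
        int n = 0;
        while (from + n < row.length && row[from + n] == value) n++;
        return n;
    }
}
```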

3.2. Stage 2: image segmentation

The first step in the recognition process is the search for a horizontal continuous line of skin pixels whose length is greater than a configurable threshold: we call such a line a horizontal feature. Indeed, to allow for possible noise in the source images, we admit some non-skin pixels on the horizontal feature, but only very few (5% at most). We use two reference thresholds, a long one, lth, and a short one, sth. From top to bottom, each row of the image is analyzed to identify a possible line with a number of skin pixels greater than lth or sth. Once a horizontal feature is found, we search, immediately below it, for another line with the same characteristics, so that spurious rows of skin pixels are not considered; the presence of this second line is a prerequisite for going ahead with the recognition algorithm.

As will be explained in the next subsections, if the horizontal feature is longer than lth, then the user may be performing any of the five gestures; otherwise, provided that the feature is longer than sth, the user may be performing gesture G5, i.e. the vertical open hand. Actually, this gesture does not require the hand to be placed perfectly straight in front of the webcam, which might be a strict constraint for the user. Instead, it is sufficient that the hand is roughly open, so as to distinguish it from the vertical fist (therefore, gesture G5 may be characterized by either a short or a long feature).

To discriminate among the different gestures, the image is vertically subdivided into three regions, which we call the upper area, middle area and lower area (Figure 3a). As shown in Figures 3b-3f, for gestures G1 and G2 the hand is totally included in the lower area. For gesture G3, the hand occupies both the lower and middle areas. For gestures G4 and G5, at last, the hand protrudes into the upper area.

Figure 3. (a) Upper, middle and lower areas; (b-f) skin images for gestures G1-G5.

The region where the horizontal feature is detected is therefore a first indicator of the kind of gesture the user is potentially performing. Since the heights of the three areas, as well as the values of the thresholds lth and sth, depend on the average distance of the hand from the webcam, besides the user's hand size, it is of course possible to set such parameters (usually, in the initial calibration phase). During normal mouse use, however, new adjustments are generally not necessary.

To improve the recognition process, we also impose that the left starting point of the horizontal feature lies within the left 1/n portion of the image. In our experiments n = 2.5, but, depending on the position of the webcam, the value may be varied (e.g. n = 2). This way, we prevent false skin pixels, typically pertaining to the clothes of a right-handed user and lying in the right (n-1)/n area of the image, from being taken into account (Figure 4).

Figure 4. User wearing a skin-like pullover and performing gesture G1; selection of a horizontal feature (black line) which starts within the left 2/5 of the image.
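Under those definitions, the feature search can be sketched as follows. The 5% noise tolerance, the confirming second row and the left-start constraint are taken from the text; the method names and the brute-force scan are our own simplifications.

```java
/** Sketch of the stage-2 search for a horizontal feature; names are ours. */
public class FeatureFinder {
    /** Returns {row, leftStart} of the first feature found, or null if none. */
    public static int[] find(boolean[][] skin, int minLen, double n) {
        int leftLimit = (int) (skin[0].length / n);  // feature must start in the left 1/n
        for (int y = 0; y + 1 < skin.length; y++) {
            int start = findRun(skin[y], minLen, leftLimit);
            // A second, similar line immediately below filters out spurious rows.
            if (start >= 0 && findRun(skin[y + 1], minLen, leftLimit) >= 0)
                return new int[] { y, start };
        }
        return null;
    }

    /** Left start of a run of at least minLen pixels with at most 5% non-skin. */
    private static int findRun(boolean[] row, int minLen, int leftLimit) {
        for (int start = 0; start <= leftLimit && start < row.length; start++) {
            int skinCount = 0;
            for (int len = 1; start + len <= row.length; len++) {
                if (row[start + len - 1]) skinCount++;
                if (len >= minLen && (len - skinCount) <= 0.05 * len) return start;
            }
        }
        return -1;
    }
}
```

For example, find(skin, lth, 2.5) looks for the long feature, while find(skin, sth, 2.0) corresponds to the short-feature search used for gesture G5.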
3.2.1. Gesture G1: fingers close together. For gesture G1 to be recognized, the horizontal feature, longer than lth, must be found within the lower area. When this occurs, a rectangle is built which has such a line, with length lth, as its upper base, and whose height is h = (3/4)lth (Figure 5a): we call this rectangle the control rectangle. Then, the algorithm counts the number of skin pixels on the lower base of the rectangle (n1) and on the horizontal segments placed at (1/2)h and (2/3)h (n2 and n3, respectively): we call such segments control lines.

The gesture is classified as G1 if there are sufficient skin pixels on each control line and the non-skin gaps are not too wide. In our tests, we impose n1, n2, n3 > 0.9lth; we also limit the width of non-skin pixel gaps on the control lines to 0.03lth, although this value may need to be increased in some (bad) illumination conditions. Moreover, since the presence or absence of gaps is fundamental to correctly identifying gesture G1, in this case the recognition algorithm does not consider the artificial skin pixels created in stage 1 by filling non-skin gaps (Figure 5b).

Figure 5. (a) Control rectangle for gesture G1; (b) unsuccessful recognition due to gaps between fingers.

3.2.2. Gesture G2: horizontal hand. As for gesture G1, the identification of gesture G2 requires the horizontal feature, longer than lth, to be found within the lower area of the image. A first control rectangle (CR1) is then built which has such a line (lth in length) as its upper base, and whose height is h = (1/4)lth. The number of skin pixels on the lower base of this rectangle must be greater than a threshold (0.9lth in our tests). If this is the case, a second control rectangle (CR2) is built immediately under CR1, with height 0.6lth (Figure 6a). For the gesture to be classified as G2, there must not be too many skin pixels on the lower base of CR2; in our experiments, we use a threshold of 0.45lth. This value, which may seem rather high, turns out to be a good choice in most cases, and accounts for the fact that the hand is often not exactly parallel to the horizontal plane (which might require too much effort from the user; see Figure 6b).

Figure 6. Control rectangles for gesture G2: (a) well parallel hand; (b) roughly parallel hand.

3.2.3. Gesture G3: horizontal fist. Gesture G3 is characterized by a horizontal feature, longer than lth, which lies within the middle area. Such a line (lth in length) is the upper base of a control rectangle lth/2 high (Figure 7). The number n of skin pixels on the lower base of the rectangle determines whether the gesture can actually be recognized as G3 or not; in our tests, we impose n > 0.9lth. Even when the hand is quite close to the webcam, thus protruding into the upper area (Figure 7b), the long horizontal feature is correctly found in the middle area (as will be explained in Section 3.2.5, there is no risk of mistaking gesture G3 for gesture G5).

Figure 7. Control rectangle for gesture G3 at two different distances from the webcam: (a) long; (b) short.
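The G1-G3 tests above all reduce to the same primitive: counting skin pixels along a horizontal control line of length lth and, for G1 only, bounding the width of non-skin gaps. A sketch of that primitive, with names of our own choosing:

```java
/** Shared primitive behind the G1-G3 control-rectangle tests; names are ours. */
public class ControlLine {
    /** Number of skin pixels on the horizontal segment [left, left+len) of row y. */
    public static int skinCount(boolean[][] skin, int y, int left, int len) {
        int n = 0;
        for (int x = left; x < left + len && x < skin[y].length; x++)
            if (skin[y][x]) n++;
        return n;
    }

    /** Widest run of non-skin pixels on the same segment (used by the G1 test). */
    public static int widestGap(boolean[][] skin, int y, int left, int len) {
        int widest = 0, current = 0;
        for (int x = left; x < left + len && x < skin[y].length; x++) {
            current = skin[y][x] ? 0 : current + 1;
            widest = Math.max(widest, current);
        }
        return widest;
    }

    /** G1 test on one control line: enough skin, no gap wider than 0.03*lth. */
    public static boolean g1Line(boolean[][] skin, int y, int left, int lth) {
        return skinCount(skin, y, left, lth) > 0.9 * lth
            && widestGap(skin, y, left, lth) <= 0.03 * lth;
    }
}
```

G1 would apply g1Line to the lower base and to the lines at (1/2)h and (2/3)h, on the unfilled mask; G2 and G3 only need skinCount on the bases of their control rectangles.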
3.2.4. Gesture G4: vertical fist. When a long horizontal feature is found within the upper area, the user may be performing either gesture G4 or gesture G5. As already stated, in fact, the hand need not be perfectly straight in front of the webcam to be recognized as the vertical hand gesture; a method is therefore necessary to distinguish it from the vertical fist. Figure 8 shows gesture G4 and two variants of gesture G5 (b and c). The horizontal black line is the horizontal feature. The vertical segment is instead a control line, right-shifted with respect to the left starting point of the horizontal feature, on which we search for the lowest mean of the RGB channel values (i.e. the lowest intensity). As can be noted, when the user is performing gesture G5 the separations between the fingers are characterized by very dark pixels; on the contrary, the pixels on the vertical line of gesture G4 are usually lighter.

Figure 8. (a) Gesture G4; (b) gesture G5 (version 1); (c) gesture G5 (version 2).

So, to solve the vertical fist / vertical hand dilemma, we match the lowest intensity found on the control line against a threshold (settable through a slider, since it depends on lighting conditions). If the value is greater than the threshold, then the gesture is recognized as G4; otherwise it is classified as G5. This simple trick is an effective way to distinguish the two hand poses without forcing the user to pay much attention to the way the gesture is performed.

In detail, once a long horizontal feature is found in the upper area, a control rectangle is built which has such a line as its upper base and is lth wide and lth high (Figure 9). Next, we count the number n of skin pixels on the horizontal control line placed at lth/2. If n is greater than a threshold (0.9lth in our experiments), then we proceed to compute the lowest RGB mean value on the vertical control line (lth in length) positioned at a distance d from the left side of the rectangle; in our tests, d = 25. If such a value is greater than a configurable threshold, then the gesture is interpreted as the vertical fist (Figure 9a), otherwise as the vertical hand (Figure 9b).

Figure 9. Control rectangles for gestures G4/G5: (a) gesture G4; (b) gesture G5.
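As an illustration (our names, assuming 8-bit ARGB pixels and a control line running downwards from the feature row), the fist/hand decision then amounts to one scan of the vertical control line:

```java
/** Sketch of the G4/G5 decision on the vertical control line; names are ours. */
public class FistHandClassifier {
    /**
     * Scans the vertical control line at column x, from the feature row down
     * for lth pixels, and returns the lowest mean RGB intensity found.
     */
    public static double lowestIntensity(int[][] argb, int featureRow, int x, int lth) {
        double lowest = 255;
        for (int y = featureRow; y < featureRow + lth && y < argb.length; y++) {
            int p = argb[y][x];
            double mean = (((p >> 16) & 0xFF) + ((p >> 8) & 0xFF) + (p & 0xFF)) / 3.0;
            lowest = Math.min(lowest, mean);
        }
        return lowest;
    }

    /** True = vertical fist (G4); false = vertical hand (G5). */
    public static boolean isFist(int[][] argb, int featureRow, int featureLeft,
                                 int lth, int d, double threshold) {
        // Dark inter-finger separations pull the minimum below the threshold (G5).
        return lowestIntensity(argb, featureRow, featureLeft + d, lth) > threshold;
    }
}
```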

3.2.5. Gesture G5: vertical hand. As explained in the previous section, a long horizontal feature in the upper area, associated with a lowest intensity on the vertical control line which is less than the threshold, indicates that the user is performing gesture G5. If the hand is held roughly straight in front of the webcam, however, the long horizontal feature cannot be found in any of the three areas of the image (Figure 10a). When this happens, the recognition algorithm searches for a shorter horizontal feature within the upper area, using the threshold sth instead of lth; in our experiments, sth = lth/2.5. As for the long feature, we require the left starting point of the short feature to be in the left portion of the scene. In this case, however, since the hand is narrower, we impose that the point lies within the left 1/m area of the image, with m < 2.5 (in our tests, m = 2). Once a short horizontal feature is found, the gesture identification algorithm considers a control rectangle which has such a line (sth in length) as its upper base, and whose height is 2sth (Figure 10b). If the number of skin pixels on the base of the control rectangle is greater than a threshold (0.9sth in our experiments), then the gesture is recognized as G5.

Figure 10. Gesture performed with a straight hand: (a) original image; (b) skin image with the control rectangle.

4. Implementation issues

The system is implemented in Java. Even if this language is usually not recommended for real-time applications, due to performance problems, we chose it for two main reasons. Firstly, we wanted to rapidly develop a prototype application, composed of both the gesture recognition module and different graphical interfaces; in point of fact, few languages allow complex interfaces to be implemented with as little effort as Java does. Moreover, we have taken advantage of the JMF (Java Media Framework) package, through which multimedia data (such as video) can be easily acquired and processed. As for performance, the increasing power of current PCs makes the problem less and less relevant.
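For reference, the usual JMF idiom for grabbing webcam frames looks like the following. This is a generic sketch, not GEM's actual acquisition code: the device locator is platform-dependent and error handling is simplified.

```java
import java.awt.Image;
import javax.media.*;
import javax.media.control.FrameGrabbingControl;
import javax.media.format.VideoFormat;
import javax.media.util.BufferToImage;

/** Generic JMF frame-grabbing sketch; the device URL varies by platform. */
public class FrameGrabber {
    public static Image grabOne(String deviceUrl) throws Exception {
        // e.g. "vfw://0" on old Windows setups; the actual locator varies.
        Player player = Manager.createRealizedPlayer(new MediaLocator(deviceUrl));
        player.start();
        FrameGrabbingControl fgc = (FrameGrabbingControl)
                player.getControl("javax.media.control.FrameGrabbingControl");
        Buffer buf = fgc.grabFrame();
        // Convert the raw buffer to a java.awt.Image for pixel access.
        BufferToImage converter = new BufferToImage((VideoFormat) buf.getFormat());
        Image frame = converter.createImage(buf);
        player.close();
        return frame;
    }
}
```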
We have tested GEM on both a high-end and a low-end PC (Intel Core Duo, 2394 MHz, 2 GB RAM and Intel Pentium 4, 2597 MHz, 512 MB RAM), without noting relevant differences in program execution: on average, 15 frames per second were processed in both cases.

5. Experimental tests

GEM has of course been informally tested many times, both during its development and once fully implemented. However, to assess its real effectiveness, we have also carried out some structured experiments with five subjects.

Although we are aware that more users should be involved to obtain incontestable results (which we will do soon), the good outcomes of the tests seem to confirm the suitability of our gesture/mouse approach as a new human-computer interaction modality.

We have implemented four kinds of experiments. In the first one, the user, starting from the "normal mouse use" state, was asked to perform a certain gesture ("normal mouse use" means that the mouse is being used and no gesture is recognized). The execution of each of the five gestures (G1-G5) was requested five times, in random order, in five different sessions, thus resulting in 25 total gestures. The observed variable was the percentage of gestures correctly recognized.

In the second experiment, we took into account hand transitions to and off the mouse. The user was asked, five times, to lay the hand on the mouse, move it and then withdraw it. The observed variable was the percentage of unintentional gestures recognized.

In the third experiment, the system was employed for a Web browsing activity, with the command associations described in Section 2 (G1 = scroll down, G2 = page update, G3 = scroll up, G4 = home page, G5 = back). The user was asked to surf the Web by using gestures for 120 seconds. The observed variable was the percentage of commands correctly recognized.

In the fourth experiment, at last, the testers were asked to use the mouse normally for 60 seconds (doing whatever they wanted, but without performing gestures). The observed variable was the number of unintentional gestures recognized.

Tables 1-4 show the results obtained for the five testers (T1-T5). It is to be noted that errors are mainly due to bad performance of gestures by the users, who had only a few minutes to get acquainted with GEM. Moreover, while here we have considered isolated recognitions too, a command is actually triggered only after a certain number of successive identifications.

Table 1. Experiment 1: percentage of gestures correctly recognized.

        T1     T2     T3     T4     T5
G1     100%    80%    80%   100%   100%
G2     100%    60%    80%    60%    80%
G3     100%   100%   100%   100%   100%
G4      80%   100%    80%   100%   100%
G5      60%   100%   100%   100%   100%

Table 2. Experiment 2: percentage of unintentional gestures recognized.

        T1     T2     T3     T4     T5
         0%    10%    20%    10%    10%

Table 3. Experiment 3: percentage of commands correctly recognized.

        T1     T2     T3     T4     T5
       100%  83.3%   100%  93.3%    92%

Table 4. Experiment 4: number of unintentional gestures recognized.

6. Conclusions and future work

In this paper we have presented GEM, a vision-based user interface for the recognition of hand gestures performed near the mouse. Our tests have demonstrated that this approach, which exploits an ordinary webcam, is suitable for PC input and, in general, does not clash with normal mouse activities. The main problem of GEM, anyway common to all VBIs, is the need for an accurate initial calibration, so that the hand can subsequently be correctly identified. Some bad illumination conditions may in fact generate reflections, glares and shadows that might deceive the recognition algorithm. For this reason, we are now considering the opportunity of using an infrared camera instead of an ordinary webcam: while the costs of such sensors are constantly lowering, their use may greatly enhance the skin identification procedure, thus improving the whole gesture recognition process as well.

7. Acknowledgements

This work has been supported by funds from the Italian FIRB project "Software and Communication Platforms for High-Performance Collaborative Grid" (grant RBIN043TKY).

8. References

[1] M. Turk and G. Robertson, "Perceptual User Interfaces", Communications of the ACM, Vol. 43, No. 3, 2000.

[2] M. Porta, "Vision-based user interfaces: methods and applications", International Journal of Human-Computer Studies, 57 (2002).

[3] G. Iannizzotto, M. Villari and L. Vita, "Hand tracking for human-computer interaction with Graylevel VisualGlove: Turning back to the simple way", Proc. PUI 2001, Orlando, FL, USA, November 2001.

[4] K. Abe, H. Saito and S. Ozawa, "3-D Drawing System via Hand Motion Recognition from Two Cameras", Proc. IEEE Int. Conf. on Systems, Man, and Cybernetics, Tucson, AZ, USA, October 8-11.

[5] R. O'Hagan and A. Zelinsky, "Visual Gesture Interfaces for Virtual Environments", Proc. of the 1st Australasian User Interface Conference, Canberra, Australia, 31 January - 3 February.

[6] B. D. Zarit, B. J. Super and F. K. H. Quek, "Comparison of Five Color Models in Skin Pixel Classification", Proc. ICCV 99, Corfu, Greece, September 1999.
