3D Human-Gesture Interface for Fighting Games Using Motion Recognition Sensor


Wireless Pers Commun (2016) 89

3D Human-Gesture Interface for Fighting Games Using Motion Recognition Sensor

Jongmin Kim (1), Hoill Jung (2), MyungA Kang (3), Kyungyong Chung (4)

Published online: 19 April 2016
Springer Science+Business Media New York 2016

Abstract As augmented reality-related technologies become commercialized in response to demand for 3D content, a pattern is developing whereby users utilize and consume the realism and reality of 3D content. Rather than using absolute position information, the pattern characteristics of gestures are extracted by considering body-proportion characteristics around the shoulders: even when the same gesture is performed, the position coordinates of the skeleton measured by a motion recognition sensor can vary, depending on the length and direction of the arm. In this paper, we propose a 3D human-gesture interface for fighting games using a motion recognition sensor. Recognizing gestures in the motion recognition sensor environment, we applied the gestures to a fighting action game. The motion characteristics of gestures are extracted using joint information obtained from the motion recognition sensor, and 3D human motion is modeled mathematically.
Motion is effectively modeled and analyzed by expressing it in the eigenspace via principal component analysis and then matching new input against the model gestures through the 3D human-gesture interface. Also, we propose an advanced pattern matching algorithm as a way to reduce motion constraints in a motion recognition system. Finally, based on the results of motion recognition, an example used as the interface of a 3D fighting action game is presented. By obtaining high-quality 3D motion, the developed technology provides more realistic 3D content through real-time processing.

Keywords 3D human gesture, Motion recognition sensor, Tracking, Recognition

Corresponding author: Kyungyong Chung, kyungyong.chung@gmail.com
Jongmin Kim, mrjjoung@ccei.kr; Hoill Jung, hijung1982@gmail.com; MyungA Kang, makang@gwangju.ac.kr

1 Creative Economy Support Team, Jeonnam Center for Creative Economy and Innovation, 32, Deokchung 2-gil, Yeosu-si, Jeollanam-do, Korea
2 Intelligent System Laboratory, School of Computer Information Engineering, Sangji University, 83, Sangjidae-gil, Wonju-si, Gangwon-do, Korea
3 Division of Computer Information Engineering, GwangJu University, 277, Hyodeok-ro, Nam-gu, Gwangju, Korea
4 School of Computer Information Engineering, Sangji University, 83, Sangjidae-gil, Wonju-si, Gangwon-do, Korea

1 Introduction

Recently, due to the increase in population, increases in income, and rising expectations of services, studies on interfaces that provide natural interaction between people, equipment, and the environment have been actively conducted. The need is increasing for technology that tracks an object in real time in video, from games to movies. In order to promote the visual industry, major countries are expanding investment in informatization, standardization, legal systems, manpower training, the creation of research and development foundations, and so on. Humans convey information easily by using language in everyday life, but in many situations we actually convey information by nonverbal means such as behavior or facial expression [1-3]. Motion recognition to maximize users' convenience is expanding into industries around the world and is expected to grow rapidly. It is also leading a new field of computing in the development of devices that accurately detect motion in games, augmented reality, location-based services, gesture interfaces, smart TV, health care [4-6], stability analysis [7], and telemedicine [8]. Motion recognition equipment includes controller-based devices, such as games and wearables, and camera-based devices, such as the single camera and the stereo camera. Camera-based motion recognition is categorized into the Kinect sensor, single-camera tracking, and time-of-flight (TOF) methods, depending on the applied technology.
Motion information estimated through this technology provides realism and reality for 3D content. In addition, when connected to augmented reality, interaction between the user and the object becomes possible. With the recent increase in demand for 3D content, gesture recognition analyzes a person's 3D posture as it changes over time in order to track motion. That is, it estimates and tracks the human body from video and identifies body parts to determine what meaning is being conveyed. However, it is not easy to stably identify a body part and interpret the meaning of a gesture, because the human body is a 3D skeletal object with a high degree of freedom, whereas image recognition works on 2D images [9]. Also, every person has a different body size and wears a variety of clothes and caps, so it is not easy to extract the gesture feature information of a 3D human body [8, 10]. In order to easily identify the human body and determine changes over time, the angles of the joints connecting each part of the body must be identified. Early gesture recognition research analyzed changes in the 3D posture of people wearing a special outfit or specific markers and then classified the patterns [3]. This approach has the advantage of being relatively fast and accurate, but if a marker is covered, the motion cannot be determined, and the equipment and installation are expensive. Studies on 3D posture estimation of a person and motion tracking have been conducted in many ways. In particular, studies on specialized techniques for specific uses, such as health care and sports, are also underway. Studies are being conducted on tracking techniques using image- or silhouette-based information,

models for tracking, ontologies, and situational awareness. Marker-free motion tracking systems have been commercialized in the fields of health care and sports and can now track motion at low cost. Through commercialization, such systems can be used to analyze the 3D posture of a person and to analyze sports injuries in athletes. They are useful in the production of movies or 3D animation, because 3D motion can be analyzed based on joint angles and expressed as numerical data [9]. The 3D position of each marker can be measured with multiple pre-calibrated cameras. If a part of the body is covered, so that its location cannot easily be estimated, the locations of the markers must be specified by hand. This is a useful way to obtain accurate measurements, but it requires expensive equipment and specially designed studios [11, 12]. In addition, if it is hard to attach markers to the person to be observed, the approach is essentially unusable. The ideal method for human gesture recognition is to determine meaning by analyzing only the camera's input image, as if people were naturally observing the target with their eyes, without attaching markers for motion recognition. Gesture recognition using cameras has been utilized in a variety of systems and platforms, such as telemedicine health care [13, 14], emergency situation monitoring [15], robotic control [16], computer graphics and games [17], peer-to-peer context awareness [18], physical security, medical information services [8], and sign language recognition for the hearing impaired [19]. In particular, Microsoft's Kinect motion recognition sensor, which appeared in 2010, provides depth images in addition to traditional RGB images. Thus, gesture recognition research using the motion recognition sensor and depth images is being conducted more actively [20]. The motion recognition sensor is composed of three lenses. It performs video recording through the RGB lens and, at the same time, projects infrared rays pixel by pixel through an infrared lens. Depth images are created by a device designed to capture the infrared pixels projected on a scene and to recognize distance by applying a depth calculation to the target. The emergence of such a sensor saves people the trouble of the body detection and pose estimation required for gesture recognition [15, 21, 22]. In previous studies, gestures were recognized by analyzing gesture information with a variety of sequential learning methods [12, 23-25]. In such studies, a fixed threshold model, a garbage model, or an adaptive threshold model was used to distinguish gestures from non-gestures (actions that occur between gestures but are not themselves gestures). The fixed threshold model has a drawback in that it is not easy to determine the optimum threshold value. The garbage model has a problem whereby it is not possible to collect data and learn from it, due to the diversity of non-gestures. To overcome the shortcomings of these models, an advanced pattern matching algorithm is proposed.

This paper is organized as follows. Section 2 describes a 3D human-gesture recognition system. Section 3 suggests a 3D human-gesture interface for fighting games using a motion recognition sensor. Section 4 presents the experimental results of the proposed method, and Sect. 5 provides the conclusion.

2 3D Human Gesture Virtual Interface

The proposed 3D human-gesture virtual interface extracts the location coordinates of the user's joints from the depth images of the motion recognition sensor and compares them to dynamic gestures determined in advance. The extracted feature points were modeled for the gestures defined by the virtual reality (VR) interface and were projected onto the eigenspace by principal component analysis. Due to the nature of the eigenspace, visually similar gestures have a similar distribution in the space. Thus, it becomes possible to analyze a gesture by matching it against visually similar model gestures in the space. The gesture information analyzed in that way is converted to a VR interface and can be used. Figure 1 shows the overview of the 3D human-gesture virtual interface system.

Fig. 1 Overview of the 3D human-gesture virtual interface system

Unlike 2D tracking, a great deal of computation time and real-time processing technology are required, because the motion information of a 3D person must be tracked while finding the corresponding areas in order to extract them in real time. This is 3D motion information extraction and tracking technology that is efficient in a multiview camera environment. The 3D human motion tracking technology analyzes a tracking and recognition surveillance system developed in previous research [2, 3, 11] and, based on it, develops the technology for tracking motion with silhouette images [12]. Figure 2 shows an overview of 3D human-gesture tracking. In order to detect a person in the input image, the person area and the background image are separated. The area is detected, and then the whole outline is extracted. After a fragmentation process and finding an image, detection is carried out by calculating the coordinates. Once detected, the image is tracked; if lost, it is restored quickly using Lucas-Kanade motion estimation [11]. Here, a difference-image technique is applied in order to remove the background image and noise.

3 3D Human-Gesture Interface for Fighting Games

3.1 Definition of the Gesture

Gesture recognition uses one or more image cameras in order to obtain 3D motion, and uses computer vision technology to recognize a particular motion.
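The background separation and difference-image step described in Sect. 2 can be sketched minimally as follows; the 8-bit grayscale input and the threshold value are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def difference_image(frame, background, noise_threshold=25):
    """Separate the person area from the background using a difference
    image: pixels that differ from the background model by more than
    `noise_threshold` are kept as foreground; smaller differences are
    treated as background or noise."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff > noise_threshold).astype(np.uint8)  # 1 = foreground
```

The resulting binary mask is what the outline extraction and fragmentation steps would then operate on.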
Fig. 2 Overview of 3D human-gesture tracking

For preprocessing the grayscale image used to recognize and detect images (including noise), log transformation, linear transformation, and an exponential function are used [2, 3, 11]. The gestures are those used in three-dimensional fighting games and are categorized into seven types. The defined gestures are difficult to distinguish from one another, so they need to be modeled mathematically. In this paper, the 3D positions of both hands and both feet and the velocity vectors of the head and both feet are used as features. These features are combined into a 21-dimensional input vector (12 position components plus 9 velocity components); analyzing it enables the gestures to be distinguished. Figure 3 shows the feature space vectors for fighting games. Figure 3a represents the relative position vectors Pv_LH, Pv_RH, Pv_LF, and Pv_RF, and Fig. 3b represents the velocity vectors Vv_H, Vv_LF, and Vv_RF.

Fig. 3 Feature space vector for fighting games
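Assembling the 21-dimensional feature space vector described above can be sketched as follows (a minimal illustration; the joint names and the spine reference point are assumptions, not the Kinect SDK's exact identifiers):

```python
import numpy as np

def feature_vector(joints, prev_joints, dt=1.0 / 30.0):
    """Build the feature space vector for one frame.

    `joints` and `prev_joints` map joint names to 3D positions
    (arrays of shape (3,)); `dt` is the frame interval in seconds.
    """
    origin = joints["spine"]  # assumed reference for relative positions
    # Relative position vectors Pv_LH, Pv_RH, Pv_LF, Pv_RF (4 x 3 = 12)
    pv = [joints[j] - origin
          for j in ("left_hand", "right_hand", "left_foot", "right_foot")]
    # Velocity vectors Vv_H, Vv_RF, Vv_LF (3 x 3 = 9)
    vv = [(joints[j] - prev_joints[j]) / dt
          for j in ("head", "right_foot", "left_foot")]
    return np.concatenate(pv + vv)  # 12 + 9 = 21 dimensions
```

One such vector per frame is what the modeling in Sect. 3.3 operates on.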

From the images, an area is first segmented based on color, and the area is detected by extracting an outline. Tracking starts after the image is detected by using an outline vector value. Real-time images are detected by repeatedly applying the Lucas-Kanade method in the tracking process [11].

3.2 Feature Extraction

To extract a pattern feature for gesture recognition from the data acquired by the motion recognition sensor, it is necessary to extract features in consideration of the nature of the gesture and to determine an expression appropriate as input to the gesture model. In order to obtain the 3D posture of a person, the probability for each part of the body is calculated. The posture is estimated through optimization based on relative position information. The final posture is determined by expanding the corresponding area for each body part to 3D [7, 8]. In this paper, information about the user's 20 joints was obtained from input gestures using the Kinect Software Development Kit (SDK) NUI Skeleton application programming interface (API) [26]. Figure 4 shows the gesture feature extraction information [17]. As shown in Fig. 4, for the gesture features, 20 angles are used as physical characteristic points [7, 8] by calculating the X-Y and X-Z axis angles for the joint pairs {head, rib}, {rib, pelvis}, {right elbow, right hand}, {right shoulder, right elbow}, {hip, right knee}, {left elbow, left hand}, {right knee, right foot}, {left shoulder, left elbow}, and {hip, left knee}.

Fig. 4 Gesture feature extraction information

3.3 3D Human Gesture Modeling

The feature space vectors can be obtained for each frame. However, the input data are high-dimensional and do not share common features. For that reason, this study modeled 3D human gestures using principal component analysis [18, 27-29].
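The X-Y and X-Z axis angles used as features in Sect. 3.2 can be computed, for instance, as the angles of the parent-to-child bone vector projected onto those planes (a sketch; the paper does not state its exact angle convention, so this one is an assumption):

```python
import math

def plane_angles(parent, child):
    """Angles (in degrees) of the vector from joint `parent` to joint
    `child`, measured in the X-Y plane and in the X-Z plane."""
    dx = child[0] - parent[0]
    dy = child[1] - parent[1]
    dz = child[2] - parent[2]
    angle_xy = math.degrees(math.atan2(dy, dx))  # X-Y plane angle
    angle_xz = math.degrees(math.atan2(dz, dx))  # X-Z plane angle
    return angle_xy, angle_xz
```

Applying this to each listed joint pair yields the angle features used as physical characteristic points.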

Fig. 5 Cumulative contributions according to eigenvalues

To calculate the space vector, it is necessary to obtain the mean space vector over all feature space vectors and then take the difference of each feature space vector from it. The average vector C and the new feature set can be expressed with Eq. (1), where Pv denotes a relative position vector and Vv a velocity vector in the feature space vector:

C = (1/N) * sum_{i=1}^{N} [Pv_LH, Pv_RH, Pv_LF, Pv_RF, Vv_H, Vv_RF, Vv_LF]_i^T    (1)

After the space vectors are constructed, they are projected onto the low-dimensional vector space by Eq. (2), where M_i is the low-dimensional vector and e_1, ..., e_k are the principal eigenvectors:

M_i = [e_1, e_2, e_3, ..., e_k]^T * ([Pv_LH, Pv_RH, Pv_LF, Pv_RF, Vv_H, Vv_RF, Vv_LF]_i^T - C)    (2)

Principal component analysis reduces dimensionality and summarizes the data carried by the feature space vectors. Therefore, it is necessary to determine the number of principal components to retain. Figure 5 shows the cumulative contributions according to the eigenvalues. As shown in Fig. 5, the five eigenvectors with the largest eigenvalues make the greatest contribution to the space vector. Thus, this study reduces the features to a five-dimensional space for the 3D human gesture analysis. We build on previous research into 3D human gesture modeling [23, 30].

3.4 Advanced Pattern Matching Algorithm

For predefined gestures, analysis can be made by modeling them in the eigenspace and determining new gestures from the 3D human gestures in the feature space [7, 8]. However, for a more natural interface implementation, importance should be placed on the meaning of a gesture rather than on its exact numerical data. For example, for the defined right kick gesture, the position of the arms is not necessary to distinguish the gesture. In other words, it does not matter if the position of the arms is excluded from 3D human-gesture recognition [23].
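The modeling in Eqs. (1) and (2) can be sketched with NumPy (a minimal illustration: the basis e_1, ..., e_k is taken from the covariance of the mean-subtracted feature vectors, with k = 5 as chosen from Fig. 5):

```python
import numpy as np

def pca_model(X, k=5):
    """Model feature space vectors with PCA.

    X: (N, 21) array, one feature space vector per frame.
    Returns the mean vector C (Eq. 1), the basis [e_1 ... e_k],
    and the projected low-dimensional vectors M (Eq. 2).
    """
    C = X.mean(axis=0)                       # average vector C, Eq. (1)
    cov = np.cov(X - C, rowvar=False)        # covariance of differences
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]    # keep the k largest
    E = eigvecs[:, order]                    # (21, k) eigenvector basis
    M = (X - C) @ E                          # (N, k) projections, Eq. (2)
    return C, E, M
```

Model gestures and new input are projected with the same C and E, so matching can happen in the five-dimensional space.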
In this study, we mitigated the restrictions of 3D human gestures by excluding unnecessary features and applying this to gesture recognition. Figure 6 shows the advanced pattern matching algorithm considering 3D human gestures.

Fig. 6 Advanced pattern matching algorithm considering 3D human gestures

Table 1 Feature values related to the respective 3D human gestures

Gesture          | Related feature space vector
Left punch (LP)  | Positions of both hands
Right punch (RP) | Positions of both hands
Left kick (LK)   | Positions of both feet
Right kick (RK)  | Positions of both feet
Running (R)      | Positions of head, pelvis, and feet
Sitting (S)      | Positions of both feet, position of head
Flying kick (FK) | Positions of both feet, position of head

Fig. 7 Accuracy experimental results on the advanced pattern matching algorithm
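The idea behind the advanced pattern matching can be sketched as follows: features irrelevant to a candidate gesture (per Table 1) are replaced by the model's average values so they do not affect the Euclidean distance, and the closest model within a threshold is returned. The feature index layout and the threshold are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

# Illustrative index ranges into the feature vector for the features
# that matter to each gesture (cf. Table 1); not the paper's layout.
RELEVANT = {
    "LP": slice(0, 6),    # punches: positions of both hands
    "RP": slice(0, 6),
    "LK": slice(6, 12),   # kicks: positions of both feet
    "RK": slice(6, 12),
}

def match_gesture(x, models, threshold=2.0):
    """Return the closest model gesture within `threshold`, or None.

    models: dict label -> mean feature vector of the model gesture.
    Features irrelevant to each candidate are replaced by that model's
    average values, so only relevant features drive the distance.
    """
    best, best_d = None, float("inf")
    for label, mean in models.items():
        y = mean.copy()                   # start from model averages
        idx = RELEVANT.get(label, slice(None))
        y[idx] = x[idx]                   # keep only the relevant input
        d = np.linalg.norm(y - mean)      # distance on relevant features
        if d < best_d:
            best, best_d = label, d
    return best if best_d <= threshold else None
```

Returning None when even the best match exceeds the threshold is what lets the system reject non-gestures instead of forcing a label.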

Fig. 8 Distance results of model gesture. a Left foot kick, b left punch

Fig. 9 Action game control applying the proposed gesture recognition algorithm

Table 1 shows the feature values related to the respective 3D human gestures. Each gesture was assumed as the input value, and irrelevant feature space values were replaced by the average values of the model.

4 Experimental Result

Performance evaluation of the system, according to whether the proposed method was applied, was carried out by measuring the similarity between the input motion and the modeled motion. The user interface consists of RGB and real-time motion data, plus skeletal, depth, and 3D viewer properties, considering user convenience [12, 28]. The motion recognition sensor-based interface system was developed in a 64-bit Windows environment using Microsoft Visual Studio 2015 C++ on an Intel Core i7 processor at 4.0 GHz with 16 GB RAM. Also used were the Kinect SDK v1.8, Kinect Runtime v1.8, the Kinect Developer Toolkit v1.8, OpenNI v2.2, NiTE v2.2, and the OpenGL API. The average value of the model motion before and after input modification of the data for seven motions (LP, RP, LK, RK, R, S, FK) was calculated [23, 30]. The system outputs, from among the modeled motions, the motion closest to the input, provided its distance is within a set threshold. Figure 7 shows the accuracy results depending on whether the advanced pattern matching algorithm was used. To illustrate the accuracy of the advanced pattern matching algorithm proposed in this paper, examples of gestures that have the same meaning but different numerical values are presented in Fig. 8. The distance values of the models in the eigenspace when the input vector was changed were compared to those when the input vector was not changed. Figure 8a illustrates the result of performing the kick gesture with both hands open, after modeling the kick gesture with the two hands together. As shown in Fig.
8a, the left foot kick gesture is closer in distance to the model gesture than it was before the input vector change. Figure 8b shows execution of the punch gesture without slightly bending the knees, after modeling the punch gesture with the knees slightly bent. As shown in Fig. 8b, the punch gesture is also closer in distance to the model gesture than before the input vector change. As these results show, heuristic information was added to the input vector for recognition, so that the meaning of a gesture was taken into account. Considering the meaning of a gesture leads to recognizing more general gestures and to reducing the system's operating constraints. The distance to the model gesture was reduced, though each gesture

had a different amount of reduction. Even without the change, deciding on the gesture with the closest distance value yields a correct result. Nevertheless, by obtaining a relative distance from the other modeled gestures, it is possible to obtain more stable results. Figure 9 shows the proposed gesture recognition system. The motion recognition sensor was first used to capture a gesture by an actor, and then 3D gesture information was extracted to analyze the gesture and perform recognition. The result obtained by gesture recognition is passed to the interface module, which interprets it for the game interface. This paper defined the gestures used in TEKKEN 7 [31] and set a keyboard sequence for each gesture. The two modules were installed in different systems, connected by a local area network, to create a virtual interface between human and computer. Therefore, in this environment, users are able to experience VR by operating a game character. The proposed system made it possible to control gestures more accurately by removing uncertain gestures.

5 Conclusions

Motion recognition to maximize user convenience is a technology with huge growth potential that can create new added value. As augmented reality-related derivatives have recently been commercialized, a pattern is developing in which users more often encounter and access 3D content. This paper described gesture recognition using motion information obtained from a motion recognition sensor. In order to numerically express the 20 physical feature points of human motion, five principal characteristics were defined. We extracted motion information by calculating 3D information from these feature points. Principal component analysis was used to model these data, and an advanced pattern matching algorithm was used for more reliable recognition results.
Also, motion detection and control functions for various smart devices were presented by applying the proposed gesture recognition in an interface for 3D action games. The applied system used a Euclidean distance-based correlation coefficient and did not consider the distribution patterns of all motions. In addition, uncommon motion might not be recognized in some cases. This can be solved by considering the distribution pattern of the modeled motion. It was also found that 3D motion data contain many errors. This can be addressed by additionally implementing an error correction process, which is expected to yield more stable results. This fundamental technology will be very helpful for the commercialization of related technologies and will create significant value because it has many applications.

Acknowledgments This study was conducted with research funds from Gwangju University.

References

1. Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3).
2. Kang, S. K., Chung, K. Y., & Lee, J. H. (2014). Real-time tracking and recognition systems for interactive telemedicine health services. Wireless Personal Communications, 79(4).
3. Kang, S. K., Chung, K. Y., & Lee, J. H. (2015). Ontology based inference system for adaptive object recognition. Multimedia Tools and Applications, 74(20).
4. Jung, H., & Chung, K. (2015). Sequential pattern profiling based bio-detection for smart health service. Cluster Computing, 18(1).

5. Jung, H., & Chung, K. (2016). Knowledge based dietary nutrition recommendation for obesity management. Information Technology and Management, 17(1).
6. Jung, E. Y., Kim, J. H., Chung, K. Y., & Park, D. K. (2014). Mobile healthcare application with EMR interoperability for diabetes patients. Cluster Computing, 17(3).
7. Kim, S. H., & Chung, K. Y. (2014). 3D simulator for stability analysis of finite slope causing plane activity. Multimedia Tools and Applications, 68(2).
8. Kim, S. H., & Chung, K. Y. (2015). Medical information service system based on human 3D anatomical model. Multimedia Tools and Applications, 74(20).
9. Pavlovic, V. I., Sharma, R., & Huang, T. (1997). Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7).
10. Haritaoglu, I., Harwood, D., & Davis, L. S. (1998). W4: Who? When? Where? What? A real-time system for detecting and tracking people. In Third Face and Gesture Recognition Conference.
11. Kang, S. K., Chung, K. Y., & Lee, J. H. (2014). Development of head detection and tracking systems for visual surveillance. Personal and Ubiquitous Computing, 18(3).
12. Kim, J. M., Chung, K., & Kang, M. A. (2016). Continuous gesture recognition using HLAC and low-dimensional space. Wireless Personal Communications, 86(1).
13. Jo, S. M., & Chung, K. (2014). Design of access control system for telemedicine secure XML documents. Multimedia Tools and Applications, 74(7).
14. Jung, H., & Chung, K. (2016). PHR based life health index mobile service using decision support model. Wireless Personal Communications, 86(1).
15. Kim, S. H., & Chung, K. (2015). Emergency situation monitoring service using context motion tracking of chronic disease patients. Cluster Computing, 18(2).
16. Petis, M., & Fukui, K. (2012). Both-hand gesture recognition based on KOMSM with volume subspaces for robot teleoperation. In Proceedings of IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems.
17. Kim, J. C., Jung, H., Kim, S. H., & Chung, K. (2016). Slope based intelligent 3D disaster simulation using physics engine. Wireless Personal Communications, 86(1).
18. Jung, H., & Chung, K. (2016). P2P context awareness based sensibility design recommendation using color and bio-signal analysis. Peer-to-Peer Networking and Applications, 9(3).
19. Li, Y. (2012). Multi-scenario gesture recognition using Kinect. In Proceedings of the International Conference on Computer Games.
20. Yun, H., Kim, K., Lee, J., & Lee, H. (2014). Development of experience dance game using Kinect motion capture. KIPS Transactions on Software and Data Engineering, 3(1).
21. Oikonomidis, I., Kyriazis, N., & Argyros, A. A. (2011). Efficient model-based 3D tracking of hand articulations using Kinect. In British Machine Vision Conference.
22. Sung, J., Ponce, C., Selman, B., & Saxena, A. (2011). Human activity detection from RGBD images. In Proceedings of the International Workshop on Association for the Advancement of Artificial Intelligence.
23. Kim, J. M. (2008). Three dimensional gesture recognition using PCA of stereo images and modified matching algorithm. In Proceedings of the International Conference on Fuzzy Systems and Knowledge Discovery.
24. Holden, E. J., Lee, G., & Owens, R. (2005). Australian Sign Language recognition. Machine Vision and Applications, 1(5).
25. Nickel, K., & Stiefelhagen, R. (2004). Real-time person tracking and pointing gesture recognition for human-robot interaction. Computer Vision in Human-Computer Interaction, 3058.
26. Microsoft Kinect SDK.
27. Murase, H., & Nayar, S. K. (1995). Visual learning and recognition of 3-D objects from appearance. International Journal of Computer Vision, 14.
28. Chung, K., Kim, J. C., & Park, R. C. (2016). Knowledge-based health service considering user convenience using hybrid Wi-Fi P2P. Information Technology and Management, 17(1).
29. Jung, E. Y., Kim, J. H., Chung, K. Y., & Park, D. K. (2013). Home health gateway based healthcare services through U-health platform. Wireless Personal Communications, 73(2).
30. Kim, J. M., & Kang, M. A. (2011). Appearance-based object recognition using higher correlation feature information and PCA. In Proceedings of the International Conference on Fuzzy Systems and Knowledge Discovery.
31. TEKKEN 7.

Jongmin Kim received a B.S. from Howon University in 2002, and M.S. and Ph.D. degrees from the Department of Computer Science and Statistics, Chosun University, Korea, in 2004 and 2008, respectively. He is currently a senior researcher in the Creative Economy Support Team, Jeonnam Center for Creative Economy & Innovation, Korea. His research interests include Pattern Recognition, Intelligent Computing and Neural Networks, Image Processing, and Mobile Application Services.

Hoill Jung received B.S. and M.S. degrees from the School of Computer Information Engineering, Sangji University, Korea, in 2010 and 2013, respectively. He worked for Local Information Institute Corporation. He is currently in the doctoral course of the School of Computer Information Engineering, Sangji University, Korea, and has been a researcher at the Intelligent System Lab., Sangji University. His research interests include Medical Data Mining, Sensibility Engineering, Knowledge Systems, and Recommendation.

MyungA Kang received a B.S. from Gwangju University in 1992, and M.S. and Ph.D. degrees from the Department of Computer Statistics, Chosun University, Korea, in 1995 and 1999, respectively. She is currently a professor in the Division of Computer Information Engineering, Gwangju University, Korea. Her research interests include Pattern Recognition, Intelligent Computing and Neural Networks, Image Processing, and Mobile Application Services.

Kyungyong Chung received B.S., M.S., and Ph.D. degrees in 2000, 2002, and 2005, respectively, all from the Department of Computer Information Engineering, Inha University, Korea. He worked for the Software Technology Leading Department, Korea IT Industry Promotion Agency (KIPA). He is currently a professor in the School of Computer Information Engineering, Sangji University, Korea. His research interests include Medical Data Mining, Healthcare, Knowledge Systems, HCI, and Recommendation.


More information

The Hand Gesture Recognition System Using Depth Camera

The Hand Gesture Recognition System Using Depth Camera The Hand Gesture Recognition System Using Depth Camera Ahn,Yang-Keun VR/AR Research Center Korea Electronics Technology Institute Seoul, Republic of Korea e-mail: ykahn@keti.re.kr Park,Young-Choong VR/AR

More information

COMPARATIVE PERFORMANCE ANALYSIS OF HAND GESTURE RECOGNITION TECHNIQUES

COMPARATIVE PERFORMANCE ANALYSIS OF HAND GESTURE RECOGNITION TECHNIQUES International Journal of Advanced Research in Engineering and Technology (IJARET) Volume 9, Issue 3, May - June 2018, pp. 177 185, Article ID: IJARET_09_03_023 Available online at http://www.iaeme.com/ijaret/issues.asp?jtype=ijaret&vtype=9&itype=3

More information

3D-Position Estimation for Hand Gesture Interface Using a Single Camera

3D-Position Estimation for Hand Gesture Interface Using a Single Camera 3D-Position Estimation for Hand Gesture Interface Using a Single Camera Seung-Hwan Choi, Ji-Hyeong Han, and Jong-Hwan Kim Department of Electrical Engineering, KAIST, Gusung-Dong, Yusung-Gu, Daejeon, Republic

More information

Robust Hand Gesture Recognition for Robotic Hand Control

Robust Hand Gesture Recognition for Robotic Hand Control Robust Hand Gesture Recognition for Robotic Hand Control Ankit Chaudhary Robust Hand Gesture Recognition for Robotic Hand Control 123 Ankit Chaudhary Department of Computer Science Northwest Missouri State

More information

CSE Tue 10/09. Nadir Weibel

CSE Tue 10/09. Nadir Weibel CSE 118 - Tue 10/09 Nadir Weibel Today Admin Teams Assignments, grading, submissions Mini Quiz on Week 1 (readings and class material) Low-Fidelity Prototyping 1st Project Assignment Computer Vision, Kinect,

More information

- Distance Estimation with a Two or Three Aperture SLR Digital Camera. Seungwon Lee, Joonki Paik, Monson H. Hayes. Chung-Ang University.
- Design Style for Building Interior 3D Objects Using Marker Based Augmented Reality. Raju Rathod, George Philip.C, Vijay Kumar B.P. MSRIT Bangalore.
- Hand & Upper Body Based Hybrid Gesture Recognition. Prerna Sharma, Naman Sharma.
- Air Marshalling with the Kinect. Stephen Witherden. Beca Applied Technologies.
- Gesture Based Human Multi-Robot Interaction. Gerard Canal, Cecilio Angulo, Sergio Escalera.
- Design and Implementation of an Intuitive Gesture Recognition System Using a Hand-held Device. Hung-Chi Chu, Yuan-Chin Cheng. Chaoyang University of Technology.
- Research and Development of DSP-Based Face Recognition System for Robotic Rehabilitation Nursing Beds. Ming Xing, Wushan Cheng. Shanghai University of Engineering Science.
- White Paper: Need for Gesture Recognition. April 2014.
- Real-time AR Edutainment System Using Sensor Based Motion Recognition. Sungdae Hong, Hyunyi Jung, Sanghyun Seo. pp. 271-278, http://dx.doi.org/10.14257/ijseia.2016.10.1.26
- A Real Time Static & Dynamic Hand Gesture Recognition System. N. Subhash Chandra. International Journal of Engineering Inventions, Vol. 4, Issue 12, August 2015, pp. 93-98.
- Image Extraction using Image Mining Technique. Samir Kumar Bandyopadhyay. IOSR Journal of Engineering, Vol. 3, Issue 9, September 2013, pp. 36-42.
- Comparison of Head Movement Recognition Algorithms in Immersive Virtual Reality Using Educative Mobile Application. Nehemia Sugianto, Elizabeth Irenne Yuwono. Ciputra University, Indonesia.
- Comparative Study and Analysis for Gesture Recognition Methodologies. Rafiqul Z. Khan, Noor A. Ibraheem. A.M.U. Aligarh, India.
- Live Hand Gesture Recognition using an Android Device. Yogesh B. Dongare. G.H. Raisoni College of Engineering and Management, Ahmednagar.
- The Control of Avatar Motion Using Hand Gesture. ChanSu Lee, SangWon Ghyme, ChanJong Park. Electronics and Telecommunications Research Institute.
- Research Seminar. Stefano Carrino. http://aramis.project.eia-fr.ch
- Enhancing Shipboard Maintenance with Augmented Reality. Dennis Giannoni. CACI, Oxnard, CA.
- Ubiquitous Home Simulation Using Augmented Reality. 2007 WSEAS International Conference on Computer Engineering and Applications, Gold Coast, Australia.
- Kinect Controlled Humanoid and Helicopter. Muffakham Jah College of Engineering & Technology.
- Guided Filtering Using Reflected IR Image for Improving Quality of Depth Image. Takahiro Hasegawa, Ryoji Tomizawa, Yuji Yamauchi, Takayoshi Yamashita, Hironobu Fujiyoshi. Chubu University.
- Focal Length Change Compensation for Monocular SLAM. Takafumi Taketomi, Janne Heikkilä.
- Markerless 3D Gesture-based Interaction for Handheld Augmented Reality Interfaces. Huidong Bai. The HIT Lab NZ, University of Canterbury.
- Fabrication of the kinect remote-controlled cars and planning of the motion interaction courses. Procedia - Social and Behavioral Sciences 174 (2015), pp. 3102-3107.
- Face Detection System on AdaBoost Algorithm Using Haar Classifiers. M. Gopi Krishna, A. Srinivasulu, T.K. Basak. Vol. 2, Issue 6, Nov-Dec 2012, pp. 3996-4000.
- Automatic Crack Detection on Pressed panels using camera image Processing. 8th European Workshop on Structural Health Monitoring (EWSHM 2016), Bilbao, Spain.
- Interaction rule learning with a human partner based on an imitation faculty with a simple visuo-motor mapping. Masaki Ogino et al. Robotics and Autonomous Systems 54 (2006), pp. 414-418.

- A Publicly Available RGB-D Data Set of Muslim Prayer Postures Recorded Using Microsoft Kinect for Windows. Journal of Basic and Applied Scientific Research, 4(7), pp. 115-125, 2014.
- Driver Assistance for "Keeping Hands on the Wheel and Eyes on the Road". Cuong Tran, Mohan Manubhai Trivedi. Laboratory for Intelligent and Safe Automobiles (LISA), University of California. ICVES 2009.
- 3D Interaction using Hand Motion Tracking. Srinath Sridhar, Antti Oulasvirta. EIT ICT Labs Smart Spaces Summer School, June 2013.
- A Smart Home Design and Implementation Based on Kinect. Jin-wen Deng, Xue-jun Zhang. PCMM 2018.
- Content Based Image Retrieval Using Color Histogram. Nitin Jain, S. S. Salankar.
- Design and Implementation of the 3D Real-Time Monitoring Video System for the Smart Phone. IJCER, Vol. 06, Issue 11, November 2016.
- E90 Project Proposal. Paul Azunre, Thomas Murray, David Wright. 6 December 2006.
- Digitalisation as day-to-day-business. Jivka Ovtcharova. Institute for Information Management in Engineering.
- MSMS Software for VR Simulations of Neural Prostheses and Patient Training and Rehabilitation. Rahman Davoodi, Gerald E. Loeb. University of Southern California.
- ReVRSR: Remote Virtual Reality for Service Robots. Amel Hassan, Ahmed Ehab Gado, Faizan Muhammad. March 17, 2018.
- Evaluation of visual comfort for stereoscopic video based on region segmentation. Shigang Wang, Xiaoyu Wang, Yuanzhi Lv. 3rd International Conference on Multimedia Technology (ICMT 2013).
- Implementation of Augmented Reality System for Smartphone Advertisements. Young-geun Kim, Won-jung Kim. pp. 385-392, http://dx.doi.org/10.14257/ijmue.2014.9.2.39
- Automated Virtual Observation Therapy. Yin-Leng Theng, Owen Noel Newton Fernando. Nanyang Technological University.
- Short Course on Computational Illumination. Matthew Turk. University of Tampere, August 9-10, 2012.
- Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached. 17 February 2005.
- Vehicle License Plate Detection Algorithm Based on Statistical Characteristics in HSI Color Model. Prasanna Venkatesh Palani.
- Estimation of Folding Operations Using Silhouette Model. Yasuhiro Kinoshita, Toyohide Watanabe.
- Mixed Reality technology applied research on railway sector. Yong-Soo Song, Jong-Hyun Back. Korea Railroad Research Institute.
- Technology Request: Partner sought to develop a Free Viewpoint Video capture system for virtual and mixed reality applications.
- AR 2 kanoid: Augmented Reality ARkanoid. B. Smith, R. Gosine. C-CORE and Memorial University of Newfoundland.
- Exhibition Strategy of Digital 3D Data of Object in Archives using Digitally Mediated Technologies for High User Experience. Jaeho Ryu et al. pp. 150-156, http://dx.doi.org/10.14257/astl.2016.140.29
- VIEW: Visual Interactive Effective Worlds. Lorentz Center, 25-27 June 2007. Frederic Vexo.
- Vision-based User-interfaces for Pervasive Computing. CHI 2003 Tutorial Notes. Trevor Darrell, Vision Interface Group, MIT AI Lab.
- Immersive Real Acting Space with Gesture Tracking Sensors. Yoon-Seok Choi, Soonchul Jung, Jin-Sung Choi, Bon-Ki Koo, Won-Hyung Lee. pp. 1-6, http://dx.doi.org/10.14257/astl.2013.39.01
- Hand Gesture Recognition for Kinect v2 Sensor in the Near Distance Where Depth Data Are Not Provided. Min-Soo Kim, Choong Ho Lee. pp. 407-418, http://dx.doi.org/10.14257/ijseia.2016.10.12.34
- Face Detector using Network-based Services for a Remote Robot Application. Yong-Ho Seo. Mokwon University, Daejeon.

- Application Areas of AI. Week 2 lecture notes (expert systems, NLP, computer vision, speech recognition and generation, robotics, neural networks, virtual reality).
- Research on Hand Gesture Recognition Using Convolutional Neural Network. Tian Zhaoyang, Cheng Lee Lung. City University of Hong Kong.
- Non-Contact Gesture Recognition Using the Electric Field Disturbance for Smart Device Application. Young-Chul Kim, Chang-Hyub Moon. pp. 133-140, http://dx.doi.org/10.14257/ijmue.2014.9.2.13
- Development and Integration of Artificial Intelligence Technologies for Innovation Acceleration. Research supervisor: Minoru Etoh, Osaka University.
- Image Interpretation System for Informed Consent to Patients by Use of a Skeletal Tracking. Naoki Kamiya, Hiroki Osaki, Jun Kondo, Huayue Chen, Hiroshi Fujita.
- Anti-shaking Algorithm for the Mobile Phone Camera in Dim Light Conditions. Jong-Ho Lee, In-Yong Shin, Hyun-Goo Lee, Tae-Yoon Kim, Yo-Sung Ho. Gwangju Institute of Science and Technology (GIST).
- Intelligent Identification System Research. Zi-Min Wang, Bai-Qing He. MCEE 2016.
- Video Synthesis System for Monitoring Closed Sections. Taehyeong Kim, Bum-Jin Park. Korea Institute of Construction Technology.
- Motion Recognition in Wearable Sensor System Using an Ensemble Artificial Neuro-Molecular System. Si-Jung Ryu, Jong-Hwan Kim. KAIST.
- Image Processing and Particle Analysis for Road Traffic Detection. Aditya Kamath. Manipal Institute of Technology.
- Design and Development of a Marker-based Augmented Reality System using OpenCV and OpenGL. Yap Hwa Jen, Zahari Taha, Eng Tat Hong, Chew Jouh Yeong. Centre for Product Design and Manufacturing (CPDM).
- A Mathematical model for the determination of distance of an object in a 2D image. Deepu R, Murali S, Vikram Raju. Maharaja Institute of Technology Mysore.
- Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged. Advanced Robotics Solutions.
- Visual Finger Input Sensing Robot Motion. Vaibhav Shersande, Samrin Shaikh, Mohsin Kabli, Swapnil Kale, Ranjana Kedar. KJ College of Engineering.
- Service Robots in an Intelligent House. Jesus Savage. Bio-Robotics Laboratory, UNAM, 2017.
- Using Virtual Reality Simulation for Safe Human-Robot Interaction. Brad Armstrong, Dana Gronau, Pavel Ikonomov, Alamgir Choudhury, Betsy Aller. Western Michigan University.
- A Survey on Hand Gesture Recognition. U.K. Jaliya, Darshak Thakore, Deepali Kawdiya. B.V.M, Gujarat, India.
- Definitions of Ambient Intelligence. Fulvio Corno. Politecnico di Torino, 2017/2018.
- Face Registration Using Wearable Active Vision Systems for Augmented Memory. Takekazu Kato et al. DICTA2002, 21-22 January 2002, Melbourne, Australia.
- Number Plate Detection with a Multi-Convolutional Neural Network Approach with Optical Character Recognition for Mobile Devices. J Inf Process Syst, Vol. 12, No. 1, pp. 100-108, March 2016.
- Electronic Travel Aid Based on Consumer Depth Devices to Avoid Moving Objects. Contemporary Engineering Sciences, Vol. 9, 2016, No. 17, pp. 835-841.
- A Study on Smart Curriculum Utilizing Intelligent Robot Simulation. SeonYong Hong, YongHyun Hwang. Issues in Information Systems, Vol. 13, Issue 2, 2012.
- Army RDT&E Budget Item Justification (R2 Exhibit). PE 0602308A, Advanced Concepts and Simulation.
- Realization of Multi-User Tangible Non-Glasses Mixed Reality Space. Indian Journal of Science and Technology, Vol. 9(24), June 2016, DOI: 10.17485/ijst/2016/v9i24/96161.
- Convolutional Neural Network-based Steganalysis on Spatial Domain. Dong-Hyun Kim, Hae-Yeoun Lee.
- Image Recognition for PCB Soldering Platform Controlled by Embedded Microchip Based on Hopfield Neural Network. Chung-Chi Wu. Journal of Computers, Vol. 5, No. 9.
- Equipment body feeling maintenance teaching system Research Based on Kinect. Fushuan Wu et al. AMEII 2015.
- Interior Design using Augmented Reality Environment. Kalyani Pampattiwar, Akshay Adiyodi, Manasvini Agrahara, Pankaj Gamnani. SIES.
- Development of an Education System for Surface Mount Work of a Printed Circuit Board. H. Ishii, T. Kobayashi, H. Fujino, Y. Nishimura, H. Shimoda, H. Yoshikawa. Kyoto University.
- Development a File Transfer Application by Handover for 3D Video Communication System in Synchronized AR Space.

Development a File Transfer Application by Handover for 3D Video Communication System in Synchronized AR Space Development a File Transfer Application by Handover for 3D Video Communication System in Synchronized AR Space Yuki Fujibayashi and Hiroki Imamura Department of Information Systems Science, Graduate School

More information

Environmental control by remote eye tracking

Environmental control by remote eye tracking Loughborough University Institutional Repository Environmental control by remote eye tracking This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation: SHI,

More information