3D Human-Gesture Interface for Fighting Games Using Motion Recognition Sensor


Wireless Pers Commun (2016) 89:927–940
DOI 10.1007/s11277-016-3294-9

3D Human-Gesture Interface for Fighting Games Using Motion Recognition Sensor

Jongmin Kim (1) · Hoill Jung (2) · MyungA Kang (3) · Kyungyong Chung (4)

Published online: 19 April 2016
© Springer Science+Business Media New York 2016

Abstract  As augmented reality related technologies become commercialized in response to demand for 3D content, a usage pattern is developing whereby users consume the realism and immersion of 3D content. In this paper, we propose a 3D human-gesture interface for fighting games using a motion recognition sensor: gestures are recognized in the motion recognition sensor environment and applied to a fighting action game. Even when the same gesture is performed, the position coordinates of the skeleton measured by a motion recognition sensor can vary, depending on the length and direction of the arm. Rather than using absolute position information, we therefore extract the pattern characteristics of gestures by considering body-proportion characteristics around the shoulders. The motion characteristics of gestures are extracted using joint information obtained from the motion recognition sensor, and 3D human motion is modeled mathematically. Motion is modeled and analyzed effectively by expressing it in a space via principal component analysis and then matching new input against it with the 3D human-gesture interface. We also propose an advanced pattern matching algorithm as a way to reduce motion constraints in a motion recognition system. Finally, based on the results of motion recognition, an example application as the interface of a 3D fighting action game is presented. By obtaining high-quality 3D motion, the developed technology provides more realistic 3D content through real-time processing.

Keywords  3D human gesture · Motion recognition sensor · Tracking · Recognition

Corresponding author: Kyungyong Chung, kyungyong.chung@gmail.com
Jongmin Kim, mrjjoung@ccei.kr; Hoill Jung, hijung1982@gmail.com; MyungA Kang, makang@gwangju.ac.kr

1 Creative Economy Support Team, Jeonnam Center for Creative Economy and Innovation, 32, Deokchung 2-gil, Yeosu-si, Jeollanam-do 550-812, Korea
2 Intelligent System Laboratory, School of Computer Information Engineering, Sangji University, 83, Sangjidae-gil, Wonju-si, Gangwon-do 220-702, Korea
3 Division of Computer Information Engineering, GwangJu University, 277, Hyodeok-ro, Nam-gu, Gwangju, Korea
4 School of Computer Information Engineering, Sangji University, 83, Sangjidae-gil, Wonju-si, Gangwon-do 220-702, Korea

1 Introduction

Recently, owing to population growth, rising incomes, and growing expectations for services, studies on interfaces that provide natural interaction among people, equipment, and the environment have been actively conducted. The need for technology that tracks an object in real time in video, from games to movies, is increasing. To promote the visual industry, major countries are expanding investment in informatization, standardization, legal systems, manpower training, the creation of research and development foundations, and more. Humans convey information easily through language in everyday life, but in many situations we actually convey information by nonverbal means such as behavior or facial expression [1–3]. Motion recognition that maximizes user convenience is expanding into industries around the world and is expected to grow rapidly. It is also leading a new field of computing in the development of devices that accurately detect motion in games, augmented reality, location-based services, gesture interfaces, smart TV, health care [4–6], stability analysis [7], and telemedicine [8].

Motion recognition-based equipment includes games and wearable devices for controller-based motion, as well as single-camera and stereo-camera systems. Camera-based motion capture is categorized into Kinect sensor, single-camera tracking, and time-of-flight (TOF) methods, depending on the applied technology. Motion information estimated through these technologies provides realism for 3D content; in addition, when connected to augmented reality, interaction between the user and the object becomes possible.

With the recent increase in demand for 3D content, gesture recognition analyzes a person's 3D posture as it changes over time in order to track motion. That is, it estimates and tracks the human body from video and identifies body parts to determine what meaning is being conveyed. However, it is not easy to stably identify a body part and interpret the meaning of a gesture, because the human body is a 3D skeletal object with a high degree of freedom, whereas image recognition works from 2D images [9]. Also, every person has a different body size and wears a variety of clothes and caps, so it is not easy to extract gesture features of a 3D human body [8, 10]. To identify the human body easily and determine changes over time, the angles of the joints connecting each part of the body must be identified. Early gesture recognition research analyzed changes in the 3D posture of people wearing a special outfit or specific markers and then classified the patterns [3]. This approach is relatively fast and accurate, but if a marker is occluded, the motion cannot be determined, and the equipment and installation are costly. Studies on 3D posture estimation and motion tracking of a person have been conducted in many ways.
In particular, studies on specialized techniques for specific uses, such as health care and sports, are underway. Studies are being conducted on tracking techniques using image- or silhouette-based information, models for tracking, ontologies, and situational awareness. Markerless motion tracking systems have been commercialized in the fields of health care and sports, so motion can now be tracked at low cost. Through commercialization, such systems can be used to analyze the 3D postures of a person and to analyze sports injuries in athletes. They are useful in the production of movies or 3D animation, because 3D motion can be analyzed based on joint angles and expressed as numerical data [9]. The 3D position of each marker can be measured with multiple pre-calibrated cameras. If a part of the body is covered, making location estimation difficult, the locations of markers are specified by hand. This is a useful way to obtain accurate measurements, but it requires expensive equipment and specially designed studios [11, 12]. In addition, if it is hard to attach markers to the person being observed, the approach is essentially unusable.

The ideal method for human-gesture recognition is to determine meaning by analyzing only the camera's input image, as if people were naturally observing the target with their eyes, without attaching markers for motion recognition. Gesture recognition using cameras has been applied in a variety of systems and platforms, such as telemedicine health care [13, 14], emergency situation monitoring [15], robotic control [16], computer graphics and games [17], peer-to-peer context awareness [18], physical security, medical information services [8], and sign language recognition for the hearing impaired [19]. In particular, Microsoft's Kinect motion recognition sensor, which appeared in 2010, provides depth images in addition to traditional RGB images, so gesture recognition research using the motion recognition sensor and depth images is being conducted more actively [20]. The motion recognition sensor is composed of three lenses: it records video through the RGB lens and, at the same time, projects infrared rays pixel by pixel through an infrared lens. Depth images are created by a device that captures the infrared pixels projected onto the scene and recognizes distance by applying a depth calculation to the target. The emergence of such a sensor saves people the trouble of the body detection and pose estimation required for gesture recognition [15, 21, 22].

In previous studies, gestures were recognized by analyzing gesture information with a variety of sequential learning methods [12, 23–25]. In such studies, a fixed threshold model, a garbage model, or an adaptive threshold model was used to distinguish gestures from non-gestures (actions that occur between gestures but are not themselves gestures). The fixed threshold model has the drawback that it is not easy to determine the optimal threshold value. The garbage model has the problem that data cannot be collected and learned from, owing to the diversity of non-gestures. To overcome the shortcomings of these models, we propose an advanced pattern matching algorithm.

This paper is organized as follows. Section 2 describes the 3D human-gesture recognition system. Section 3 presents a 3D human-gesture interface for fighting games using a motion recognition sensor. Section 4 presents the experimental results of the proposed method, and Sect. 5 provides the conclusion.
2 3D Human-Gesture Virtual Interface

The proposed 3D human-gesture virtual interface extracts the location coordinates of the user's joints from the depth images of the motion recognition sensor and compares them with dynamic gestures determined in advance. The extracted feature points were modeled for the gestures defined by the virtual reality (VR) interface and were projected onto the space by principal component analysis.

Due to the nature of the space, visually similar gestures have a similar distribution in it. Thus, a gesture can be analyzed by matching it against visually similar model gestures in the space. The gesture information analyzed in this way is converted for use through the VR interface. Figure 1 shows an overview of the 3D human-gesture virtual interface system.

Fig. 1 Overview of the 3D human-gesture virtual interface system

Unlike 2D tracking, a great deal of computation and real-time processing technology is required, because the motion information of a 3D person must be tracked while the corresponding areas are found and extracted in real time. This is 3D motion information extraction and tracking technology that is efficient in a multiview camera environment. The 3D human motion tracking technology builds on a tracking and recognition surveillance system developed in previous research [2, 3, 11] and extends it to track motion with silhouette images [12]. Figure 2 shows an overview of 3D human-gesture tracking.

Fig. 2 Overview of 3D human-gesture tracking

To detect a person in the input image, the person area and the background image are separated. The area is detected, and then the whole outline is extracted. After a fragmentation process finds the image, detection is carried out by calculating its coordinates. Once detected, the image is tracked; if it is lost, it is restored quickly using Lucas–Kanade motion estimation [11]. Here, a difference-image technique is applied to remove the background image and noise.
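The following is a minimal sketch of this detect–track–restore loop using OpenCV on an ordinary camera stream; it is illustrative only, since the paper's actual system works on Kinect sensor data, and the threshold and feature-count values are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                     # any RGB camera stream
ok, first = cap.read()
background = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

prev_gray, prev_pts = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Difference-image technique: separate the person area from the background.
    diff = cv2.absdiff(gray, background)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Extract the whole outline and locate the detected area by its coordinates.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))

    if prev_pts is None or len(prev_pts) == 0:
        # Detection (or quick restore after a tracking loss): pick features
        # inside the foreground mask.
        prev_pts = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                           qualityLevel=0.01, minDistance=7,
                                           mask=mask)
    else:
        # Lucas-Kanade motion estimation tracks the detected image frame to frame.
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                       prev_pts, None)
        prev_pts = next_pts[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray
```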

3 3D Human-Gesture Interface for Fighting Games

3.1 Definition of the Gesture

Gesture recognition uses one or more cameras to obtain 3D motion and uses computer vision technology to recognize particular motions. For preprocessing the grayscale images used to recognize and detect images (including noise), log transformation, linear transformation, and an exponential function are used [2, 3, 11]. The gestures are those used in three-dimensional games, categorized into seven gestures. The defined gestures are difficult to distinguish, so they need to be modeled mathematically. In this paper, the 3D positions of both hands and both feet, together with the velocity vectors of the head and both feet, are used as features. These features form a 21-dimensional input vector, and analyzing it enables gestures to be distinguished. Figure 3 shows the feature space vectors for fighting games: Fig. 3a represents the relative position vectors Pv_LH, Pv_RH, Pv_LF, Pv_RF, and Fig. 3b the velocity vectors Vv_H, Vv_LF, Vv_RF.

Fig. 3 Feature space vector for fighting games
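A minimal sketch of this 21-dimensional feature vector follows, assuming joint positions arrive as 3D numpy arrays; the joint names and the shoulder-centered reference point are illustrative, not the Kinect SDK's identifiers.

```python
import numpy as np

def feature_vector(joints, prev_joints, dt):
    """Build the 21-dim feature space vector from one frame of joint data.

    joints, prev_joints: dicts mapping a joint name to a 3D numpy array.
    """
    # Relative position vectors Pv_LH, Pv_RH, Pv_LF, Pv_RF, taken around the
    # shoulders so the features reflect body proportion, not absolute position.
    origin = joints["shoulder_center"]
    pv = [joints[j] - origin
          for j in ("left_hand", "right_hand", "left_foot", "right_foot")]

    # Velocity vectors Vv_H, Vv_RF, Vv_LF for the head and both feet.
    vv = [(joints[j] - prev_joints[j]) / dt
          for j in ("head", "right_foot", "left_foot")]

    return np.concatenate(pv + vv)   # 7 vectors x 3 coordinates = 21 dims
```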

From the images, an area is first segmented based on color, and the area is detected by extracting an outline. Tracking starts after the image is detected using an outline vector value. Real-time images are detected by repeatedly applying the Lucas–Kanade method during tracking [11].

3.2 Feature Extraction

To extract pattern features for gesture recognition from the data acquired by the motion recognition sensor, it is necessary to extract features that reflect the nature of the gesture and to determine an appropriate input representation for the gesture model. To obtain the 3D posture of a person, the probability for each part of the body is calculated, the posture is estimated through optimization based on relative position information, and the final posture is determined by expanding the corresponding area of each body part to 3D [7, 8]. In this paper, information about the user's 20 joints was obtained from input gestures using the NUI Skeleton application programming interface (API) of the Kinect Software Development Kit (SDK) [26]. Figure 4 shows the gesture feature extraction information [17].

Fig. 4 Gesture feature extraction information

As shown in Fig. 4, the gesture features are 20 angles used as physical characteristic points [7, 8], obtained by calculating the X–Y and X–Z axis angles for the joint pairs {head, rib}, {rib, pelvis}, {right elbow, right hand}, {right shoulder, right elbow}, {hip, right knee}, {left elbow, left hand}, {right knee, right foot}, {left shoulder, left elbow}, and {hip, left knee}.
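A rough sketch of the angle computation follows, assuming each joint position is a 3D numpy array with axes (X, Y, Z); the joint names are illustrative and do not match the Kinect SDK enumeration.

```python
import numpy as np

# The joint pairs listed in Sect. 3.2 (names are illustrative).
JOINT_PAIRS = [
    ("head", "rib"), ("rib", "pelvis"),
    ("right_elbow", "right_hand"), ("right_shoulder", "right_elbow"),
    ("hip", "right_knee"), ("left_elbow", "left_hand"),
    ("right_knee", "right_foot"), ("left_shoulder", "left_elbow"),
    ("hip", "left_knee"),
]

def angle_features(joints):
    """Angles of each joint segment projected onto the X-Y and X-Z planes."""
    feats = []
    for a, b in JOINT_PAIRS:
        d = joints[b] - joints[a]             # segment between the two joints
        feats.append(np.arctan2(d[1], d[0]))  # X-Y plane angle
        feats.append(np.arctan2(d[2], d[0]))  # X-Z plane angle
    return np.array(feats)
```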

3.3 3D Human Gesture Modeling

The feature space vectors can be obtained for each frame. However, the input data are high-dimensional and do not share common features. For that reason, this study modeled 3D human gestures using principal component analysis [18, 27–29]. To calculate the space vector, it is necessary to obtain the mean space vector over all feature space vectors and then take the differences between each feature space vector and the mean. The mean vector C of the feature set can be expressed with Eq. (1), where Pv is a relative position vector and Vv a velocity vector in the feature space vector:

$$C = \frac{1}{N}\sum_{i=1}^{N}\left[Pv_{LH},\, Pv_{RH},\, Pv_{LF},\, Pv_{RF},\, Vv_{H},\, Vv_{RF},\, Vv_{LF}\right]_{i}^{T} \qquad (1)$$

After the space vectors are constructed, they are projected onto a low-dimensional vector space by Eq. (2), where M_i is the low-dimensional vector:

$$M_{i} = \left[e_{1}, e_{2}, e_{3}, \ldots, e_{k}\right]^{T}\left(\left[Pv_{LH},\, Pv_{RH},\, Pv_{LF},\, Pv_{RF},\, Vv_{H},\, Vv_{RF},\, Vv_{LF}\right]_{i}^{T} - C\right) \qquad (2)$$

Principal component analysis reduces the dimensionality and summarizes the data in the feature space vectors. It is therefore necessary to determine the number of principal components to retain over the entire object image. Figure 5 shows the cumulative contributions according to the eigenvalues: the five vectors with the largest eigenvalues make the greatest contribution to the space vector. Thus, this study reduces the dimensions to a five-dimensional space for 3D human-gesture analysis, drawing on previous research into 3D human-gesture modeling [23, 30].

Fig. 5 Cumulative contributions according to eigenvalues
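A minimal numpy sketch of Eqs. (1)–(2) follows: the mean space vector is computed, the top-k eigenvectors of the covariance of the centered feature space vectors are retained, and inputs are projected into the five-dimensional space. The function names are ours, not the paper's.

```python
import numpy as np

def pca_model(X, k=5):
    """X: (N, 21) matrix of feature space vectors.

    Returns the mean space vector C of Eq. (1), the matrix E whose columns
    are the eigenvectors e_1 ... e_k of Eq. (2), and the cumulative
    contribution ratios plotted in Fig. 5, which motivate k = 5.
    """
    C = X.mean(axis=0)                          # Eq. (1)
    cov = np.cov(X - C, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]
    contribution = np.cumsum(np.sort(eigvals)[::-1]) / eigvals.sum()
    return C, eigvecs[:, order], contribution

def project(x, C, E):
    """Project a feature space vector into the low-dimensional space, Eq. (2)."""
    return E.T @ (x - C)
```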

3.4 Advanced Pattern Matching Algorithm

Predefined gestures can be analyzed by modeling them in the space and determining new gestures from the 3D human gestures in the feature space [7, 8]. However, for a more natural interface, importance should be placed on the meaning of a gesture rather than on the numerical data of the same gesture. For example, for the defined right kick gesture, the position of the arms is not needed to distinguish the gesture; in other words, the arm positions can be excluded from 3D human-gesture recognition without harm [23]. In this study, we mitigated the restrictions on 3D human gestures by excluding unnecessary features and applying this to gesture recognition. Figure 6 shows the advanced pattern matching algorithm considering 3D human gestures.

Fig. 6 Advanced pattern matching algorithm considering 3D human-gestures

Table 1 shows the feature values related to the respective 3D human gestures. Each gesture was assumed for the input value, and irrelevant feature space values were replaced by the average values of the model, as in the sketch below.

Table 1 Feature values related to respective 3D human-gestures

Gesture            Related feature space vector
Left punch (LP)    Positions of both hands
Right punch (RP)   Positions of both hands
Left kick (LK)     Positions of both feet
Right kick (RK)    Positions of both feet
Running (R)        Positions of head, pelvis, feet
Sitting (S)        Positions of both feet, position of head
Flying kick (FK)   Positions of both feet, position of head
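The sketch below illustrates the matching step under the assumptions above: for each model gesture, input features that Table 1 marks as irrelevant are replaced with that model's averages before the Euclidean distance is taken in the projected space. The masks and helper names are illustrative, not the paper's.

```python
import numpy as np

def distance_to_model(x, model_mean, relevant_mask, C, E):
    """x: 21-dim input; relevant_mask: booleans derived from Table 1."""
    # Irrelevant feature space values are replaced by the model's averages,
    # so e.g. arm positions cannot disturb recognition of a right kick.
    x_adj = np.where(relevant_mask, x, model_mean)
    return np.linalg.norm(E.T @ (x_adj - C) - E.T @ (model_mean - C))

def recognize(x, models, C, E, threshold):
    """models: dict gesture name -> (mean feature vector, relevance mask)."""
    dists = {g: distance_to_model(x, mean, mask, C, E)
             for g, (mean, mask) in models.items()}
    best = min(dists, key=dists.get)
    # Reject non-gestures: output only if the closest model is within threshold.
    return best if dists[best] <= threshold else None
```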

Fig. 8 Distance results of model gesture: a left foot kick, b left punch

4 Experimental Results

The performance of the system with and without the proposed method was evaluated by measuring the similarity between input motion and the modeled motion. The user interface consists of RGB and real-time motion data, plus skeletal, depth, and 3D viewer properties for user convenience [12, 28]. The motion recognition sensor-based interface system was developed in a Windows 10 64-bit environment using Microsoft Visual Studio 2015 C++ on an Intel Core i7 processor at 4.0 GHz with 16 GB of RAM. Also used were the Kinect SDK v1.8, Kinect Runtime v1.8, the Kinect Developer Toolkit v1.8, OpenNI v2.2, NiTE v2.2, and the OpenGL API. The average value of the model motion before and after input modification was calculated for the seven motions (LP, RP, LK, RK, R, S, FK) [23, 30]. Among the modeled motions, the system outputs the one closest to the input motion, provided its distance is within a set threshold. Figure 7 shows the accuracy results obtained with and without the advanced pattern matching algorithm.

Fig. 7 Accuracy experimental results on advanced pattern matching algorithm

To demonstrate the accuracy of the proposed advanced pattern matching algorithm, examples of gestures that have the same meaning but different numerical values are presented in Fig. 8. The distance to the models in the space was compared before and after the input vector was changed. Figure 8a illustrates the result of a kick gesture performed with both hands open, after the kick gesture was modeled with the two hands together. As shown in Fig. 8a, the left foot kick gesture is closer to the model gesture than before the input vector change. Figure 8b shows a punch gesture executed without bending the knees, after the punch gesture was modeled with the knees slightly bent. As shown in Fig. 8b, the punch gesture is also closer to the model gesture than before the input vector change. As these results show, heuristic information was added to the input vector for recognition, so that the meaning of a gesture was taken into account. Considering the meaning of a gesture leads to recognizing more general gestures and reduces system operation constraints. The distance to the model gesture was reduced, though the reduction differed for each gesture. Even without the change, choosing the gesture with the closest distance value yields a correct result; nevertheless, taking the distance relative to the other modeled gestures produces more stable results.

Figure 9 shows the proposed gesture recognition system. The motion recognition sensor was first used to capture a gesture by an actor, and then 3D gesture information was extracted to analyze the gesture and perform recognition. The recognition result is connected to the interface module, which interprets it for the game interface. This paper defined the gestures used in TEKKEN 7 [31] and set the keyboard sequence for each gesture, as sketched below. The two modules were installed on different systems connected by a local area network to create a virtual interface between human and computer. In this environment, users can therefore experience VR while operating a game character. The proposed system made it possible to control gestures more accurately by removing uncertain gestures.

Fig. 9 Action game control applying the proposed gesture recognition algorithm
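The following is an illustrative sketch of the interface module's final step; the key bindings are placeholders, not the paper's actual TEKKEN 7 configuration, and the network transport is omitted.

```python
# Hypothetical mapping from a recognition result to a keyboard sequence.
GESTURE_KEYMAP = {
    "LP": ["z"],                # left punch
    "RP": ["x"],                # right punch
    "LK": ["c"],                # left kick
    "RK": ["v"],                # right kick
    "R":  ["right", "right"],   # running -> forward dash
    "S":  ["down"],             # sitting -> crouch
    "FK": ["up", "v"],          # flying kick -> jump + kick
}

def to_key_events(gesture):
    """Translate a recognized gesture into the key sequence sent over the LAN."""
    return GESTURE_KEYMAP.get(gesture, [])
```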

5 Conclusions

Motion recognition that maximizes user convenience is a technology with huge future potential that can create new added value. As augmented reality-related products have recently been commercialized, a pattern is developing in which users more often encounter and access 3D content. This paper described gesture recognition using motion information obtained from a motion recognition sensor. To express the 20 physical feature points of human motion numerically, five principal characteristics were defined, and we extracted motion information by calculating 3D information from these feature points. Principal component analysis was used to model these data, and an advanced pattern matching algorithm was used to obtain more reliable recognition results. In addition, motion detection and control functions for various smart devices were demonstrated by applying the proposed gesture recognition to an interface for 3D action games. The system used a Euclidean distance-based correlation coefficient and did not consider the distribution patterns of all motions, so uncommon motion might not be recognized in some cases; this can be addressed by considering the distribution pattern of the modeled motion. It was also found that 3D motion data contain many errors, which can be addressed by implementing an additional error correction process that is expected to give more stable results. This fundamental technology will be very helpful for the commercialization of related technologies and will create significant value because it has many applications.

Acknowledgments This study was conducted with research funds from Gwangju University in 2015.

References

1. Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 66–76.
2. Kang, S. K., Chung, K. Y., & Lee, J. H. (2014). Real-time tracking and recognition systems for interactive telemedicine health services. Wireless Personal Communications, 79(4), 2611–2626.
3. Kang, S. K., Chung, K. Y., & Lee, J. H. (2015). Ontology based inference system for adaptive object recognition. Multimedia Tools and Applications, 74(20), 8893–8905.
4. Jung, H., & Chung, K. (2015). Sequential pattern profiling based bio-detection for smart health service. Cluster Computing, 18(1), 209–219.

5. Jung, H., & Chung, K. (2016). Knowledge based dietary nutrition recommendation for obesity management. Information Technology and Management, 17(1), 29–42.
6. Jung, E. Y., Kim, J. H., Chung, K. Y., & Park, D. K. (2014). Mobile healthcare application with EMR interoperability for diabetes patients. Cluster Computing, 17(3), 871–880.
7. Kim, S. H., & Chung, K. Y. (2014). 3D simulator for stability analysis of finite slope causing plane activity. Multimedia Tools and Applications, 68(2), 455–463.
8. Kim, S. H., & Chung, K. Y. (2015). Medical information service system based on human 3D anatomical model. Multimedia Tools and Applications, 74(20), 8939–8950.
9. Pavlovic, V. I., Sharma, R., & Huang, T. (1997). Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 677–695.
10. Haritaoglu, I., Harwood, D., & Davis, L. S. (1998). W4: Who? When? Where? What? A real-time system for detecting and tracking people. In Third Face and Gesture Recognition Conference (pp. 222–227).
11. Kang, S. K., Chung, K. Y., & Lee, J. H. (2014). Development of head detection and tracking systems for visual surveillance. Personal and Ubiquitous Computing, 18(3), 515–522.
12. Kim, J. M., Chung, K., & Kang, M. A. (2016). Continuous gesture recognition using HLAC and low-dimensional space. Wireless Personal Communications, 86(1), 255–270.
13. Jo, S. M., & Chung, K. (2014). Design of access control system for telemedicine secure XML documents. Multimedia Tools and Applications, 74(7), 2257–2271.
14. Jung, H., & Chung, K. (2016). PHR based life health index mobile service using decision support model. Wireless Personal Communications, 86(1), 315–332.
15. Kim, S. H., & Chung, K. (2015). Emergency situation monitoring service using context motion tracking of chronic disease patients. Cluster Computing, 18(2), 747–759.
16. Petis, M., & Fukui, K. (2012). Both-hand gesture recognition based on KOMSM with volume subspaces for robot teleoperation. In Proceedings of the IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (pp. 191–196).
17. Kim, J. C., Jung, H., Kim, S. H., & Chung, K. (2016). Slope based intelligent 3D disaster simulation using physics engine. Wireless Personal Communications, 86(1), 183–199.
18. Jung, H., & Chung, K. (2016). P2P context awareness based sensibility design recommendation using color and bio-signal analysis. Peer-to-Peer Networking and Applications, 9(3), 546–557.
19. Li, Y. (2012). Multi-scenario gesture recognition using Kinect. In Proceedings of the International Conference on Computer Games (pp. 126–130).
20. Yun, H., Kim, K., Lee, J., & Lee, H. (2014). Development of experience dance game using Kinect motion capture. KIPS Transactions on Software and Data Engineering, 3(1), 49–56.
21. Oikonomidis, I., Kyriazis, N., & Argyros, A. A. (2011). Efficient model-based 3D tracking of hand articulations using Kinect. In British Machine Vision Conference.
22. Sung, J., Ponce, C., Selman, B., & Saxena, A. (2011). Human activity detection from RGBD images. In Proceedings of the International Workshop of the Association for the Advancement of Artificial Intelligence.
23. Kim, J. M. (2008). Three dimensional gesture recognition using PCA of stereo images and modified matching algorithm. In Proceedings of the International Conference on Fuzzy Systems and Knowledge Discovery (pp. 116–120).
24. Holden, E. J., Lee, G., & Owens, R. (2005). Australian sign language recognition. Machine Vision and Applications, 16(5), 312–320.
25. Nickel, K., & Stiefelhagen, R. (2004). Real-time person tracking and pointing gesture recognition for human-robot interaction. Computer Vision in Human-Computer Interaction, 3058, 28–38.
26. Microsoft Kinect SDK. http://www.microsoft.com/en-us/kinectforwindows
27. Murase, H., & Nayar, S. K. (1995). Visual learning and recognition of 3-D objects from appearance. International Journal of Computer Vision, 14, 5–24.
28. Chung, K., Kim, J. C., & Park, R. C. (2016). Knowledge-based health service considering user convenience using hybrid Wi-Fi P2P. Information Technology and Management, 17(1), 67–80.
29. Jung, E. Y., Kim, J. H., Chung, K. Y., & Park, D. K. (2013). Home health gateway based healthcare services through U-health platform. Wireless Personal Communications, 73(2), 207–218.
30. Kim, J. M., & Kang, M. A. (2011). Appearance-based object recognition using higher correlation feature information and PCA. In Proceedings of the International Conference on Fuzzy Systems and Knowledge Discovery (pp. 1874–1878).
31. TEKKEN 7. http://tk7.tekken-net.kr/

Jongmin Kim received a B.S. from Howon University in 2002, and M.S. and Ph.D. degrees from the Department of Computer Science and Statistics, Chosun University, Korea, in 2004 and 2008, respectively. He is currently a senior researcher in the Creative Economy Support Team, Jeonnam Center for Creative Economy & Innovation, Korea. His research interests include Pattern Recognition, Intelligent Computing and Neural Networks, Image Processing, and Mobile Application Services.

Hoill Jung received B.S. and M.S. degrees from the School of Computer Information Engineering, Sangji University, Korea, in 2010 and 2013, respectively. He worked for the Local Information Institute Corporation and is currently a doctoral student in the School of Computer Information Engineering, Sangji University, Korea, where he is a researcher at the Intelligent System Laboratory. His research interests include Medical Data Mining, Sensibility Engineering, Knowledge Systems, and Recommendation.

MyungA Kang received a B.S. from Gwangju University in 1992, and M.S. and Ph.D. degrees from the Department of Computer Statistics, Chosun University, Korea, in 1995 and 1999, respectively. She is currently a professor in the Division of Computer Information Engineering, Gwangju University, Korea. Her research interests include Pattern Recognition, Intelligent Computing and Neural Networks, Image Processing, and Mobile Application Services.

Kyungyong Chung received B.S., M.S., and Ph.D. degrees in 2000, 2002, and 2005, respectively, all from the Department of Computer Information Engineering, Inha University, Korea. He worked for the Software Technology Leading Department of the Korea IT Industry Promotion Agency (KIPA). He is currently a professor in the School of Computer Information Engineering, Sangji University, Korea. His research interests include Medical Data Mining, Healthcare, Knowledge Systems, HCI, and Recommendation.