The User Activity Reasoning Model Based on Context-Awareness in a Virtual Living Space


ASTL, ISSN 2287-1233, pp. 62-67, http://dx.doi.org/10.14257/astl.2015.86.13

Bokyoung Park, HyeonGyu Min, Green Bang and Ilju Ko
Department of Global Media, Soongsil University
{bokyoungp, norwoods, banglgreen, andy}@ssu.ac.kr

Abstract. Smart homes are intelligent integrated environments that provide residents with a variety of convenient automated services over wired or wireless networks, using home appliances, sensors, and systems in the living space as interfaces. Context-awareness technology is essential for recognizing an inhabitant's context in smart home research. This paper presents a simulator consisting of a virtual space, virtual sensors, and a virtual user for context-awareness research in a living space. The user data generated by the simulator is used to train an SVM classifier, producing a user activity reasoning model based on context-awareness in a virtual space.

Keywords: smart homes, context-awareness, SVM, simulation, user activity reasoning

1 Introduction

Smart homes have become an active research topic of continuing interest. They are intelligent integrated environments that provide residents with a variety of convenient automated services over wired or wireless networks, using home appliances, sensors, and systems in the living space as interfaces [1]. Context-awareness technology is essential for recognizing an inhabitant's activity and context information in smart home research. Context-awareness is the technology of recognizing human activity, voice, and environmental changes around people [2]; it makes it possible to identify what occupants need and to control the environment so that the necessary services are provided.

Context-awareness research in a living space requires large collections of data on activities of daily living. To obtain such data, researchers typically instrument an actual living space with devices and have users wear sensors. This experimental setup suffers from restrictions such as high cost, long experiment periods, and participant resistance, and the long experiment period also makes it difficult to keep the collected data consistent. To remove these constraints, researchers have turned to simulators that build a virtual space and generate data in a relatively short period.

Smart home research using simulation falls into two categories: implementing user activity recognition algorithms and validating services that use context-awareness [3]. Simulators for activity recognition research are based on an event-driven approach: activities invoke events in a virtual space, and these events are used together with sensor data to test activity recognition algorithms. SIMACT simulates a smart home with virtual sensors in a 3D engine and creates data for testing and studying activity recognition algorithms [4]. Persim creates realistic sensory data using 3D space, sensor, and smart-character modeling and generates scenarios automatically; it produces data in a virtual space in which a virtual character lives and actuates the sensors, and the resulting data is shown to be similar to data from the real world [3].

Simulators for context-awareness aim to validate predefined services and smart home functions before they are implemented in the real world. Fu et al. propose a context-awareness simulator for smart home systems based on XML and a rule-based system, which provides services in a virtual living space in response to changes in the user's location [5]. UbiREAL is a simulation application that automatically controls virtual devices in a 3D virtual space based on contexts. It also provides a GUI module that lets users place virtual devices in a 3D smart home, change the state of a virtual device, and visualize the route of an avatar, so that users can intuitively observe how the devices react to context information [6].

In this paper, we implement a simulator to collect the data needed to create a user activity reasoning model based on context-awareness. To this end, we define a virtual space in the simulator that resembles the real world, divide the whole living space into detailed areas, and place objects and sensors in them. We extract the typical activity types that can occur in a living space and use the data collected through the simulator to train the reasoning model.

The rest of this paper is organized as follows. Section 2 presents the user activity reasoning model based on context-awareness in a virtual living space. Section 3 describes the experimental environment using the simulation and the results. Section 4 draws the conclusions of this paper.

2 The user activity reasoning model

The user activity reasoning model presented in this paper consists of a simulator that collects a user's activity data and the activity reasoning model itself. The simulator comprises a virtual space and a contextual information collection module. The activity reasoning model is composed of feature selection and preprocessing, a learning module, and the reasoning model; the proposed model is illustrated in Fig. 1.

To generate the data for the reasoning model, we build a virtual space, place objects and sensors in the simulator, and then create a virtual character. In this space, activity and environmental information is collected through the user data collection module whenever the virtual character moves or uses objects. The data is used to train a classifier on the user's activities, and the activity classification results are used to create a model that infers and validates the user's activity.

Fig. 1. The user activity reasoning model
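As an illustration of what the contextual information collection module records, the following is a minimal Python sketch of a single context record. The field names are illustrative assumptions drawn from the contextual information described in the next paragraphs, not the authors' actual schema; in the simulator such records are saved to a database, so each one would map onto a single row.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ContextRecord:
    """One observation logged by a (hypothetical) user data collection module."""
    space: str             # e.g. "kitchen", "living room"
    area: str              # e.g. "specialized", "residence", "comfort"
    obj: str               # object the character interacts with, e.g. "refrigerator"
    object_in_use: bool    # whether the object is currently in use
    posture: str           # posture or behavior of the user, e.g. "standing"
    previous_activity: str
    start_time: datetime   # start time of the current activity
    duration_min: float    # duration of the current activity, in minutes
    is_holiday: bool       # weekday vs. holiday
    weather: str
    temperature_c: float
    activity: str          # composite activity label, e.g. "meal preparation"
```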

A living space consists of detailed spaces defined by the main purpose of each specific space. In each detailed space there are various objects associated with people's actions, and the space is subdivided according to the places those objects occupy [7]. In this research, we classify a typical one-room dwelling into five types of space based on the main purpose of each space: living room, kitchen, hallway, bathroom, and bedroom. Of the three characteristics used in [7], we divide the detailed space using the characteristics of goal and movement. Based on the goal of using the furniture and appliances, the space is divided into a specialized area, a residence area, and a comfort area; based on movement along flow lines in the space, it is divided into a static area and a moving area. These segmented areas can be added or changed as objects or spaces are added.

It is important to arrange sensors in the virtual living space so that they capture the context information of the user in each place and the status of the objects. We use camera, current, pressure, contact, light detection, and temperature and humidity sensors, designed to have the same functions as their real-world counterparts. As contextual information, we use the space and area in which the user performs the current activity, the objects used by the inhabitant, whether an object is in use, the posture or behavior of the user, the previous activity, the start time and duration of the current activity, weekday/holiday, weather, and temperature.

The typical behaviors in a living space are extracted from survey statistics on the time use of the whole population [8]. We use activity types such as meal preparation, eating, washing, putting on clothes, laundry, folding clothes, cleaning, watching TV, using a computer, going out, relaxing, and sleeping. A series of activities in the virtual living space is defined as a composite activity; each composite activity is made up of a set of basic activities and the environment information associated with the user's activity.

For the learning module, we use the SVM, introduced by Vapnik in 1995. The SVM is currently recognized as the classifier with the best generalization ability [9] and has shown the best performance in user activity recognition research compared with other classifiers [10]. To obtain an accurate activity reasoning model, a sufficient amount of data must be accumulated and fed to the learning module. Training the SVM classifier requires data represented as numeric vectors rather than unstructured data. Therefore, the context information created in the virtual space, which is represented as character-type or categorical data, is converted to numeric values, and any sensor data of variable length or non-numeric form is converted to numeric form as well. After this preprocessing, the classifier is trained and the activity classification results are obtained. The classifier is trained on a randomly selected subset of the data, and activity reasoning is validated on the remaining data. Once training and validation of the proposed model are complete, the model can infer a specific activity from space, user, and environment information.
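To make this step concrete, the sketch below shows one way the preprocessing and training could be implemented: categorical context attributes are one-hot encoded, numeric ones are scaled, and an SVM is trained on a random subset and evaluated on the remainder. It uses scikit-learn as a stand-in, and the file name, column names, kernel, regularization parameter, and split ratio are assumptions; the paper does not specify these choices.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical export of the simulator's database (one row per context record).
data = pd.read_csv("simulated_context_records.csv")

categorical = ["space", "area", "obj", "posture", "previous_activity",
               "is_holiday", "weather"]
numeric = ["start_hour", "duration_min", "temperature_c"]

X = data[categorical + numeric]
y = data["activity"]

# Character-type / categorical context data must be converted to numeric vectors
# before SVM training, as described above.
preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numeric),
])

model = Pipeline([("prep", preprocess),
                  ("svm", SVC(kernel="rbf", C=1.0))])

# Train on a randomly selected subset and validate reasoning on the remainder.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
model.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The 70/30 split mirrors the "random subset for training, remainder for validation" procedure without claiming the authors' exact ratio; in practice the kernel and C would be tuned on the training subset.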

3 Experiment and results

For the presented user activity reasoning model, the simulator, composed of a virtual space and a virtual character, is built with Unity3D. The simulator collects data on the user's activity, location, and sensor values as the character is manipulated.

Fig. 2. Simulation screen

Creating data through the simulator overcomes the limitations of real-world data collection while keeping the data consistent. By running the simulation repeatedly, a large amount of data can be generated at less cost and in less time than an actual experiment, which makes the resulting system more accurate. The simulator consists simply of a virtual space and a virtual character: within the virtual space, the user moves the player character with a keyboard and mouse while the character interacts with objects, and the related contextual information is saved in a database during the simulation. When the user starts a certain activity, the time parameter can be adjusted as shown in the simulation screen in Fig. 2, so that time in the virtual space can be shortened. Before starting a specific activity, we enter the values for activity type, previous activity, weather, temperature, and time; we then run the experiment, finalize the selected activity type, and terminate it, after which the experiment can continue with another activity. Data is created per activity type and consists of the object, the status of the object, the space, the time, and environment information. A record is created only when an event occurs or a sensor value changes.
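A minimal sketch of such change-driven logging is given below, with hypothetical sensor names and a plain in-memory log; the paper stores records in a database and does not describe this code.

```python
from typing import Any, Dict, List

def log_on_change(readings: Dict[str, Any],
                  last_state: Dict[str, Any],
                  log: List[Dict[str, Any]],
                  sim_time: float) -> None:
    """Append a record only for sensor values that changed since the last record."""
    changed = {name: value for name, value in readings.items()
               if last_state.get(name) != value}
    if changed:
        log.append({"time": sim_time, **changed})
        last_state.update(changed)

# Example: the second call logs nothing because no sensor value changed.
log: List[Dict[str, Any]] = []
state: Dict[str, Any] = {}
log_on_change({"sofa_pressure": 1, "tv_current": 0.4}, state, log, sim_time=600.0)
log_on_change({"sofa_pressure": 1, "tv_current": 0.4}, state, log, sim_time=601.0)
```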

Using the simulator, we generate the series of activity data that can occur in a living space during a day, varying the environmental factors between runs. For the experiment, we created 7 weeks of data for an adult with a typical daily pattern while supplying different environment information.

Table 1. Reasoning rate for activities

Activity                               Accuracy (%)
Meal preparation                       90.85
Eating                                 100.00
Washing (face, body, brushing teeth)   97.78
Putting on clothes (make-up etc.)      90.97
Laundry                                76.00
Folding clothes                        50.00
Cleaning                               97.90
Watching TV                            57.75
Using a computer                       69.09
Going out                              90.86
Relaxing                               75.23
Sleeping                               70.12

The activity data created by the simulator is used to train the classifier and then to reason about the user's activity. The reasoning accuracy is the probability of correctly inferring a given activity from the contextual information, i.e. the percentage of correctly inferred records out of all records generated for that activity type. The results of the experiment are summarized in Table 1. Activities that occur frequently in a home, such as meal preparation, eating, washing, putting on clothes, cleaning, and going out, show high accuracy. Activities with a low frequency of occurrence, such as laundry, folding clothes, and using a computer, and activities likely to occur in the same place, such as relaxing, watching TV, and sleeping, show relatively low accuracy.
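For reference, the per-activity figures in Table 1 (correctly inferred records divided by all generated records of that activity type) can be computed from predictions with a few lines. This is an illustration, not the authors' evaluation code, and it reuses the hypothetical model from the earlier training sketch.

```python
from collections import Counter

def per_activity_accuracy(y_true, y_pred):
    """Percentage of correctly inferred records per activity type."""
    totals = Counter(y_true)
    correct = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    return {activity: 100.0 * correct[activity] / n
            for activity, n in totals.items()}

# e.g. per_activity_accuracy(y_test, model.predict(X_test))
```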

Because the proposed model is based on a learning module, a new reasoning model can be created whenever new objects or activity types are added to the simulator or a new behavior pattern is to be applied; in that case, we simply regenerate the user data after reflecting the changes.

4 Conclusion

In this paper, we have presented a user activity reasoning model as a learning model based on context information, including the detailed spaces of a virtual living space and the composite behaviors that occur in those areas. We simulated a virtual character performing the typical activities that can occur in a living space and thereby created contextual data. The user's activity and environment information is used to train the classifier for each activity type, and the resulting activity classification is used in the user activity reasoning model. We expect the proposed model to be applicable not only to inferring the activity of a single inhabitant but also to context-awareness for multiple residents behaving, and interacting with each other, in the same place. In future work, we will evaluate the performance of user activity reasoning with additional data generated by the proposed simulator.

Acknowledgements. This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (No. 2012M3C4A7032783).

References

1. Ricquebourg, V., Menga, D., Durand, D., Marhic, B., Delahoche, L., Logé, C.: The smart home concept: our immediate future. In: 1st IEEE International Conference on E-Learning in Industrial Electronics, pp. 23--28. IEEE (2006)
2. Weiser, M.: Some computer science issues in ubiquitous computing. Communications of the ACM 36, 75--84 (1993)
3. Helal, A., Cho, K., Lee, W., Sung, Y., Lee, J.W., Kim, E.: 3D modeling and simulation of human activities in smart spaces. In: 9th International Conference on Ubiquitous Intelligence & Computing and 9th International Conference on Autonomic & Trusted Computing (UIC/ATC), pp. 112--119. IEEE (2012)
4. Bouchard, K., Ajroud, A., Bouchard, B., Bouzouane, A.: SIMACT: a 3D open source smart home simulator for activity recognition. In: Advances in Computer Science and Information Technology, pp. 524--533. Springer, Heidelberg (2010)
5. Fu, Q., Li, P., Chen, C., Qi, L., Lu, Y., Yu, C.: A configurable context-aware simulator for smart home systems. In: 6th International Conference on Pervasive Computing and Applications (ICPCA), pp. 39--44. IEEE (2011)
6. Nishikawa, H., Yamamoto, S., Tamai, M., Nishigaki, K., Kitani, T., Shibata, N., Yasumoto, K., Ito, M.: UbiREAL: realistic smartspace simulator for systematic testing. In: UbiComp 2006: Ubiquitous Computing, pp. 459--476. Springer, Heidelberg (2006)
7. Sung, B.K., Bang, G.R., Min, H.K., Lee, M.H., Ko, I.J.: Research of space model for context awareness based on user activity in shared living space. In: 2013 Workshop on Convergent and Smart Computing Systems, pp. 117--120 (2013)
8. Behavioral classification casebook: life time usage survey statistics of the whole population 2009. Statistics Korea, http://meta.narastat.kr/
9. Burges, C.J.: A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2, 121--167 (1998)
10. Cook, D.J., Krishnan, N.C., Rashidi, P.: Activity discovery and activity recognition: a new partnership. IEEE Transactions on Cybernetics 43, 820--828 (2013)