An Unreal Based Platform for Developing Intelligent Virtual Agents

N. AVRADINIS, S. VOSINAKIS, T. PANAYIOTOPOULOS, A. BELESIOTIS, I. GIANNAKAS, R. KOUTSIAMANIS, K. TILELIS
Knowledge Engineering Lab, Department of Informatics, University of Piraeus
80 Karaoli & Dimitriou str., Piraeus 18534, GREECE
avrad@unipi.gr, themisp@unipi.gr

Abstract: In this paper we present an approach for developing programmable intelligent virtual agents over the Unreal Engine. We propose techniques for manipulating, creating and modifying the Unreal Engine's actors, as well as a method for developing an additional external controller responsible for intelligent decision making, thus obtaining programmable agents.

Key-Words: Virtual Agents, Virtual Environments, Task Definition, Planning, Simulation, Programmable Agents

1 Introduction

Synthetic characters, together with virtual environments, can be used in dynamic simulations to increase a simulation's believability. Such agents should not only be believable but also capable of interacting with their environment and with other agents [1,2,4], and simple to program for a wide range of tasks. The virtual environment should support the agents' roles and should be able to adapt to different types of simulations [3].

Previous research in the Knowledge Engineering Laboratory has shown that it is possible to develop Intelligent Virtual Agents capable of exhibiting complex behaviours, whether over a network in a distributed architecture [1,10], over the web [3], or as a stand-alone application [5,7,11]. The Unreal Engine, in turn, has already been used for interactive storytelling [9].

In this paper we propose a framework for developing programmable agents that uses the Unreal Engine to embody the agents and visualize the environment. The Unreal Engine fulfils most of the prerequisites stated above: it is a freely available, up-to-date 3D graphics engine that provides character animation and basic interaction, and through the UnrealScript scripting language it can be parameterized and extended for real-time programmed scenarios.

This paper is structured as follows. Section 2 presents a generic IVA architecture. Section 3 gives an overview of the agent's architecture in Unreal and discusses the use of UnrealScript. Section 4 discusses possible solutions for developing programmable agents over Unreal and proposes our approach. Section 5 demonstrates an example of a dynamic simulation. Finally, Section 6 discusses conclusions and future work.

2 A Generic IVA Architecture

For our platform we adopted an IVA architecture developed by our Laboratory, already presented in [8] and shown in Fig. 1. An IVA has the following characteristics, which define its behaviour:
- high level beliefs,
- ground beliefs,
- basic emotions,
- object relations,
- geometry data,
- attributes,
- high level drives, and
- personality.

The agent's decision mechanism consists of three layers: the cognitive layer, the non-cognitive layer and the low level layer.

In the cognitive layer, the high level beliefs, basic emotions, object relations, attributes and high level drives determine the agent's causality and appraisal attribution, thus forming a decision; based on that decision, conscious actions are defined. The results of each step of this process are evaluated and in turn alter the agent's appraisal and causality attribution, as well as its characteristics.

In the non-cognitive layer, the ground beliefs, basic emotions, geometry data, attributes and personality invoke reactive actions: the appraisal and causality attribution stage is omitted, so the agent performs reflexive actions.

Fig. 1: Generic IVA Architecture

Lastly, in the low level layer, reactive actions are generated with no consideration of the IVA's characteristics; these are the agent's reflexes. All of the above actions update the agent's main characteristics. The action-taking mechanism is activated by sensory stimuli generated in the environment, and the actions that are performed in turn affect the environment.
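To make this layering more concrete, the following fragment is a minimal, purely illustrative UnrealScript-style sketch of how the three layers could be traversed for each stimulus. The class, variable and function names are our own and belong neither to the engine nor to the architecture of [8].

// Illustrative sketch only: names are ours, not engine or [8] classes.
class LayeredIVA extends Pawn;

var float Fear, Joy;     // toy stand-ins for the agent's basic emotions
var string LastBelief;   // toy stand-in for a ground or high level belief

// Each sensory stimulus is pushed through the layers in order of increasing
// deliberation; the first layer that handles it produces the action.
function HandleStimulus(string Stimulus)
{
    // Low level layer: pure reflexes, no characteristics considered.
    if (LowLevelReflex(Stimulus))
        return;
    // Non-cognitive layer: reactive actions; appraisal and causality
    // attribution are omitted.
    if (ReactiveAction(Stimulus))
        return;
    // Cognitive layer: appraisal and causality attribution form a decision
    // whose results feed back into the agent's characteristics.
    CognitiveDecision(Stimulus);
}

// Stubs standing in for the actual layer implementations.
function bool LowLevelReflex(string Stimulus) { return false; }
function bool ReactiveAction(string Stimulus) { return false; }
function CognitiveDecision(string Stimulus)   { Log("deliberating on" @ Stimulus); }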

3 Unreal Actors & Architecture

The Unreal Engine is a general purpose game engine created by Epic Games. It consists of a freely available runtime module that can be used to implement and visualize 3D virtual environments, together with a development platform for map design and script editing. The engine is fully controllable and extendable through script code written in a proprietary language, UnrealScript.

UnrealScript is an object-oriented programming (OOP) language with C++-like semantics and syntax. It offers the usual facilities of OOP languages, such as classes and inheritance, but also implements a simple state machine and basic event-driven programming. It is used to describe the characteristics, behaviour, interactions and appearance of the virtual environment and its components, giving developers a powerful means of manipulating the Unreal Engine.

We have tried to model the behaviour of Unreal Actors as a sense-decide-act loop realized through appropriate sensors and effectors. The real architecture is not known to us in detail, since the engine is undocumented apart from the open-source UnrealScript part and a few informal tutorials on the web; nevertheless, the proposed model appears to fit the engine's operation well and serves our research purposes. Fig. 2 depicts the architecture of the virtual agent's behavioural model. The following definitions clarify the terms used:
- An event or sensory stimulus is a cause for making a decision. Events are triggered by the actions of other agents or by the laws of the world.
- A sensor is the means through which the agent receives the messages of the environment.
- The state machine is a set of states together with the rules describing the transitions between them.
- Effectors are the means through which the actions decided by the agent are applied to the environment or to other agents.

The agent perceives the environment through the basic sensors and events provided by the engine, and the resulting messages are received by the agent's senses. For example, bumping into another agent creates an event that triggers a sensor, whereas a message notifying the agent of the position of a specific object inside its field of view is a visual stimulus. Once the messages have been received, a decision for an action is made depending on the current state: the state machine, together with the incoming messages, determines that decision. The resulting actions fall into four categories (a schematic code sketch is given after the list):
- state changing,
- attribute changing,
- sense activating, and
- performing.
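The fragment below sketches this sense-decide-act reading in UnrealScript. The class, state and variable names are ours; Bump, GotoState, Sleep and state labels are standard UnrealScript facilities, while the actual movement order is wrapped in a placeholder function because the exact latent movement calls differ between engine versions.

// Schematic sketch only: class, state and variable names are illustrative.
class SketchAgent extends Pawn;

var Actor CurrentTarget;        // an attribute; assigning it is an attribute-changing action

// Default state: bump events act as the agent's sensors while idling.
auto state Idling
{
    event Bump(Actor Other)     // sensory stimulus generated by the world
    {
        CurrentTarget = Other;  // attribute-changing action
        GotoState('Wandering'); // state-changing action
    }

Begin:
    Sleep(2.0);                 // latent wait: no stimulus, remain idle
    Goto('Begin');
}

state Wandering
{
Begin:
    PerformMove(CurrentTarget); // performing action
    Sleep(1.0);
    GotoState('Idling');
}

// Placeholder for the engine's real movement order (version dependent).
function PerformMove(Actor Target)
{
    if (Target != None)
        Log("moving toward" @ Target);
}

Under this reading, GotoState is a state-changing action, the assignment to CurrentTarget an attribute-changing one, and the wrapped movement call a performing action; sense activation would correspond to enabling an engine sense such as vision, which is omitted here.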

In Unreal, the low level layer and only a small part of the non-cognitive processes are implemented: there are no affective states, and high level decision making is absent.

Fig. 2: Unreal Agent's Architecture
Fig. 3: General purpose decision mechanism and the case of Unreal

The case of state changing is self-explanatory, for example the transition between the idling and wandering states. Actions can also alter an agent's attributes, which are best described as properties, for example its location. An agent can activate one or more senses, for example vision. Finally, an agent can undertake a performing action that causes a world event, which subsequently triggers the sensory mechanisms of other agents [4].

In general, the decision making processes of Intelligent Virtual Agents can be categorized as follows [8]:
- low level behaviour,
- non-cognitive affective decision making, and
- high level decision making.

Low level behaviour takes into account only the information originating from the sensors and produces reflexive reactions, whereas non-cognitive decision making also receives input from the agent's affective state and produces more intelligent reactions. High level decision making additionally takes the agent's beliefs into account, performs reasoning and planning, and in many cases implements a BDI (Belief-Desire-Intention) architecture.

4 Developing Programmable Agents

Our main goal is to create an agent whose behaviour can be programmed using AI techniques, i.e. a programmable intelligent virtual agent, following the example of [10,11].

4.1 An UnrealScript Oriented Approach

A first approach is to implement the agent in UnrealScript, either by modifying an already existing agent or by creating a new one from scratch. When modifying an agent, the developer must first understand the underlying code of the existing agent, then locate the code segments that need to change and modify them without breaking the consistency of the existing code (a minimal subclassing sketch is given after the list below). When creating a new agent, the developer must build a low level framework describing basic agent components such as animations, physics and texturing, and then determine the agent's functionality and proceed with the implementation.

Either solution would be straightforward were it not for:
- insufficient documentation,
- the combat-oriented behaviour of the existing agents,
- the increased complexity and size of the code,
- non-readable and non-modifiable code inside the engine,
- the lack of utility libraries (e.g. I/O, string manipulation), and
- the lack of direct connectivity with the high level languages used in AI programming (e.g. Prolog, Lisp).
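As a purely illustrative example of the modification route, the sketch below subclasses an existing agent class, overrides only the behaviour that changes and retunes an inherited attribute, leaving animation, physics and texturing to the parent. SketchAgent is the illustrative class from the sketch in Section 3, not an engine class.

// Illustrative only: the overridden behaviour is deliberately trivial.
class PeacefulAgent extends SketchAgent;

// Replace the parent's reaction with a non-combat-oriented one.
function PerformMove(Actor Target)
{
    if (Target != None)
        Log("keeping a distance from" @ Target);
}

// Inherited attributes can be retuned here instead of being rewritten.
defaultproperties
{
    GroundSpeed=150.0
}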

4.2 An External Controller Approach

The solution we propose is to implement the agent's behaviour in a language other than UnrealScript. The basic idea is to use the sensors and effectors of Unreal Actors, i.e. to exploit the Unreal environment for receiving messages and for visualizing the world, while the decision mechanism of the Unreal engine is bypassed and handed over to a new application, the external controller, inspired by the case of DIVA [1]. This application can be enhanced with a high level language such as Prolog, benefiting from its numerous advantages.

On the Unreal side, a framework that handles the agent and its sensory input is implemented. When a message from the environment is captured by the agent's sensors, the external application is notified; information about the type and the time-stamp, as well as the message itself, is passed over. The Unreal part is also capable of receiving messages from the external controller, which are translated into concrete actions.

The agent's actions are divided into two main categories, complex and elementary. A complex action consists of a series of other actions, whereas elementary actions, when combined, form a complex action. For example, moving to a specific position is an elementary action, whilst following a moving object is a complex action, since it can be broken down into several elementary actions, i.e. moving to a sequence of specific positions.

The behaviour of the agent is defined in the external controller. Based on the incoming messages, decisions supporting the agent's goals are made and actions are undertaken. These actions are in most cases high-level and complex, and must be decomposed into sets of elementary actions that are transmitted to Unreal and realized there. The external controller, in coordination with the modified Unreal engine, thus provides a framework for developing programmable agents whose behaviour can be shaped according to the concept of a given simulation.

Fig. 4: External-Controller Architecture

5 Illustrative Example

To demonstrate the proposed framework, a simulation has been created. The scenario of the simulation is the use of predefined ingredients found in a kitchen, as instructed by a recipe.

Fig. 6: Simulation, Agent Searching For Ingredients

The Unreal Engine displays the virtual environment (a kitchen) and the agent (the cook), handles the agent's senses, and stores information concerning the objects in use, such as their names. The agent senses its environment through its sensors; for example, it is capable of viewing the objects and their attributes. The information received by the agent's senses is then transmitted, in an appropriate format, to the external controller.
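The exact wire format between the two sides is not prescribed by the framework; as one possible realisation, the sketch below uses the engine's TcpLink class (part of the standard IpDrv package) to forward sensory messages and to receive elementary action commands. The class names, the type;timestamp;payload message layout and the command names are our own assumptions, and connection setup (resolving, binding and opening the socket) is omitted.

// Illustrative sketch of the Unreal-side bridge to the external controller.
// TcpLink, SendText and ReceivedText belong to the engine's IpDrv package;
// everything else (class names, message layout, commands) is assumed.
class ControllerLink extends TcpLink;

var SketchAgent Agent;   // the embodied agent driven by this link (sketch class)

event PostBeginPlay()
{
    LinkMode = MODE_Text;   // deliver whole text messages through ReceivedText
    // Resolving the controller's address and opening the socket is omitted.
}

// Forward a sensory message: its type, a time-stamp and the payload itself.
function ForwardStimulus(string StimulusType, string Payload)
{
    SendText(StimulusType $ ";" $ Level.TimeSeconds $ ";" $ Payload);
}

// Elementary actions arrive as single-line commands, e.g. "MOVETO;x,y,z".
event ReceivedText(string Text)
{
    if (Left(Text, 6) ~= "MOVETO")
        Log("elementary move requested:" @ Mid(Text, 7));
    else if (Left(Text, 4) ~= "PICK")
        Log("pick-up requested:" @ Mid(Text, 5));
}

On the controller side, a complex action such as following a moving object would then be decomposed into a stream of elementary MOVETO commands, one per intermediate position.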

The external controller functions as the brain of the agent. It holds information concerning the recipe, such as the ingredients used, their order and the way in which they are used, and according to the recipe it guides the agent through the cooking.

6 Conclusions & Future Work

We have presented a framework for the development of programmable agents and for their use in dynamic simulations over the Unreal Engine. We are currently working on applying the framework to a network-based, multiprocessing, distributed environment, thereby scaling up our architecture. In addition, we intend to develop more believable and intelligent agents and to employ them in more demanding scenarios [5], such as behavioural control in emergency situations or interactive storytelling [9].

References:
[1] S. Vosinakis, G. Anastassakis, T. Panayiotopoulos, DIVA: Distributed Intelligent Virtual Agents, Workshop on Intelligent Virtual Agents (Virtual Agents '99), The Centre for Virtual Environments, University of Salford, 1999, pp. 131-134.
[2] Michael Wooldridge, An Introduction to MultiAgent Systems, John Wiley and Sons Ltd, 2002.
[3] T. Panayiotopoulos, S. Vosinakis, J. Kalligatsis, K. Kabassi, Web-Based, Dynamic and Intelligent Simulation Systems, Intelligent Systems and Control International Conference (ISC 2000), Honolulu, Hawaii, USA, August 14-16, 2000, pp. 398-403.
[4] Gerhard Weiss, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, The MIT Press, 2000.
[5] V.S. Belessiotis, S. Vosinakis, T. Panayiotopoulos, The use of the Virtual Agent SimHuman in the ISM scenario system, in: Advances in Automation, Multimedia and Video Systems, and Modern Computer Science, V.V. Kluev, C.E. D'Attellis, N.E. Mastorakis (Eds.), Electrical and Computer Engineering Series, WSES Press, 2001, pp. 97-101.
[6] Spyros Vosinakis, Themis Panayiotopoulos, A Task Definition Language for Virtual Agents, Journal of WSCG, Vol. 11, No. 3, UNION Agency, 2003, pp. 512-519.
[7] Spyros Vosinakis, Themis Panayiotopoulos, Programmable Agent Perception in Intelligent Virtual Environments, Lecture Notes in Artificial Intelligence, Vol. 2792, Springer, 2003, pp. 202-206.
[8] Nikos Avradinis, Spyros Vosinakis, Themis Panayiotopoulos, Synthetic Characters with Emotional States, Lecture Notes in Artificial Intelligence, Vol. 3025, Springer, 2004, pp. 505-514.
[9] Nikos Avradinis, Themis Panayiotopoulos, Ruth Aylett, Continuous Planning for Virtual Environments, in: Intelligent Techniques for Planning, I. Vlachavas (Ed.), Idea Group Publishing, in press, 2004.
[10] George Anastassakis, Themis Panayiotopoulos, A System for Logic Based Intelligent Virtual Agents, International Journal on Artificial Intelligence Tools, in press, 2004.
[11] Spyros Vosinakis, Themis Panayiotopoulos, A Tool for Constructing 3D Environments with Virtual Agents, Multimedia Tools and Applications Journal, in press, 2004.