AGENT PLATFORM FOR ROBOT CONTROL IN REAL-TIME DYNAMIC ENVIRONMENTS

Nuno Sousa, Eugénio Oliveira
Faculdade de Engenharia da Universidade do Porto, Portugal

Abstract: This paper describes a platform that enables control of a real robot through a software Agent capable of learning, planning and navigating in a dynamic real-time environment. The Agent can be adapted to any mobile robot and creates a two-dimensional representation of the world that is used for navigation and path planning. The current implementation supports two environments: a simulated robot environment and a real-world environment with a four-legged walking robot. Our main contributions are the adaptation of BDI agents to real-time requirements and an algorithm that builds a two-dimensional representation of the environment.

Keywords: Agents, Algorithms, Control, Learning, Navigation, Path planning, Real-time, Robots, Sensor fusion

1. INTRODUCTION

Although concepts akin to today's robots go back to 450 B.C. (Wikipedia, 2006), robots still have not delivered on the promise of a companion that would help us in our daily lives. Today's robots are heavily used in industry, but they fall short of the initial vision of creating an artificial human. In fact, despite all the advances in Behavioural, Navigation and Path Planning techniques, commercial robots sold today do not provide basic capabilities like navigation and path planning without relying on devices installed in their operating environment. This limitation prevents robots from being used effectively in dynamic real-time environments like our own homes.

In recent years we have seen the beginnings of an industry that develops what are called Entertainment Robots. Although these robots target the average person, they have been adopted by universities worldwide as a hardware platform on which new techniques can be investigated and tested. These robots are real-world implementations of solutions to many of the problems faced when creating a robot, but the problems solved are mainly physical. Solutions to high-level problems like path planning are not provided in these commercial products; in fact, most research projects that use these robots rely on artificial landmarks for navigation and path planning.

A platform is proposed that can turn a robot into an Agent that not only has Navigation and Path Planning capabilities but can also create a representation of the environment where it is inserted, using only the sensors available on the robot. This platform builds upon existing Artificial Intelligence concepts and techniques for Navigation and Path Planning but uses a novel technique to build a model of the environment.

2. THE PLATFORM

The platform is described incrementally. First the chosen Software Architecture is presented, followed by the technique used to abstract the robot's hardware. The adaptations made to the Belief, Desire, Intention Agent Architecture so that it can be used in a real-time environment are presented next, continuing with the technique used to build the Agent's Model of the World and ending with the Navigation and Path Planning techniques.

2.1 Service Oriented Architecture

The proposed platform uses a Service Oriented Architecture. This type of architecture has been highly valued in the recent past and has a simple key concept: functionality should be split into several Services that work as a team. There will be services that provide functionality to others, services that build on other services and even services that collaborate. The main purpose of this division is to ensure service isolation and clearly defined interfaces between the services. Isolation makes it possible to distribute the services across several systems. A clearly defined interface makes it possible to swap the implementation of a service without any impact on the services that use it.

A Service Oriented Architecture makes it possible to swap out services in order to improve the platform or simply to support a new type of robot. It also makes it possible and easy to test each service individually. This is important because one of the goals of the platform is to be adaptable to any robot. To this end the platform provides a mechanism to develop and test the interface with a new robot without any other service running that might either interfere with the tests or simply cause entropy while the interface with the hardware is being developed.

To simplify the process of developing the interface with a new robot, the platform provides a Toy Service (see Fig. 1). This service depends exclusively on the service that provides the interface with the robot's hardware and turns the robot into a simple toy, that is, a robot that: 1. moves forward; 2. stops moving forward and starts turning when it gets close to a wall; and 3. resumes its forward motion when no walls are in its way.

Fig. 1. The platform's Services and their dependencies

2.2 Abstract Robot Interface

The platform defines an Abstract Interface to the robot's hardware in order to be adaptable to any robot (see Fig. 2). This interface is composed of a reduced set of objects with clearly defined behaviour. The Unified Modelling Language is used to specify precisely what the implementations of this interface for the different supported robots must provide. In essence, this abstraction turns the robot into a service that provides: 1. a high-level representation of the robot, 2. the measures obtained by the robot's sensors and 3. a controller for the robot's motion.

The high-level representation turns the robot into a series of virtual objects attached to each other. Three types of objects are defined in this representation: the Robot's Body, its Parts and its Sensors. The Robot's Body represents the main body of the robot, that is, the main physical volume that needs to be considered when the robot is in motion. Robot Parts represent the movable physical devices on the robot's body that can be used by the agent. Sensors represent the devices attached to the robot's body or parts that provide the inputs the Agent uses to update its internal state.
The second functionality provided by the Abstract Robot Interface concerns gathering and distributing sensor information. The interface defines a listening mechanism that makes it possible for any object to be notified as soon as a sensor provides updated information. This mechanism is asynchronous for greater flexibility and performance.

Finally, the Abstract Interface specifies that the robot must have a Motion Controller. The Motion Controller abstracts the low-level details that must be addressed to make the robot move. For instance, if the robot is a four-legged robot, it is the responsibility of the Motion Controller to control all legs in order to produce the requested motion. The motion obtained may not be exactly what was requested due to physical constraints; this is where the Navigation and Path Planning techniques come into play. The Motion Controller abstraction has a secondary goal: to remove the need to abstract the parts of the robot that are used only for locomotion.
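To make the abstraction concrete, the following C++ sketch shows one possible shape for the Abstract Robot Interface and for the Toy Service built on top of it. All identifiers (SensorReading, SensorListener, Part, MotionController, ToyService) and the numeric thresholds are invented for illustration; the paper specifies the interface in UML, and Fig. 2 shows the version provided for the AIBO.

```cpp
#include <string>
#include <vector>

// A sensor reading pushed asynchronously to registered listeners.
struct SensorReading {
    std::string sensor_id;
    double value;  // e.g. distance in metres for a range sensor
};

// Listener mechanism: any object can ask to be notified as soon as a
// sensor provides updated information.
class SensorListener {
public:
    virtual ~SensorListener() = default;
    virtual void onSensorUpdate(const SensorReading& reading) = 0;
};

// A sensor attached to the robot's body or to one of its parts.
class Sensor {
public:
    virtual ~Sensor() = default;
    void addListener(SensorListener* l) { listeners_.push_back(l); }
protected:
    // Implementations call this when the hardware reports a new measure.
    void notify(const SensorReading& r) {
        for (SensorListener* l : listeners_) l->onSensorUpdate(r);
    }
private:
    std::vector<SensorListener*> listeners_;
};

// A movable physical device on the robot's body (e.g. the AIBO's head).
class Part {
public:
    virtual ~Part() = default;
    virtual void rotateTo(double angle_rad) = 0;
};

// Abstracts low-level locomotion: for a legged robot the implementation
// coordinates all legs to produce the requested motion.
class MotionController {
public:
    virtual ~MotionController() = default;
    virtual void move(double forward_speed, double turn_rate) = 0;
    virtual void stop() = 0;
};

// The Toy Service of Section 2.1, built only on the robot interface:
// it moves forward, turns when a wall is near and resumes when clear.
class ToyService : public SensorListener {
public:
    explicit ToyService(MotionController& mc) : mc_(mc) {}
    void onSensorUpdate(const SensorReading& r) override {
        if (r.sensor_id != "front_range") return;
        if (r.value < 0.3) mc_.move(0.0, 0.5);  // wall ahead: turn in place
        else               mc_.move(0.2, 0.0);  // way clear: move forward
    }
private:
    MotionController& mc_;
};
```

Because ToyService depends only on the abstract types, the same behaviour runs unchanged against a simulated robot or a real one, which is exactly the isolation the Service Oriented Architecture is meant to buy.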

Fig. 2. Abstract Robot Interface as provided by the platform for Sony's AIBO robot

Fig. 3. Belief, Desire, Intention Architecture adapted for a real-time environment

Taking the four-legged robot example once more, if there are no activities that involve controlling the robot's legs directly, there is no need to represent them as a set of virtual objects in the robot's interface. From the platform's point of view what matters are the robot's motion capabilities, not how that motion is achieved.

2.3 Real-Time Belief, Desire, Intention Agent

The Agent Architecture chosen for the platform is the Belief, Desire, Intention Architecture (Bratman, 1987). This architecture defines an agent that has a strong notion of Beliefs, Desires and Intentions coupled with functions that: 1. generate and review the Beliefs, 2. generate the Desires from the Beliefs and Intentions, 3. filter the Desires in order to keep a consistent Intention Database and 4. generate Actions that are executed to satisfy the Intentions in the Database.

From the pure definition of the Belief, Desire, Intention Agent Architecture it appears that the system should run continuously in a gather-perception, compute, execute-action cycle. This approach does not fit a real-time environment for several reasons. First, the robot's motors take time to move a part to the position requested by an action. Second, during this time the sensors will provide new measures that should be used to review the Beliefs of the Agent. Third, the measures provided by the sensors could indicate that the Action should be aborted. All these facts point to the need to adapt the architecture to a real-time environment.

Another proposed adaptation of the Belief, Desire, Intention Agent Architecture is to fit it to an Object Oriented world. The first motive for this adaptation is that the platform uses a Service Oriented Architecture. The second motive is that Object Oriented concepts provide what is needed to develop complex systems built upon basic modules. The final motive is to lower the platform's entry barrier to the knowledge an average Software Developer should have.

The proposed adaptation starts by merging the notions of Desire and Intention into a single high-level notion of Intention (see Fig. 3). Real-time systems must carefully manage their resources, and it does not make sense to generate Desires that will be filtered out in the same computation cycle. Merging the notions opens up the possibility of merging the functions responsible for generating and filtering Desires. Using Object Oriented techniques it is possible to couple the Desire generation and filtering functions with the Intention. For instance, the Map the World Intention is a high-level intention whose desire generation function produces other, lower-level intentions. These could range from the intention to plan a trip to an unknown part of the environment to the intention to rotate a part of the robot that has a distance sensor attached so as to produce a Radar Scan of the environment.

Each intention is able to generate a set of Actions. Actions, too, are adapted to be more than a simple output applied to the environment. An Action is a low-level goal that can be translated into commands for the robot or code that needs to be executed by the platform. In the platform, Actions are first-class objects in the sense that they consume the resources available during their execution.
These resources include CPU power, access to the robot's Motion Controller, monitoring of sensors and control of the robot's movable parts. Action objects are capable of sending commands to the robot and of monitoring the outcome of those commands in real time. The monitoring of an Action's execution has two main goals: to verify whether the Action was successfully executed and to allow aborting or preventing its execution. Actions need this mechanism because the robot operates in a dynamic real-time environment: an Action that would move the robot forward a certain distance has to be aborted if a person has just moved in front of the robot.
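A minimal sketch of an Action as a first-class object follows, reusing the SensorReading and MotionController types from the sketch in Section 2.2. The class names, the resource-naming scheme and the 0.25 m safety threshold are assumptions made for this example, not the platform's actual API.

```cpp
#include <string>
#include <vector>

enum class ActionStatus { Pending, Running, Succeeded, Aborted };

// An Action has a priority, declares the resources it needs, sends
// commands to the robot and monitors their outcome in real time so
// it can abort itself.
class Action {
public:
    explicit Action(int priority) : priority_(priority) {}
    virtual ~Action() = default;

    int priority() const { return priority_; }
    ActionStatus status() const { return status_; }

    // Resources (Motion Controller, movable Parts) this Action must be
    // assigned before it can run.
    virtual std::vector<std::string> requiredResources() const = 0;

    virtual void start() = 0;                          // initial commands
    virtual void monitor(const SensorReading& r) = 0;  // real-time checks

protected:
    // Reporting an end state lets the agent reclaim the resources and
    // run a new Intention-generating cycle.
    void finish(ActionStatus s) { status_ = s; }

private:
    int priority_;
    ActionStatus status_ = ActionStatus::Pending;
};

// Example: move forward, aborting if something (e.g. a person) appears
// closer than a safety threshold while the robot is moving.
class MoveForward : public Action {
public:
    explicit MoveForward(MotionController& mc) : Action(1), mc_(mc) {}

    std::vector<std::string> requiredResources() const override {
        return {"motion_controller"};
    }
    void start() override { mc_.move(0.2, 0.0); }
    void monitor(const SensorReading& r) override {
        if (r.sensor_id == "front_range" && r.value < 0.25) {
            mc_.stop();
            finish(ActionStatus::Aborted);  // obstacle moved in front
        }
    }
private:
    MotionController& mc_;
};
```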

The platform manages access to the robot's hardware resources through a Resource Controller. The robot's Motion Controller and all its movable Parts are considered resources that are assigned to Actions during their execution. Until an Action reports that its execution has ended, successfully or not, its resources cannot be assigned to any other Action. The order in which resources are assigned to Actions is decided by the priority given to each Action when it is created.

When an Action terminates, the agent runs a new Intention-generating cycle. The agent's main computation cycle is thus: 1. initialise the Intention Database with the initial Intentions, 2. generate new Intentions given the current Beliefs, 3. generate Actions from the Intentions, 4. serve the Actions and jump to step 2 when an Action reports that its execution has finished.

Since more than one Action can be executing at a given time, the main computation cycle may run while some Actions have not yet executed or been given any resources. Actions keep a reference to the Intention that created them. When a new computation cycle is performed, Actions that still have not been executed are removed from the Action Database. The set of Actions generated by the intentions is then filtered to exclude any Action that is currently being executed; the remaining elements are added to the Action Database.

2.4 Building a World Model

The platform builds a two-dimensional model of the world. Although there is published work on solving the Navigation and Path Planning problem using a two-dimensional representation of the world, we could find no up-to-date work covering the problem of building such a two-dimensional model. (Arkin, 1989) proposed that a world represented in two dimensions could be split into convex polygons. These polygons become the nodes of a graph, and the distances between their centroids become the weights of the graph's connections. Path Planning uses this graph for its activities. For Navigation, the robot uses the walls of the world to correct the estimate of its position using its sonars.

For the platform, the proposed solution is an algorithm that creates a two-dimensional representation of the world dynamically. To achieve this, the notion of a World Cell is introduced. A World Cell behaves just like a convex polygon from a Navigation and Path Planning point of view, but it may not be a fully known convex polygon. This is necessary in order to represent parts of the environment that are unknown to the agent. The edges of a World Cell are called Borders and can have three different types. Walls represent borders that cannot be crossed by the robot but that can be used for Navigation. Frontiers represent borders that define the known limits of the environment. Finally, Passages are borders of the cell that lead to other cells in the model. For Path Planning, World Cells are mapped to nodes of a graph, Passages define the connections between the nodes, and the weight of each connection is the distance between the centroids of the Cells (a sketch of these structures follows below).

Fig. 4. A World Model. Two-dimensional space is represented by a set of convex Cells

The World Model is initialised with a single cell composed of Frontiers that represent the robot's physical limits, and with the ambitious intention to Map the World.
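The following sketch shows one plausible encoding of these structures, again with invented names. The centroid is approximated by averaging the border endpoints, which is sufficient for illustrating the graph weights.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

enum class BorderType { Wall, Frontier, Passage };

// An edge of a World Cell. For a Passage, `neighbour` indexes the
// adjacent cell in the model.
struct Border {
    Point a, b;
    BorderType type;
    std::size_t neighbour = 0;
};

// A World Cell: a (possibly not fully known) convex polygon.
struct Cell {
    std::vector<Border> borders;  // ordered edges

    // Approximate centroid (average of edge start points; assumes a
    // non-empty, ordered polygon).
    Point centroid() const {
        Point c{0.0, 0.0};
        for (const Border& e : borders) { c.x += e.a.x; c.y += e.a.y; }
        c.x /= borders.size();
        c.y /= borders.size();
        return c;
    }
};

// Weight of a graph connection between two cells joined by a Passage:
// the distance between their centroids.
double edgeWeight(const Cell& from, const Cell& to) {
    Point p = from.centroid(), q = to.centroid();
    return std::hypot(p.x - q.x, p.y - q.y);
}
```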
This intention generates other intentions in order to achieve its goal. If the current Cell has a Border that is a Frontier within the range of a distance sensor, an intention is produced to generate a Scan of the World using that sensor. If the current Cell has a Border that is a Frontier out of sensor range, an intention is created to move the robot to a position where that border is within the range of a distance sensor. If the cell has no Border that is a Frontier, an intention is generated to go to the nearest cell that has a Border that is a Frontier. These three basic intentions are sufficient to make the robot curious enough to continuously map the environment (a sketch of these rules follows below).
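The three rules can be sketched as a selection function, building on the Cell and Border types from Section 2.4. The decision order, the distance test against border endpoints and the frontier_elsewhere flag are simplifying assumptions; the paper realises these rules as BDI intentions rather than as a single function.

```cpp
#include <algorithm>
#include <cmath>

enum class MapIntention { ScanFrontier, MoveTowardsFrontier,
                          GoToFrontierCell, Done };

// Chooses the next mapping intention for the current cell.
// frontier_elsewhere: whether any other cell still has a Frontier border.
MapIntention nextMapIntention(const Cell& current, const Point& robot,
                              double sensor_range, bool frontier_elsewhere) {
    bool has_frontier = false;
    bool in_range = false;
    for (const Border& e : current.borders) {
        if (e.type != BorderType::Frontier) continue;
        has_frontier = true;
        // Distance from the robot to the nearest endpoint of the border
        // (a simplification; point-to-segment distance would be tighter).
        double da = std::hypot(e.a.x - robot.x, e.a.y - robot.y);
        double db = std::hypot(e.b.x - robot.x, e.b.y - robot.y);
        if (std::min(da, db) <= sensor_range) in_range = true;
    }
    if (has_frontier && in_range) return MapIntention::ScanFrontier;        // rule 1
    if (has_frontier)             return MapIntention::MoveTowardsFrontier; // rule 2
    if (frontier_elsewhere)       return MapIntention::GoToFrontierCell;    // rule 3
    return MapIntention::Done;  // every border everywhere is known
}
```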

To update the borders of a World Cell the platform uses Scans created by distance sensors attached to movable parts of the robot. A Scan is a two-dimensional area represented by a polygon. The borders of the polygon that makes up the Scan have two possible types: Frontier and Wall.

Fig. 5. A World Scan produced by a distance sensor attached to a movable part of the robot. Points used in the representation are marked with letters.

Fig. 6. A Scan being added to a Cell that will originate a cell split procedure. The result is a passage between the two cells.

Frontiers are used when the operational limits of the distance sensor have been reached. Walls are used when the distance sensor reports a distance within its operational limits. The Scan is simplified by the agent; this simplification consists of removing points that are collinear.

A Scan is added to a World Cell using two-dimensional geometry procedures adapted to take into consideration the type of each border. For instance, when the border of a Scan crosses a border of the cell that is a Frontier, the borders of the Scan are added to the cell until an entry point is found. If the intersecting border of the cell is a Passage, the borders of the Scan that fall outside the cell are used to create a new Scan that is passed to the neighbouring cell for processing. The neighbouring cell processes this Scan like any other Scan produced by the robot's sensors.

The Agent splits a World Cell when its borders that are Walls or Passages do not form a convex polygon. While the polygon is being analysed, borders that are Frontiers are turned into Virtual Passages for the purposes of the convexity analysis. A Virtual Passage is a border created solely for the convexity analysis that connects the latest border that was not a Frontier to the next border that is not a Frontier.

2.5 Navigation and Path Planning

For Path Planning the platform uses Anytime Dynamic A*, also known as AD*, presented by (Likhachev, 2005). AD* is a graph-based planning and replanning algorithm that can revise its previous solutions when the underlying graph changes. The platform uses this capability to update the planned path when the World Model changes, that is, when a Cell is updated due to a newly found obstacle.

For Navigation the platform uses the techniques defined by (Arkin, 1989) with a new enhancement: the use of Dynamic Landmarks. The platform defines Dynamic Landmarks as points in the World Model that are corners. Corners are defined as points shared between two borders that are Walls (see Fig. 7).

Fig. 7. Dynamic Landmarks created by the platform for the world model presented in figure 4.

The platform keeps track of the robot's current position. This position is subject to error when the robot starts moving, and the error is especially significant if the robot is a walking robot. The Dynamic Landmarks are used to correct the platform's estimate of the robot's position. (Arkin, 1989) proposed the use of rectilinear walls to correct the robot's position in the model. This correction technique is used by the platform, but the platform will prefer landmarks whenever they are available.
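Two of the geometric operations described above are small enough to sketch directly: the collinearity test used to simplify Scans, and the extraction of Dynamic Landmarks as corners shared by two Wall borders. Both reuse the types from Section 2.4; the epsilon value and the assumption that a cell's borders are stored in order (so cur.b coincides with nxt.a) are illustrative choices, not the paper's.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Scan simplification: a point q can be dropped when it is collinear
// with its neighbours p and r (cross product of pq and pr is ~zero).
bool collinear(const Point& p, const Point& q, const Point& r) {
    double cross = (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
    return std::fabs(cross) < 1e-9;  // epsilon chosen arbitrarily
}

// Dynamic Landmarks: corners are points shared by two consecutive
// borders that are both Walls.
std::vector<Point> dynamicLandmarks(const Cell& cell) {
    std::vector<Point> corners;
    const std::size_t n = cell.borders.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Border& cur = cell.borders[i];
        const Border& nxt = cell.borders[(i + 1) % n];
        if (cur.type == BorderType::Wall && nxt.type == BorderType::Wall)
            corners.push_back(cur.b);  // shared endpoint of the two walls
    }
    return corners;
}
```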

3. IMPLEMENTATION

The platform provides a Graphical User Interface to support inspection of the Agent's internal structures. This interface displays the state of the abstract robot interface, a two-dimensional representation of the World Model and a view of the robot's Intentions, Actions and resources in use.

Fig. 8. Screen shot of the World Model display provided by the platform

For the Abstract Robot Interface, a graphical display draws the robot's abstract representation and the state of its parts. When the robot interface reports that a part has moved, the display is updated to show the state reported by the interface.

The World Model display shows the World Cells, the robot and the last processed Scan. The World Cells are colour coded: known areas are shown in white and unknown areas in gray. The borders of the cells are also colour coded: Walls are shown as red lines, Frontiers in blue and Passages in green. The borders of the last processed Scan follow the same colour coding as the World Cell borders but are drawn as dashed lines. The state of the robot is also displayed, including the state of its movable parts.

3.1 Simulated Environment

A simulated environment was also built for the platform. Its main goal is to support building and testing the agent in a simulated environment before testing it in the real environment. This makes it possible to remove the main problems from the agent's implementation before testing it with the real robot, and should be sufficient to distinguish problems that are purely software related from problems that concern adapting to a specific robot.

The support for a Simulated Environment can also be used to build agents that exist only in purely simulated environments. This can be used to develop agents for robots that have not yet been implemented as physical devices, a capability useful to teams that develop their control software and their robot's hardware simultaneously.

3.2 Sony's AIBO in the Laboratory

To test and demonstrate the platform, a robot interface was implemented for Sony's AIBO four-legged robot. (Tira-Thompson, 2004) proposed a framework for rapid development of robot software called Tekkotsu. Since then the framework has been continuously developed by a community of Carnegie Mellon University researchers. Since Tekkotsu has an open-source license, others can not only use it but also enhance it and give back to the community.

The platform's implementation of the interface to the AIBO uses the Tekkotsu framework. There are several reasons behind this choice: 1. Tekkotsu already provides a high-level walking control that is used for the platform's Motion Controller, 2. Tekkotsu is C++ event-driven code, unlike Sony's Open-R solution, and 3. Tekkotsu supports the creation of TCP/IP sockets using the AIBO's Wi-Fi card. These sockets are used to communicate between the robot and the agent.
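For illustration only, a minimal POSIX TCP client of the kind that could carry the agent's commands over such a socket is sketched below. The port number, the function names and the textual command format are invented for this example; neither Tekkotsu's actual wire protocol nor the platform's message format is specified in the paper.

```cpp
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Connects to the robot's command socket. Returns a file descriptor,
// or -1 on failure. The IP and port come from the deployment.
int connectToRobot(const std::string& ip, int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip.c_str(), &addr.sin_addr);
    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

// Sends one newline-terminated textual command, e.g. "move 0.2 0.0".
void sendCommand(int fd, const std::string& cmd) {
    std::string line = cmd + "\n";
    send(fd, line.data(), line.size(), 0);
}
```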

4. CONCLUSIONS

The proposed platform provides the major groundwork and low-level machinery needed to turn a robot into an agent. Using the platform, the low-level work needed to create a world model, plan movements in that model and navigate in it is already provided. The platform also provides the grounds needed to build a Belief, Desire, Intention agent.

With this platform, robots can be deployed into unknown environments and build their own model of that environment incrementally. It is no longer necessary to install devices that serve as guides or landmarks in the environment. This opens up the possibility of using a robot in environments that cannot be changed to support it.

5. FUTURE WORK

The platform has to be tested with different robots to verify how well it adapts to a new robot. Even though the platform is being tested both in a simulated environment and in a real-world environment, tests in a real-world environment using a different robot would provide further insight into the platform's solutions.

The path planning techniques described here do not take risk-taking procedures into consideration. It seems a logical step for the robot to check whether there is a passage near the target position, even if reaching it involves passing through an unknown part of the environment. These risk-taking procedures have been excluded from the platform as they require further analysis. A possibility is to use Emotion Simulation techniques to control how the agent decides whether a risk should be taken.

REFERENCES

Arkin, Ronald C. (1989). Navigational path planning for a vision-based mobile robot. Robotica, Vol. 7. Cambridge University Press.

Bratman, M. E. (1987). Intentions, Plans, and Practical Reason. Harvard University Press, Cambridge, MA.

Likhachev, Maxim (2005). Anytime Dynamic A*: An anytime, replanning algorithm. In: Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS).

Tira-Thompson, Ethan (2004). Tekkotsu: A Rapid Development Framework for Robotics. PhD thesis. Carnegie Mellon University.

Wikipedia (2006).