
From PDDL to timelines


The following timelines have been introduced to simplify the modeling of classical planning problems. Given the different notion of causality, indeed, it turned out to be useful to introduce new timeline types which help in the resolution of the modeled problems.

The overall idea is to introduce a new type, called PropositionalAgent, to represent classical planning agents able to execute both instantaneous and durative actions. The predicates of these agents represent the actions of the classical planning agents. For example, a classical action like pick-up(x - block) is translated into a predicate Pick_up(Block x). The body of the rules will contain a goal for each action precondition as well as a fact for each action effect. It is worth recalling that it is possible to put disjunctions within the body of the rules, hence classical planning disjunctions can be easily modeled. Finally, facts and goals are temporally constrained to the atom representing the action in an intuitive way (i.e., preconditions come before the action, which comes before the effects, and, in the case of durative actions, overall conditions are constrained to hold during the action).

A second type of timeline, called PropositionalState, is used for representing the state. Also in this case the predicate signature (predicate symbol and its arguments) is easily translatable; however, since our framework cannot reason on false facts, we add a polarity argument to handle negative atoms. Unlike classical planning, in which atoms that are not explicitly said to be true are assumed false, the initial state has to be defined explicitly, by introducing facts and constraining their start argument to be equal to the origin variable. Finally, the goals of the problem are added as goals whose end argument is constrained to be equal to the horizon variable.

It is worth noticing that the constraints among actions and state atoms are not necessarily forced to be the same as those of classical planning. Specifically, we can easily model situations like state atoms which change some time after an action.
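As a small taste of this flexibility, the following fragment shows an effect which becomes true ten time units after the action is executed, obtained simply by relaxing the temporal constraint between the action's time point and the start of the fact. It is only a sketch, written in the rule syntax detailed in the rest of this page, with purely hypothetical Switch_on and Light_on predicates and a propositional_state reference analogous to the ones used in the examples below.

predicate Switch_on() : Impulse {
  // hypothetical delayed effect: the light turns on ten time units after the switch is pressed
  fact light_on = new propositional_state.Light_on(polarity:true);
  light_on.start >= at + 10;
  light_on.duration >= 1;
}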

Propositional agents

As already mentioned, agent timelines have been introduced for representing classical planning agents able to perform (durative) actions. Specifically, the predicates associated with such agents are intended to represent the actions of classical planning agents. Atomic formulas associated with agents can freely overlap. Further constraints can be added in order to respect a classical planning rule, called no moving targets, which prevents two actions from overlapping when they simultaneously make use of a value and one of the two is accessing the value to update it. In other words, any action which has a precondition p cannot temporally overlap with an action which has an effect ¬p. The idea behind this rule is that any reliance on values at their points of change is unstable. Although this rule looks anachronistic in a model that takes time into account, some benchmark domains exploit it to get a desired behavior. Furthermore, since durative actions behave like two instantaneous actions separated by a temporal interval, the no moving targets rule applies both to the start and to the end of the actions. Practically speaking, adding the ordering constraints dictated by the no moving targets rule to the start and to the end of the actions, when needed, is the main reason for introducing the PropositionalAgent type.
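These further constraints are added implicitly by the framework rather than written by the modeler. Just to fix ideas, consider two instantaneous actions pick_up(a), which has precondition clear(a), and stack(b, a), which has effect ¬clear(a): the no moving targets rule amounts, conceptually, to a disjunction of ordering constraints like the following (shown here only as an illustration, with hypothetical atom names):

{
  pick_up_a.at <= stack_b_a.at - 1;
} or {
  stack_b_a.at <= pick_up_a.at - 1;
}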

Similarly to state-variables, in order to define an agent it is sufficient to define a new derived type whose base type is PropositionalAgent. All instances of the derived type will consequently be agents, and the predicates defined within the new type will be considered predicates of the agent. This allows the modeler to define agent predicates at domain definition time. Unlike the state-variable case, however, it is the user's responsibility to make each predicate inherit from Interval, in order to represent durative actions, or from Impulse, in order to represent instantaneous classical actions.

The following code shows an example of the definition of an agent. The content of the rules is omitted for the sake of space.

class BlocksAgent : PropositionalAgent {

  predicate Pick_up(Block x) : Impulse {
    ...
  }

  predicate Put_down(Block x) : Impulse {
    ...
  }

  predicate Stack(Block x, Block y) : Impulse {
    ...
  }

  predicate Unstack(Block x, Block y) : Impulse {
    ...
  }
}

Propositional state

The last timeline introduced for representing classical planning problems is the propositional state timeline, which resembles the state of a classical planning problem.

In order to define a propositional state it is sufficient to define a new derived type whose base type is PropositionalState. All instances of the derived type will consequently be propositional states, and the predicates defined within the new type will be considered predicates of the propositional state. This allows the modeler to define classical planning predicates at domain definition time. Propositional state predicates implicitly inherit from a predicate called PropositionalPredicate, which has a boolean argument called polarity. This argument is used for representing the polarity of classical planning literals. Furthermore, PropositionalPredicate implicitly inherits from Interval, hence there is no need to define the start, end and duration arguments.

The PropositionalState type has been provided for adding implicit constraints corresponding to the threats of partial-order planning. Specifically, ordering constraints will be added between two atoms whenever their arguments, except for the polarity, unify and the two atoms overlap in time.

An example of propositional state timeline is given by

class BlocksState : PropositionalState {

  predicate Handempty() {
    ...
  }

  predicate Clear(Block x) {
    ...
  }

  predicate Ontable(Block x) {
    ...
  }

  predicate Holding(Block x) {
    ...
  }

  predicate On(Block x, Block y) {
    ...
  }
}

in which the content of the rules is omitted for the sake of space.
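These threat constraints, too, are added by the solver rather than written by the modeler. For instance, an atom Clear with polarity:true and an atom Clear with polarity:false on the same block unify on everything but the polarity, so the two atoms will be prevented from overlapping in time; conceptually, the implicit constraint amounts to a disjunction like the following (purely illustrative, with hypothetical atom names):

{
  clear_a_true.end <= clear_a_false.start;
} or {
  clear_a_false.end <= clear_a_true.start;
}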

Putting it all together

Now that we have the basic ingredients to define a classical planning problem, let us see how to combine them. The basic idea is that we need an agent and a state. If the :typing requirement is present, the first thing to do is to define the types, which are given directly by the classical problem. We just introduce a basic type called Object, representing the topmost element of the type hierarchy, with an integer member called id. Considering, for example, the 4 Op-blocks world domain, in which the only type is block, we can define it as

class Object {

  int id;

  Object(int id) : id(id) {}
}

class Block : Object {

  Block(int id) : Object(id) {}
}

The second step is to define the state. Since we need to assign goals to the agent, the state needs a reference to the agent. We exploit forward declaration and use the agent type before defining it. So, continuing with the 4 Op-blocks world domain, we have

class BlocksState : PropositionalState {

  BlocksAgent agent;

  BlocksState(BlocksAgent agent) : agent(agent) {}

  .
  .
}

Analogously, since the agent needs to assign goals to the state, it needs a reference to the state. So, continuing with our 4 Op-blocks world domain, we have

class BlocksAgent : PropositionalAgent {

  BlocksState propositional_state;

  BlocksAgent() : propositional_state(new BlocksState(this)) {}

  .
  .
}

We can now define the predicates for our state and for our agent. As already mentioned, the rules for the predicates of the state should contain goals on the agent representing the possible actions, properly constrained, which can achieve the corresponding state atom. The rules for the predicates of the agent, on the other hand, should contain goals for the preconditions and facts for the effects. The following is an example from the 4 Op-blocks world domain:

class BlocksState : PropositionalState {

  BlocksAgent agent;

  BlocksState(BlocksAgent agent) : agent(agent) {}

  predicate Clear(Block x) {
    duration >= 1;
    {
      goal put_down = new agent.Put_down(at:start, x:x);
    } or {
      goal stack = new agent.Stack(at:start, x:x);
    } or {
      goal unstack = new agent.Unstack(at:start, y:x);
    }
  }

  .
  .
}

This, on the other hand, is an example of a predicate defined for the agent, still from the 4 Op-blocks world domain:

class BlocksAgent : PropositionalAgent {

  BlocksState propositional_state;

  BlocksAgent() : propositional_state(new BlocksState(this)) {}

  predicate Pick_up(Block x) : Impulse {
    goal clear_x = new propositional_state.Clear(polarity:true, x:x);
    clear_x.start <= at - 1;
    clear_x.end >= at;

    goal ontable_x = new propositional_state.Ontable(polarity:true, x:x);
    ontable_x.start <= at - 1;
    ontable_x.end >= at;

    goal handempty = new propositional_state.Handempty(polarity:true);
    handempty.start <= at - 1;
    handempty.end >= at;

    fact not_ontable_x = new propositional_state.Ontable(polarity:false,
      x:x,
      start:at);
    not_ontable_x.duration >= 1;

    fact not_clear_x = new propositional_state.Clear(polarity:false,
      x:x,
      start:at);
    not_clear_x.duration >= 1;

    fact not_handempty = new propositional_state.Handempty(polarity:false,
      start:at);
    not_handempty.duration >= 1;

    fact holding_x = new propositional_state.Holding(polarity:true,
      x:x,
      start:at);
    holding_x.duration >= 1;
  }

  .
  .
}
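The Pick_up predicate above models an instantaneous action. A durative action is modeled in the same way, except that the predicate inherits from Interval and its overall conditions are constrained to hold during the whole action, as mentioned at the beginning of this page. The following is a rough sketch of what such a rule could look like, with purely hypothetical Move, At and Free predicates, a hypothetical Location type defined like Block, and a propositional_state reference analogous to the one of BlocksAgent; the constraint pattern simply mirrors the one used for Pick_up.

predicate Move(Location from, Location to) : Interval {
  duration >= 1;

  // condition at start: the agent must be at the starting location
  goal at_from = new propositional_state.At(polarity:true, l:from);
  at_from.start <= start - 1;
  at_from.end >= start;

  // overall condition: the destination must remain free during the whole action
  goal free_to = new propositional_state.Free(polarity:true, l:to);
  free_to.start <= start;
  free_to.end >= end;

  // effect at end: the agent is at the destination once the action is over
  fact at_to = new propositional_state.At(polarity:true, l:to, start:end);
  at_to.duration >= 1;
}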

Now that we have defined all the types, together with the predicates and rules associated with them, we can define the type instances

Block a = new Block(1);
Block b = new Block(2);

BlocksAgent agent = new BlocksAgent();

as well as the facts and the goals for our planning problem.

fact clear_a = new agent.propositional_state.Clear(polarity:true,
  x:a,
  start:origin);
clear_a.duration >= 1;

fact clear_b = new agent.propositional_state.Clear(polarity:true,
  x:b,
  start:origin);
clear_b.duration >= 1;

fact ontable_a = new agent.propositional_state.Ontable(polarity:true,
  x:a,
  start:origin);
ontable_a.duration >= 1;

fact ontable_b = new agent.propositional_state.Ontable(polarity:true,
  x:b,
  start:origin);
ontable_b.duration >= 1;

fact handempty = new agent.propositional_state.Handempty(polarity:true,
  start:origin);
handempty.duration >= 1;

goal on_b_a = new agent.propositional_state.On(polarity:true,
  x:b,
  y:a,
  end:horizon);

Notice that the translation from a classical planning problem to a timeline-based planning problem is pretty straightforward, and, as such, can be easily automated.