Rational Agent Example
Description

This is a collection of behaviors that represent a "rational agent", i.e. one that tries to fulfill its goals by selecting plans to execute.

We designed it with extensibility and composability in mind. It has three main "modules":

Preference Generator: Determines the agent's goal.

Plan Generator: Creates an ordered list of actions the agent will take to achieve the goal.

Action Components: Execute the actions that update the state.

It follows an action programming language paradigm: an agent has a goal (defined as a new state) that it can reach by executing actions (behaviors that change its state), each of which it can only take under certain (pre)conditions.
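
A minimal sketch of that paradigm, using plain objects; the names here (eat, precondition, effect) are illustrative, not the example's actual API:

```javascript
// A goal is just a desired state the agent tries to reach.
let state = { hunger: 5 };
const goal = { hunger: 0 };

// An action pairs a precondition (when it may be taken) with an effect
// (how it changes the state).
const eat = {
  precondition: (s) => s.hunger > 0,
  effect: (s) => ({ ...s, hunger: s.hunger - 1 }),
};

// Execute the action until the goal state is reached.
while (state.hunger !== goal.hunger) {
  if (eat.precondition(state)) state = eat.effect(state);
}
```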

Detailed Description

preference generator

Configuration:

  • update_prefs: number of timesteps until the preference generator reweights the preferences.
  • decrement_prefs: number of timesteps until the decrement function runs.
  • keepalive: number of timesteps until the agent's goal is considered stale (sample values are sketched after this list).
  • calc_weight(): function that reweights each preference.
  • decrement(): the default function picks a random preference and decreases it by 1.
  • max_weight(): the default function picks the priority preference by finding the need with the largest weight.
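
For concreteness, a hypothetical configuration with these fields might look like the following; the values are made up for illustration:

```javascript
const preferenceGeneratorConfig = {
  update_prefs: 10,   // reweight preferences every 10 timesteps
  decrement_prefs: 5, // run decrement() every 5 timesteps
  keepalive: 50,      // a goal older than 50 timesteps is stale
};
```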

The preference generator selects the goal for the agent by checking the agent's preferences. Each preference has a weight. The goal is stored as an object on the agent state; see the generate_goal function for the type definition.

The default preference collection assumes all preferences have an ideal and a current state represented as integers, and that the "decay" function (decrement) will only change one preference every decrement_prefs timesteps. A more sophisticated implementation could change multiple preference states based on other conditions (e.g., cues from the environment).
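
A hedged sketch of how these defaults could fit together, assuming object-shaped preferences and a gap-based calc_weight (the real type definition lives in generate_goal):

```javascript
// Assumed shape: each preference tracks an ideal and a current integer value.
const preferences = {
  food:  { ideal: 10, current: 6, weight: 0 },
  sleep: { ideal: 10, current: 9, weight: 0 },
};

// One plausible calc_weight: weight = distance from the ideal state.
function calc_weight(prefs) {
  for (const p of Object.values(prefs)) {
    p.weight = Math.abs(p.ideal - p.current);
  }
}

// Default decay: pick a random preference and decrease it by 1.
function decrement(prefs) {
  const names = Object.keys(prefs);
  prefs[names[Math.floor(Math.random() * names.length)]].current -= 1;
}

// Default goal selection: the need with the largest weight wins.
function max_weight(prefs) {
  return Object.keys(prefs).reduce((a, b) =>
    prefs[a].weight >= prefs[b].weight ? a : b);
}

calc_weight(preferences);
const winner = max_weight(preferences); // "food": furthest from its ideal

// The goal is stored as an object on the agent's state (shape assumed).
const agentState = { goal: { preference: winner, target: preferences[winner].ideal } };
```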

plan generator

When a new goal is selected, the plan generator finds the corresponding plan and loads its actions onto the state with the appropriate parameters and preconditions for each action.

In our simple example the plan is just a hash table of actions and their parameters, but future versions could leverage dynamic plan generation to account for environmental stochasticity.
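
A sketch of that hash-table shape and the loading step; the goal name, actions, parameters, and preconditions are all illustrative:

```javascript
// Plans keyed by goal name; each step names an action, its parameters, and a
// precondition under which it may run.
const plans = {
  food: [
    { action: "find_food", parameters: { radius: 5 },
      precondition: (s) => s.food_source == null },
    { action: "consume", parameters: { amount: 1 },
      precondition: (s) => s.food_source != null },
  ],
};

// Put the ordered action list on the state, plus each action's parameters and
// precondition under its own key, so the action components can find them.
function loadPlan(state, goalName) {
  state.plan = plans[goalName].map((step) => step.action);
  for (const step of plans[goalName]) {
    state[step.action] = { parameters: step.parameters, precondition: step.precondition };
  }
}
```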

action components

An action component should expect to receive parameters from the state, where they will be stored as:

    state["<action name>"]: {
        <storage property>: parameters,
        ...
    }

When the action has finished running, it should remove itself from state.plan and set state.plan_flags["<action name>"] = false (deprecated; in the future it will just remove itself from the plan).
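
An illustrative action component over a plain-object state; the parameter lookup and the remove-from-plan convention follow the text above, everything else is assumed:

```javascript
function consume(state) {
  // Read the parameters the plan generator stored under this action's key.
  const params = state.consume.parameters;

  // Perform the action's state change (illustrative).
  state.preferences.food.current += params.amount;

  // Finished: remove this action from the plan and clear the deprecated flag.
  state.plan = state.plan.filter((name) => name !== "consume");
  state.plan_flags = { ...state.plan_flags, consume: false };
}
```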

cleanup

Configuration:

  • behavior_files: stores a mapping of action names to behavior files. To add an additional action, add an action_name: file_name entry.

The cleanup file checks whether the goal has been accomplished. If so, it clears the plan and the goal. If not, it checks the plan and adds the behaviors that need to run in the next round to satisfy the plan.
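
Under the same assumptions as the sketches above, the cleanup step might look like this; the "accomplished" test and the behaviors key are assumptions, not the example's actual logic:

```javascript
// Assumed behavior_files mapping of action names to behavior files.
const behavior_files = { find_food: "find_food.js", consume: "consume.js" };

function cleanup(state) {
  if (!state.goal) return;
  const pref = state.preferences[state.goal.preference];
  if (pref.current >= state.goal.target) {
    // Goal accomplished: clear the plan and the goal.
    state.plan = [];
    state.goal = null;
  } else {
    // Otherwise queue the behavior files needed to satisfy the plan in the
    // next round ("behaviors" is an assumed key name).
    state.behaviors = state.plan.map((action) => behavior_files[action]);
  }
}
```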

Extensions

If you want to extend the agent to do something else, like dance, you can do so by:

  • creating a dance preference in the initial state
  • adding a plan of actions and parameters to the plan generator
  • mapping the action name to the behavior file that performs the action

And of course creating the behavior that will perform the action.
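
The sketch below puts those three steps together for a hypothetical dance extension; all names and values are illustrative:

```javascript
// Step 1: a dance preference in the initial state.
const initialState = {
  preferences: {
    dance: { ideal: 10, current: 0, weight: 0 },
  },
};

// Step 2: a plan of actions + parameters in the plan generator.
const plans = {
  dance: [
    { action: "dance", parameters: { style: "tango" }, precondition: () => true },
  ],
};

// Step 3: map the action name to the behavior file that performs it.
const behavior_files = { dance: "dance.js" };
```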