
rlTrainingOptions

Options for training reinforcement learning agents

Since R2019a

Description

Use an rlTrainingOptions object to specify options to train an agent within an environment. Training options include the maximum number of episodes to train, criteria for stopping training, criteria for saving agents, and options for using parallel computing. To train the agent using the specified options, pass this object to train.

For more information on training agents, see Train Reinforcement Learning Agents.

Creation

Description

trainOpts = rlTrainingOptions returns the default options for training a reinforcement learning agent.


trainOpts = rlTrainingOptions(Name=Value) creates the training option set trainOpts and sets its Properties using one or more name-value arguments.

Properties


Maximum number of episodes to train the agent, specified as a positive integer. Regardless of other criteria for termination, training terminates after MaxEpisodes.

Example: MaxEpisodes=1000

Maximum number of steps to run per episode, specified as a positive integer. In general, you define episode termination conditions in the environment. This value is the maximum number of steps to run in the episode if other termination conditions are not met.

Example: MaxStepsPerEpisode=1000

Window length for averaging the scores, rewards, and number of steps for each agent, specified as a scalar or vector.

  • If the training environment contains a single agent, specify ScoreAveragingWindowLength as a scalar.

  • If the training environment is a multi-agent environment, specify a scalar to apply the same window length to all agents.

  • To use a different window length for each agent, specify ScoreAveragingWindowLength as a vector. In this case, the order of the elements in the vector corresponds to the order of the agents used during environment creation.

For options expressed in terms of averages, ScoreAveragingWindowLength is the number of episodes included in the average. For instance, if StopTrainingCriteria is "AverageReward", and StopTrainingValue is 500 for a given agent, then for that agent, training terminates when the average reward over the number of episodes specified in ScoreAveragingWindowLength equals or exceeds 500. For the other agents, training continues until:

  • All agents reach their stop criteria.

  • The number of episodes reaches MaxEpisodes.

  • You stop training by clicking the Stop Training button in Episode Manager or pressing Ctrl-C at the MATLAB® command line.

Example: ScoreAveragingWindowLength=10
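
For example, the following minimal sketch assumes a training environment with two agents and uses a 20-episode window for the first agent and a 10-episode window for the second; the element order matches the agent order used when the environment was created, and the window lengths are illustrative.

% Hypothetical two-agent setup: per-agent averaging windows.
trainOpts = rlTrainingOptions(ScoreAveragingWindowLength=[20 10]);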

Training termination condition, specified as one of the following strings:

  • "None" — Do not stop training until the number of episodes reaches MaxEpisodes.

  • "AverageSteps" — Stop training when the running average number of steps per episode equals or exceeds the critical value specified by the option StopTrainingValue. The average is computed using the window 'ScoreAveragingWindowLength'.

  • "AverageReward" — Stop training when the running average reward equals or exceeds the critical value.

  • "EpisodeReward" — Stop training when the reward in the current episode equals or exceeds the critical value.

  • "GlobalStepCount" — Stop training when the total number of steps in all episodes (the total number of times the agent is invoked) equals or exceeds the critical value.

  • "EpisodeCount" — Stop training when the number of training episodes equals or exceeds the critical value.

  • "EvaluationStatistic" — Stop training when the statistic returned by the evaluator object used with train (if any) equals or exceeds the specified value.

  • "Custom" — Stop training when the custom function specified in StopTrainingValue returns true.

Example: StopTrainingCriteria="AverageReward"

Critical value of the training termination condition, specified as a scalar, a vector, or a function name or handle.

You can use a custom stop criterion by specifying StopTrainingValue as a function name or handle. Your function must have one input and one output, as shown in the following signature.

flag = myTerminationFcn(trainingStats)

Here, trainingStats is a structure that contains the following fields, all described in the trainStats output argument of train.

  • EpisodeIndex

  • EpisodeReward

  • EpisodeSteps

  • AverageReward

  • TotalAgentSteps

  • EpisodeQ0

  • SimulationInfo

  • EvaluationStatistic

  • TrainingOptions

The training stops when the specified function returns true.
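
For example, the following minimal sketch (the function name and thresholds are illustrative, not part of the toolbox) stops training once at least 50 episodes have run and the latest running average reward has reached 480.

function flag = myTerminationFcn(trainingStats)
    % Stop once at least 50 episodes have run and the latest running
    % average reward has reached 480 (illustrative thresholds).
    flag = trainingStats.EpisodeIndex(end) >= 50 && ...
        trainingStats.AverageReward(end) >= 480;
end

Pass the handle through the training options together with the "Custom" stop criterion.

trainOpts = rlTrainingOptions( ...
    StopTrainingCriteria="Custom", ...
    StopTrainingValue=@myTerminationFcn);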

When not using a custom termination criterion, the following considerations apply.

If the training environment contains a single agent, specify StopTrainingValue as a scalar. If the training environment is a multi-agent environment, specify a scalar to apply the same termination criterion to all agents. To use a different termination criterion for each agent, specify StopTrainingValue as a vector. In this case, the order of the elements in the vector corresponds to the order of the agents used during environment creation.

For a given agent, training ends when the quantity monitored by the StopTrainingCriteria option equals or exceeds this value. For the other agents, training continues until:

  • All agents reach their stop criteria.

  • The number of episodes reaches MaxEpisodes.

  • You stop training by clicking the Stop Training button in Episode Manager or pressing Ctrl-C at the MATLAB command line.

For instance, if StopTrainingCriteria is "AverageReward" and StopTrainingValue is 100 for a given agent, then for that agent, training terminates when the average reward over the number of episodes specified in ScoreAveragingWindowLength equals or exceeds 100.

Example: StopTrainingValue=100
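
As a sketch, assuming a two-agent environment, the following options stop training for the first agent when its average reward reaches 100 and for the second agent when its average reward reaches 250; the thresholds are illustrative.

% Hypothetical two-agent setup: per-agent stop-training thresholds.
trainOpts = rlTrainingOptions( ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=[100 250]);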

Condition for saving agents during training, specified as one of the following strings:

  • "None" — Do not save any agents during training.

  • "EpisodeReward" — Save the agent when the reward in the current episode equals or exceeds the critical value.

  • "AverageSteps" — Save the agent when the running average number of steps per episode equals or exceeds the critical value specified by the option StopTrainingValue. The average is computed using the window 'ScoreAveragingWindowLength'.

  • "AverageReward" — Save the agent when the running average reward over all episodes equals or exceeds the critical value.

  • "GlobalStepCount" — Save the agent when the total number of steps in all episodes (the total number of times the agent is invoked) equals or exceeds the critical value.

  • "EpisodeCount" — Save the agent when the number of training episodes equals or exceeds the critical value.

  • "EpisodeFrequency" — Save the agent with a period specified in SaveAgentValue. For example, if SaveAgentCriteria is specified as "EpisodeFrequency" andSaveAgentValue is specified as 10, the agent is saved after every ten episodes.

  • "EvaluationStatistic" — Save the agent when the statistic returned by the evaluator object used with train (if any) equals or exceeds the specified value.

  • "Custom" — Save the agent when the custom function specified in SaveAgentValue returns true.

Set this option to store candidate agents that perform well according to the criteria you specify. When you set this option to a value other than "none", the software sets the SaveAgentValue option to 500. You can change that value to specify the condition for saving the agent.

For instance, suppose you want to store for further testing any agent that yields an episode reward that equals or exceeds 100. To do so, set SaveAgentCriteria to "EpisodeReward" and set the SaveAgentValue option to 100. When an episode reward equals or exceeds 100, train saves the corresponding agent in a MAT-file in the folder specified by the SaveAgentDirectory option. The MAT-file is called AgentK.mat, where K is the number of the corresponding episode. The agent is stored within that MAT-file as saved_agent.
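
A minimal sketch of that setup, using the default savedAgents folder:

% Save any agent whose episode reward reaches 100.
trainOpts = rlTrainingOptions( ...
    SaveAgentCriteria="EpisodeReward", ...
    SaveAgentValue=100, ...
    SaveAgentDirectory="savedAgents");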

Example: SaveAgentCriteria="EpisodeReward"

Critical value of the condition for saving agents, specified as a scalar, a vector, or a function name or handle.

You can use a custom save criterion by specifying SaveAgentValue as a function name or handle. Your function must have one input and one output, as shown in the following signature.

flag = mySaveFcn(trainingStats)

Here, trainingStats is a structure that contains the following fields, all described in the trainStats output argument of train.

  • EpisodeIndex

  • EpisodeReward

  • EpisodeSteps

  • AverageReward

  • TotalAgentSteps

  • EpisodeQ0

  • SimulationInfo

  • EvaluationStatistic

  • TrainingOptions

The agent is saved when the specified function returns true.
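
For example, the following minimal sketch (the function name and threshold are illustrative) saves the agent whenever the latest episode reward reaches 200; pair it with the "Custom" save criterion.

function flag = mySaveFcn(trainingStats)
    % Save the agent whenever the latest episode reward reaches 200
    % (illustrative threshold).
    flag = trainingStats.EpisodeReward(end) >= 200;
end

trainOpts = rlTrainingOptions( ...
    SaveAgentCriteria="Custom", ...
    SaveAgentValue=@mySaveFcn);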

When not using a custom save criterion, the following considerations apply.

If the training environment contains a single agent, specify SaveAgentValue as a scalar.

If the training environment is a multi-agent environment, specify a scalar to apply the same saving criterion to all agents. To use a different saving criterion for each agent, specify SaveAgentValue as a vector. In this case, the order of the elements in the vector corresponds to the order of the agents used when creating the environment. When a criterion for saving an agent is met, all agents are saved in the same MAT-file.

When you specify a condition for saving candidate agents using SaveAgentCriteria, the software sets this value to 500. Change the value to specify the condition for saving the agent. See the SaveAgentCriteria option for more details.

Example: SaveAgentValue=100

Folder name for saved agents, specified as a string or character vector. The folder name can contain a full or relative path. When an episode occurs in which the conditions specified by the SaveAgentCriteria and SaveAgentValue options are satisfied, the software saves the agents in a MAT-file in this folder. If the folder does not exist, train creates it. When SaveAgentCriteria is "none", this option is ignored and train does not create a folder.

Example: SaveAgentDirectory = pwd + "\run1\Agents"
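
As an alternative sketch, you can build the path with fullfile so it does not depend on the file-separator character; the folder names are illustrative.

trainOpts = rlTrainingOptions( ...
    SaveAgentCriteria="EpisodeReward", ...
    SaveAgentValue=100, ...
    SaveAgentDirectory=fullfile(pwd,"run1","Agents"));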

Flag for using parallel training, specified as a logical. Setting this option to true configures training to use parallel processing to simulate the environment, thereby enabling usage of multiple cores, processors, computer clusters or cloud resources to speed up training. To specify options for parallel training, use the ParallelizationOptions property.

Note that if you want to speed up deep neural network calculations (such as gradient computation, parameter update and prediction) using a local GPU, you do not need to set UseParallel to true. Instead, when creating your actor or critic representation, use an rlRepresentationOptions object in which the UseDevice option is set to "gpu". Using parallel computing or the GPU requires Parallel Computing Toolbox™ software. Using computer clusters or cloud resources additionally requires MATLAB Parallel Server™. For more information about training using multicore processors and GPUs, see Train Agents Using Parallel Computing and GPUs.

Example: UseParallel=true
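
For example, a minimal sketch of the two settings mentioned above; the network and other arguments needed to build an actual actor or critic are assumed to exist elsewhere.

% GPU-based calculations for the actor or critic: set UseDevice in the
% representation options rather than UseParallel in the training options.
criticOpts = rlRepresentationOptions(UseDevice="gpu");

% Parallel environment simulation: set UseParallel in the training options.
trainOpts = rlTrainingOptions(UseParallel=true);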

Parallelization options to control parallel training, specified as a ParallelTraining object. For more information about training using parallel computing, see Train Agents Using Parallel Computing and GPUs.

The ParallelTraining object has the following properties, which you can modify using dot notation after creating the rlTrainingOptions object.

Parallel computing mode, specified as one of the following:

  • "sync" — Use parpool to run synchronous training on the available workers. In this case, each worker pauses execution until all workers are finished. The parallel pool client updates the actor and critic parameters based on the results from all the workers and sends the updated parameters to all workers. When training a PG agent using gradient-based parallelization Mode must be set to "sync".

  • "async" — Use parpool to run asynchronous training on the available workers. In this case, each worker sends its data back to the parallel pool client as soon as it finishes and then receives updated parameters from the client. The worker then continue with its task.

Example: Mode="async"

Randomizer initialization for workers, specified as one of the following:

  • –1 — Assign a unique random seed to each worker. The value of the seed is the worker ID.

  • –2 — Do not assign a random seed to the workers.

  • Vector — Manually specify the random seed for each worker. The number of elements in the vector must match the number of workers.

Example: WorkerRandomSeeds=[1 2 3 4]
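
For example, a minimal sketch that assigns fixed seeds, assuming a parallel pool of four workers:

trainOpts = rlTrainingOptions(UseParallel=true);
trainOpts.ParallelizationOptions.WorkerRandomSeeds = [1 2 3 4];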

Option to send model and workspace variables to parallel workers, specified as "on" or "off". When the option is "on", the client sends to the workers the variables defined in the base MATLAB workspace and used in the approximation models.

Example: TransferBaseWorkspaceVariables="off"

Additional files to attach to the parallel pool, specified as a string or string array.

Example: AttachedFiles="myInitFile.m"

Function to run before training starts, specified as a handle to a function having no input arguments. This function is run once per worker before training begins. Write this function to perform any processing that you need prior to training.

Example: SetupFcn=@mySetupFcn

Function to run after training ends, specified as a handle to a function having no input arguments. You can write this function to clean up the workspace or perform other processing after training terminates.

Example: CleanupFcn=@myCleanupFcn
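
The following sketch attaches both functions using dot notation; mySetupFcn and myCleanupFcn are placeholder names for functions you define yourself.

trainOpts = rlTrainingOptions(UseParallel=true);
trainOpts.ParallelizationOptions.SetupFcn = @mySetupFcn;     % runs once per worker before training
trainOpts.ParallelizationOptions.CleanupFcn = @myCleanupFcn; % runs after training ends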

Option to display training progress at the command line, specified as the logical values false (0) or true (1). Set to true to write information from each training episode to the MATLAB command line during training.

Example: Verbose=false

Option to stop training when an error occurs during an episode, specified as "on" or "off". When this option is "off", errors are captured and returned in the SimulationInfo output of train, and training continues to the next episode.

Example: StopOnError="off"

Option to display training progress with Episode Manager, specified as "training-progress" or "none". By default, calling train opens the Reinforcement Learning Episode Manager, which graphically and numerically displays information about the training progress, such as the reward for each episode, average reward, number of episodes, and total number of steps. For more information, see train. To turn off this display, set this option to "none".

Example: Plots="none"

Object Functions

train — Train reinforcement learning agents within a specified environment

Examples


Create an options set for training a reinforcement learning agent. Set the maximum number of episodes and the maximum number of steps per episode to 1000. Configure the options to stop training when the average reward equals or exceeds 480, and turn on both the command-line display and Reinforcement Learning Episode Manager for displaying training results. You can set the options using name-value pair arguments when you create the options set. Any options that you do not explicitly set have their default values.

trainOpts = rlTrainingOptions(...
    MaxEpisodes=1000,...
    MaxStepsPerEpisode=1000,...
    StopTrainingCriteria="AverageReward",...
    StopTrainingValue=480,...
    Verbose=true,...
    Plots="training-progress")
trainOpts = 
  rlTrainingOptions with properties:

                   MaxEpisodes: 1000
            MaxStepsPerEpisode: 1000
                   StopOnError: "on"
    ScoreAveragingWindowLength: 5
          StopTrainingCriteria: "AverageReward"
             StopTrainingValue: 480
             SaveAgentCriteria: "none"
                SaveAgentValue: "none"
            SaveAgentDirectory: "savedAgents"
                       Verbose: 1
                         Plots: "training-progress"
                   UseParallel: 0
        ParallelizationOptions: [1x1 rl.option.ParallelTraining]

Alternatively, create a default options set and use dot notation to change some of the values.

trainOpts = rlTrainingOptions;
trainOpts.MaxEpisodes = 1000;
trainOpts.MaxStepsPerEpisode = 1000;
trainOpts.StopTrainingCriteria = "AverageReward";
trainOpts.StopTrainingValue = 480;
trainOpts.Verbose = true;
trainOpts.Plots = "training-progress";

trainOpts
trainOpts = 
  rlTrainingOptions with properties:

                   MaxEpisodes: 1000
            MaxStepsPerEpisode: 1000
                   StopOnError: "on"
    ScoreAveragingWindowLength: 5
          StopTrainingCriteria: "AverageReward"
             StopTrainingValue: 480
             SaveAgentCriteria: "none"
                SaveAgentValue: "none"
            SaveAgentDirectory: "savedAgents"
                       Verbose: 1
                         Plots: "training-progress"
                   UseParallel: 0
        ParallelizationOptions: [1x1 rl.option.ParallelTraining]

You can now use trainOpts as an input argument to the train command.
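
For example, assuming you have already created an environment env and an agent agent, the call might look like this sketch.

% agent and env are assumed to exist; see Train Reinforcement Learning Agents.
trainStats = train(agent,env,trainOpts);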

To turn on parallel computing for training a reinforcement learning agent, set the UseParallel training option to true.

trainOpts = rlTrainingOptions(UseParallel=true);

To configure parallel training, set the fields of trainOpts.ParallelizationOptions. For example, specify the asynchronous training mode:

trainOpts.ParallelizationOptions.Mode = "async";
trainOpts.ParallelizationOptions
ans = 
  ParallelTraining with properties:

                              Mode: "async"
                 WorkerRandomSeeds: -1
    TransferBaseWorkspaceVariables: "on"
                     AttachedFiles: []
                          SetupFcn: []
                        CleanupFcn: []

You can now use trainOpts as an input argument to the train command to perform training with parallel computing.

To train an agent using the asynchronous advantage actor-critic (A3C) method, you must set the agent and parallel training options appropriately.

When creating the AC agent, set the NumStepsToLookAhead value to be greater than 1. Common values are 64 and 128.

agentOpts = rlACAgentOptions(NumStepsToLookAhead=64);

Use agentOpts when creating your agent. Alternatively, create your agent first and then modify its options, including the actor and critic options, later using dot notation.
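
For instance, assuming you have already created actor and critic objects, the agent creation might look like this sketch.

% actor and critic are assumed to exist for this sketch.
agent = rlACAgent(actor,critic,agentOpts);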

Configure the training algorithm to use asynchronous parallel training.

trainOpts = rlTrainingOptions(UseParallel=true);
trainOpts.ParallelizationOptions.Mode = "async";

You can now use trainOpts to train your AC agent using the A3C method.

For an example on asynchronous advantage actor-critic agent training, see Train AC Agent to Balance Cart-Pole System Using Parallel Computing.

Version History

Introduced in R2019a
