A reinforcement learning policy is a mapping that selects an action to take based on observations from the environment. During training, the agent tunes the parameters of its policy representation to maximize the long-term reward.
Reinforcement Learning Toolbox™ software provides objects for actor and critic representations. The actor represents the policy that selects the best action to take. The critic represents the value function that estimates the value of the current policy. Depending on your application and selected agent, you can define policy and value functions using deep neural networks, linear basis functions, or look-up tables. For more information, see Create Policy and Value Function Representations.
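For a discrete state space, the simplest of these options is a look-up table. The sketch below, which assumes the `rlTable` and `rlValueRepresentation` API of recent toolbox releases, builds a tabular value function critic; the four-state observation space is purely illustrative.

```matlab
% Sketch: a tabular value function over a small discrete state space.
obsInfo = rlFiniteSetSpec(1:4);   % illustrative: four discrete states
vTable  = rlTable(obsInfo);       % table of values, one entry per state
critic  = rlValueRepresentation(vTable, obsInfo);
```

The table entries are the learnable parameters of this representation; `getLearnableParameters(critic)` returns them and `setLearnableParameters` overwrites them, for example to warm-start from a previously trained critic.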
|rlValueRepresentation|Value function critic representation for reinforcement learning agents|
|rlQValueRepresentation|Q-Value function critic representation for reinforcement learning agents|
|rlDeterministicActorRepresentation|Deterministic actor representation for reinforcement learning agents|
|rlStochasticActorRepresentation|Stochastic actor representation for reinforcement learning agents|
|rlRepresentationOptions|Options set for reinforcement learning agent representations (critics and actors)|
|rlTable|Value table or Q table|
|getActor|Get actor representation from reinforcement learning agent|
|setActor|Set actor representation of reinforcement learning agent|
|getCritic|Get critic representation from reinforcement learning agent|
|setCritic|Set critic representation of reinforcement learning agent|
|getLearnableParameters|Obtain learnable parameter values from policy or value function representation|
|setLearnableParameters|Set learnable parameter values of policy or value function representation|
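For continuous or high-dimensional problems, a deep neural network typically replaces the table. The following sketch, assuming the representation API listed above, builds a Q-value critic whose network takes both an observation and an action and outputs a scalar Q-value; the layer names, network sizes, and environment specifications are illustrative, not prescribed.

```matlab
% Sketch: Q-value critic for a 4-dimensional observation and a scalar
% continuous action (illustrative specs).
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1]);

% Separate input paths for observation and action, merged by addition.
obsPath = [featureInputLayer(4,'Name','state')
           fullyConnectedLayer(24,'Name','fc_obs')];
actPath = [featureInputLayer(1,'Name','action')
           fullyConnectedLayer(24,'Name','fc_act')];
common  = [additionLayer(2,'Name','add')
           reluLayer('Name','relu')
           fullyConnectedLayer(1,'Name','QValue')];

net = layerGraph(obsPath);
net = addLayers(net, actPath);
net = addLayers(net, common);
net = connectLayers(net,'fc_obs','add/in1');
net = connectLayers(net,'fc_act','add/in2');

% Options object controls learning rate, gradient clipping, and so on.
opts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);

% Name which network inputs receive observations and actions.
critic = rlQValueRepresentation(net, obsInfo, actInfo, ...
    'Observation',{'state'}, 'Action',{'action'}, opts);
```

Once an agent is constructed from such a critic, `getCritic` and `setCritic` let you extract or replace the representation without rebuilding the agent.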
Specify policy and value function representations using function approximators, such as deep neural networks.
You can import existing policies from other deep learning frameworks using the ONNX™ model format.
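As a hedged sketch of that workflow: `importONNXNetwork` (provided by the Deep Learning Toolbox ONNX support package) can load the network, which you then wrap in an actor representation. The filename, layer names, and specification objects below are placeholders.

```matlab
% Sketch: import a policy network trained in another framework.
% 'policy.onnx' is a placeholder filename; obsInfo and actInfo must
% match the environment the policy was trained for.
net = importONNXNetwork('policy.onnx','OutputLayerType','regression');
actor = rlDeterministicActorRepresentation(net, obsInfo, actInfo, ...
    'Observation',{'observation'}, 'Action',{'action'});
```

The `'Observation'` and `'Action'` names must match input and output layer names in the imported network, so inspect `net.Layers` after import to find them.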