SARSA reinforcement learning agent
The SARSA algorithm is a model-free, online, on-policy reinforcement learning method. A SARSA agent is a value-based reinforcement learning agent that trains a critic to estimate the return, that is, the expected cumulative future reward.
For more information on SARSA agents, see SARSA Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
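To make the on-policy update concrete, the following minimal sketch shows one tabular SARSA step. The table sizes, learning rate, and discount factor are illustrative assumptions, not values taken from this page.

```matlab
% Minimal tabular SARSA update sketch (sizes and hyperparameters are assumptions).
numStates = 25; numActions = 4;        % assumed small grid world
Q = zeros(numStates,numActions);       % Q-table maintained by the critic
alpha = 0.1; gamma = 0.99;             % assumed learning rate and discount factor

% One SARSA step uses the tuple (S,A,R,S',A') to update Q(S,A).
s = 1; a = 2; r = -1; sNext = 2; aNext = 3;   % example transition
Q(s,a) = Q(s,a) + alpha*(r + gamma*Q(sNext,aNext) - Q(s,a));
```

Because the update uses the action A' actually selected by the current policy (rather than the greedy maximum, as in Q-learning), SARSA is an on-policy method.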
|train|Train reinforcement learning agents within a specified environment|
|sim|Simulate trained reinforcement learning agents within a specified environment|
|getAction|Obtain action from agent or actor representation given environment observations|
|getActor|Get actor representation from reinforcement learning agent|
|setActor|Set actor representation of reinforcement learning agent|
|getCritic|Get critic representation from reinforcement learning agent|
|setCritic|Set critic representation of reinforcement learning agent|
|generatePolicyFunction|Create function that evaluates trained policy of reinforcement learning agent|
Create a SARSA Agent
Create or load an environment interface. For this example, load the Basic Grid World environment interface.
env = rlPredefinedEnv("BasicGridWorld");
Create a critic value function representation using a Q table derived from the environment observation and action specifications.
qTable = rlTable(getObservationInfo(env),getActionInfo(env));
critic = rlQValueRepresentation(qTable,getObservationInfo(env),getActionInfo(env));
Create a SARSA agent using the specified critic value function and an epsilon value of 0.05.
opt = rlSARSAAgentOptions;
opt.EpsilonGreedyExploration.Epsilon = 0.05;
agent = rlSARSAAgent(critic,opt)
agent = 
  rlSARSAAgent with properties:

    AgentOptions: [1x1 rl.option.rlSARSAAgentOptions]
To check your agent, use getAction to return the action from a random observation.
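For example, a call of the following form queries the agent's policy. The observation index range 25 is an assumption based on a 5-by-5 grid world; substitute the dimensions of your environment.

```matlab
% Query the policy for a random observation.
% randi(25) assumes 25 discrete states; adjust for your environment.
getAction(agent,{randi(25)})
```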
ans = 1
You can now test and train the agent against the environment.
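A typical training and simulation workflow might look like the following sketch, using the train and sim functions; the specific option values are illustrative assumptions, not recommendations from this page.

```matlab
% Sketch of a training run; option values are illustrative assumptions.
trainOpts = rlTrainingOptions(...
    'MaxEpisodes',200,...
    'MaxStepsPerEpisode',50,...
    'StopTrainingCriteria','AverageReward',...
    'StopTrainingValue',10);
trainingStats = train(agent,env,trainOpts);

% Simulate the trained agent in the environment.
simOpts = rlSimulationOptions('MaxSteps',50);
experience = sim(env,agent,simOpts);
```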