Reinforcement Learning multiple agent validation: Can I have a Simulink model host TWO agents and test them
Hi,
I am conducting research comparing how PPO performs versus DDPG on non-linear plants, and I have trained both agents.
Can a single Simulink model host TWO agents so that I can test them side by side? Basically I am trying to create a unified validation bench. Please see the image below.
I went through the documentation and tried the implementation below, but I am getting errors.
Code:
% Set Simulink model pointers
VALVE_SIMULATION_MODEL = 'sm_Experimental_Setup'; % Simulink experimentation model
DDPG_AGENT = '/DDPG Sub-System/DDPG_Agent';
PPO_AGENT = '/PPO Sub-System/PPO_Agent';
% Load the pre-trained agents from their saved .mat files
DDPG_agent = load(DDPG_MODEL_FILE,'agent');
PPO_agent = load(PPO_MODEL_FILE,'agent');
% Code here for setting (1) obsInfo, (2) actionInfo_DDPG and (3) actionInfo_PPO
% .... ...
% Initialise the environment with the serialised agents and run the test
env = rlSimulinkEnv(VALVE_SIMULATION_MODEL, [DDPG_AGENT PPO_AGENT], [obsInfo obsInfo], [actionInfo_DDPG actionInfo_PPO]);
simOpts = rlSimulationOptions('MaxSteps', 2000);
xpr = sim(env,[DDPG_agent.agent, PPO_agent.agent]);
ERROR message:
Error using rlSimulinkEnv (line 108)
No block diagram name specified.
Error in code_DDPG_PPO_Experimental_Setup (line 97)
env = rlSimulinkEnv(VALVE_SIMULATION_MODEL, [DDPG_AGENT PPO_AGENT], [obsInfo obsInfo], [actionInfo_DDPG actionInfo_PPO]);
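For reference, here is a sketch of how the call could be restructured, assuming (per the rlSimulinkEnv documentation) that each agent block path must begin with the model name and that, for multiple agents, the block paths and the observation/action specifications are passed as cell arrays. The variables obsInfo, actionInfo_DDPG, actionInfo_PPO, DDPG_agent, and PPO_agent are taken from the code above; their setup is elided there and is not shown here either.

```matlab
% Model name and FULL agent block paths -- a path like '/DDPG Sub-System/DDPG_Agent'
% omits the block diagram name, which would explain the reported error
mdl = 'sm_Experimental_Setup';
agentBlocks = {[mdl '/DDPG Sub-System/DDPG_Agent'], ...
               [mdl '/PPO Sub-System/PPO_Agent']};

% Both agents observe the same signals; each has its own action specification
obsInfos = {obsInfo, obsInfo};
actInfos = {actionInfo_DDPG, actionInfo_PPO};

% Multi-agent environment: one entry per agent block, in matching order
env = rlSimulinkEnv(mdl, agentBlocks, obsInfos, actInfos);

% Run one validation episode with both pre-trained agents
simOpts = rlSimulationOptions('MaxSteps', 2000);
xpr = sim(env, [DDPG_agent.agent, PPO_agent.agent], simOpts);
```

Note this is a sketch, not a verified fix: the key changes are prefixing the block paths with the model name, passing cell arrays for the multi-agent signature, and forwarding simOpts to sim so MaxSteps actually takes effect.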
Screen capture of Simulink model:
