How to implement reinforcement learning using code generation

I want to implement the reinforcement learning block in dSPACE using code generation, but Simulink pops up the error 'The 'AgentWrapper' class does not support code generation'. Is there a way to solve it?
Or is it possible to extract the neural network from the reinforcement learning agent and import it into Deep Learning Toolbox?
Thank you very much. Any suggestions are appreciated.

 Accepted Answer

As of R2020b, the RL Agent block does not support Code Generation (we are working on it) and is currently only used for training a Reinforcement Learning Agent in Simulink.
However, R2020b introduced native Simulink blocks such as 'Image Classifier' and 'Predict' in Deep Learning Toolbox, and the MATLAB Function block was enhanced to model deep learning networks in Simulink. These blocks allow using pre-trained networks, including Reinforcement Learning policies, in Simulink to perform inference.
Also, R2021a supports plain C code generation for deep learning networks (so no dependence on third-party libraries such as oneDNN), which enables code generation from the native DL blocks and the enhanced MATLAB Function block mentioned above.
Using these features, steps you could follow in R2021a are:
1) Use either the Predict block or the MATLAB Function block to replace the existing RL Agent block, and bring your trained agent's policy into Simulink
2) Use the plain C code generation feature to generate code for your Reinforcement Learning agent
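If you generate code for the policy function itself with MATLAB Coder (rather than building the whole Simulink model), step 2 could be sketched as below. The function name `evaluatePolicy`, the `agentData.mat` file produced by `generatePolicyFunction`, and the 4-by-1 observation size are assumptions for illustration; replace them with your model's actual sizes:

```matlab
% Sketch: generate plain C code for the policy function (R2021a or later).
% Assumes evaluatePolicy.m and agentData.mat already exist in the current
% folder, created by generatePolicyFunction(agent).
cfg = coder.config('lib');            % generate a static library
cfg.TargetLang = 'C';
% 'none' selects the plain C deep learning target, with no dependence on
% third-party libraries such as oneDNN (introduced in R2021a):
cfg.DeepLearningConfig = coder.DeepLearningConfig('none');
% The -args observation size (4-by-1 double) is an assumption; use the
% observation dimensions of your own environment:
codegen evaluatePolicy -config cfg -args {zeros(4,1)}
```

The generated library can then be integrated into the dSPACE build the same way as any other generated C code.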
Note:
To create a function that can be used within the MATLAB Function block to evaluate the learned policy (pre-trained agent), or to create the agentData that can be imported into the 'Predict' block, please refer to the 'generatePolicyFunction' API.
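As a minimal sketch of that API, assuming a trained agent object named `agent` in the workspace (e.g. returned from training), the workflow looks like:

```matlab
% Assumes 'agent' is a trained Reinforcement Learning Toolbox agent.
% generatePolicyFunction creates two artifacts in the current folder:
%   evaluatePolicy.m - a function mapping observations to an action
%   agentData.mat    - the policy data that evaluatePolicy.m loads
generatePolicyFunction(agent);

% Quick sanity check of the generated policy. The 4-element observation
% vector is only an illustration; use your environment's observation size:
obs = zeros(4,1);
action = evaluatePolicy(obs);
```

The `agentData.mat` file is the one you would then point the 'Predict' block at, while `evaluatePolicy.m` is the basis for the MATLAB Function block route.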

7 Comments

Thank you very much. When using the Predict block for reinforcement learning, what is the input of the block? As we know, the RL Agent block's action is determined by the observations and the reward calculation.
The input to the 'Predict' block is your observations, as it performs inference on the trained policy using the observations. The 'reward' input is not required in this case, as that is used only during RL training.
Note: To load the pre-trained RL agent into Simulink using the 'Predict' block, use the 'Load Network from MAT-file' option in the 'Block Parameters' and select the 'agentData.mat' file created using 'generatePolicyFunction'.
Thank you for the reply.
  1. I did exactly as you said, but it pops up a dimension problem: 'Invalid setting for input port dimensions', because the total numbers of input and output elements are not the same. It worked well with the RL Agent block, and I only replaced that block with the Predict block. It seems the output of Predict is the whole set of action elements rather than the single element that is the actual output of the RL agent.
  2. As for the other way, I implemented the RL agent with a user-defined function, but at the end of the simulation it pops up the error 'unable to save operating point because the function block contains state variables that are not compatible with simulation state save and restore', and the function in the error is actually the interface function you mentioned above.
  3. In code generation, the error is 'saving the operating point is only supported for models in Normal or Accelerator mode, and for model blocks in Normal or Accelerator mode'. Is this caused by the error in 2?
How should I solve it? Thank you very much for your help.
Hello,
The issue mentioned in point 1 is very similar to this MATLAB Answer. Please refer to it for more information.
Based on that MATLAB Answers post, using the MATLAB Function block in place of the predict block resolved the issue. Since you are facing issues with the MATLAB Function block setup as well, we might need to take a deeper look into the model.
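For reference, a MATLAB Function block body that evaluates the pre-trained policy might look like the sketch below. It mirrors the general structure of the code `generatePolicyFunction` emits, but the MAT-file name, the variable name inside it, and the function name are assumptions for your setup:

```matlab
function action = computeAction(observation)
% Sketch of a MATLAB Function block body that runs inference on a
% pre-trained RL policy. 'agentData.mat' and the network variable name
% 'policy' are assumptions; match them to what generatePolicyFunction
% produced for your agent.
persistent policy
if isempty(policy)
    % coder.loadDeepLearningNetwork supports code generation, unlike a
    % plain call to load:
    policy = coder.loadDeepLearningNetwork('agentData.mat', 'policy');
end
% Map observations to an action; no reward input is needed at inference:
action = predict(policy, observation);
end
```

Because the network is held in a `persistent` variable, it is loaded once on the first call rather than on every simulation step.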
Please contact Technical Support directly by creating a service request.
@Kishen Mahadevan are you still working on the code generation support of the RL agent? In that case: only for inference or for the training?
I am facing an issue in generating C code; can someone help?
This is for an Opal-RT based HIL using an RL agent.



Release

R2019b
