How to implement reinforcement learning using code generation
I want to implement the reinforcement learning block in dSPACE using code generation, but Simulink pops up the error 'The 'AgentWrapper' class does not support code generation'. Is there a way to solve this?
Alternatively, is it possible to extract the neural network from the reinforcement learning agent and import it into Deep Learning Toolbox?
Thank you very much. Any suggestions are appreciated.
Accepted Answer
Kishen Mahadevan
on 1 Mar 2021
As of R2020b, the RL Agent block does not support code generation (we are working on it) and is currently used only for training a reinforcement learning agent in Simulink.
However, R2020b introduced native Simulink blocks such as 'Image Classifier' and 'Predict' in Deep Learning Toolbox, and enhanced the MATLAB Function block to model deep learning networks in Simulink. These blocks allow pre-trained networks, including reinforcement learning policies, to be used for inference in Simulink.
Also, R2021a supports plain C code generation for deep learning networks (with no dependence on third-party libraries such as oneDNN), which enables code generation from the native DL blocks and the enhanced MATLAB Function block mentioned above.
Using these features, the steps you could follow in R2021a are:
1) Use either the Predict block or the MATLAB Function block to replace the existing RL Agent block, and pull your trained agent into Simulink.
2) Leverage the plain C code generation feature to generate code for your reinforcement learning agent.


Note:
To create a function that can be used within the MATLAB Function block to evaluate the learned policy (pre-trained agent), or to create the 'agentData' MAT-file that can be imported into the Predict block, please refer to the 'generatePolicyFunction' API.
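As a sketch of that export step (assuming a trained agent stored in a variable named `agent`; the generated file names `evaluatePolicy.m` and `agentData.mat` are the toolbox defaults, and `observation` here is a placeholder for a sample observation of the right size):

```matlab
% Sketch: export a trained agent's policy for use in Simulink.
% 'agent' is assumed to be a trained agent from Reinforcement Learning Toolbox.
generatePolicyFunction(agent);   % creates evaluatePolicy.m and agentData.mat

% The generated evaluatePolicy.m can be called at the command line to
% sanity-check the policy output before wiring it into Simulink:
action = evaluatePolicy(observation);
```

Checking the size of `action` against your model's action signal at this point can also catch dimension mismatches early.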
7 Comments
Thank you very much. When using the Predict block for reinforcement learning, what is the input of the block? As we know, the RL Agent block's action is determined by the observations and the reward calculation.
Kishen Mahadevan
on 2 Mar 2021
The input to the 'Predict' block is your observations, as it performs inference on the trained policy using the observations. The 'reward' input is not required in this case, as it is used only during RL training.
Note: To load the pre-trained RL agent into Simulink using the 'Predict' block, use the 'Load Network from MAT-file' option in the 'Block Parameters' and select the 'agentData.mat' file created using 'generatePolicyFunction'.
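For the MATLAB Function block alternative, the block body can simply delegate to the generated policy function. A minimal sketch, assuming `evaluatePolicy.m` (as produced by `generatePolicyFunction`) is on the MATLAB path and the wrapper name `rlPolicyBlock` is arbitrary:

```matlab
function action = rlPolicyBlock(observation)
% MATLAB Function block body (sketch): wire the observation signal in,
% the action signal out, and call the generated policy for inference.
% evaluatePolicy.m is the file generated by generatePolicyFunction(agent).
action = evaluatePolicy(observation);
end
```

Only the observation signal is connected; as noted above, no reward input is needed at inference time.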
Thank you for the reply.
- I did exactly as you said, but it reports a dimension problem: 'Invalid setting for input port dimensions' because the total numbers of input and output elements are not the same. It worked well with the RL Agent block, and I only replaced that block with the Predict block. It seems the output of Predict is the whole set of action elements rather than the single element that is the actual output of the RL agent.
- As for the other way, I implemented the RL agent with a user-defined function, but at the end of the simulation it reports the error 'unable to save operating point because the function block contains state variables that are not compatible with simulation state save and restore', and the function named in the error is actually the interface function you mentioned above.
- And in code generation, the error is 'saving the operating point is only supported for models in Normal or Accelerator mode, and for model blocks in Normal or Accelerator mode'. Is this caused by the error in 2?
How should I solve it? Thank you very much for your help.
Kishen Mahadevan
on 15 Mar 2021
Hello,
The issue mentioned in point 1 is very similar to this MATLAB Answers post; please refer to it for more information.
Based on that post, using the MATLAB Function block in place of the Predict block resolved the issue. Since you are facing issues with the MATLAB Function block setup as well, we might need to take a deeper look at the model.
Mirjan Heubaum
on 19 Jan 2023
@Kishen Mahadevan, are you still working on code generation support for the RL Agent block? If so, is it for inference only, or for training as well?
Mayank
on 12 Nov 2025
I am facing an issue generating C code. Can someone help?
Mayank
on 12 Nov 2025
This is for an Opal-RT based HIL setup using an RL agent.