Training of RL DDPG Agent is not working (Control of an Inverted pendulum)

This project started from a MathWorks example: Train DDPG Agent to Swing Up and Balance Pendulum.
The pendulum block in the model has been replaced with Simscape components, and the following have been added: a DC electric motor and a controllable voltage supply. See my_simscape_pendulum_model.slx.
I trained the agent using the settings in training.m.
The session was stopped after 17 hours and 796 episodes. Early on, I could see the pendulum rising to about 30 degrees above the downward hanging position before it stalled. This tells me there was enough torque available for the agent to use a back-and-forth rocking motion to raise the pendulum. However, after many hours the agent had not learned the rocking motion and appeared to be stuck in a bad policy. See the screenshot of the RL Episode Manager taken after training was stopped.
My research suggests that my learning rate or exploration options may need to be modified, but I have not been able to find documentation on how to do this.
Do you have any suggestions?

Accepted Answer

Yash Sharma on 24 Nov 2023
I understand that you have a reinforcement learning DDPG agent and want to set its learning rate and exploration options. You can set the learning rate through “rlOptimizerOptions”, and the exploration (action noise) settings through the noise options of “rlDDPGAgentOptions”.
Here is the documentation for optimizer options and the different exploration policies.
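As a concrete starting point, here is a minimal sketch of how those options fit together. The specific learning rates and noise values below are illustrative guesses, not tuned for your model, and the actor/critic construction is assumed to happen elsewhere in your training.m:

```matlab
% Sketch: configuring DDPG learning rates and exploration noise
% (Reinforcement Learning Toolbox, R2022a+ name=value syntax).
% All numeric values here are examples to tune, not recommendations.

% Learning rates are set per network via rlOptimizerOptions:
criticOpts = rlOptimizerOptions(LearnRate=1e-3, GradientThreshold=1);
actorOpts  = rlOptimizerOptions(LearnRate=1e-4, GradientThreshold=1);

agentOpts = rlDDPGAgentOptions( ...
    SampleTime=0.05, ...                 % match your model's sample time
    CriticOptimizerOptions=criticOpts, ...
    ActorOptimizerOptions=actorOpts);

% DDPG explores with Ornstein-Uhlenbeck action noise. If the agent
% stalls in a bad policy, try a larger standard deviation and a slower
% decay so exploration persists longer:
agentOpts.NoiseOptions.StandardDeviation = 0.3;
agentOpts.NoiseOptions.StandardDeviationDecayRate = 1e-5;

% Apply the options when constructing the agent
% (actor and critic are assumed to be built elsewhere):
% agent = rlDDPGAgent(actor, critic, agentOpts);
```

If the pendulum reaches about 30 degrees and then stalls, keeping the noise standard deviation higher for longer is often the first thing to try, since it lets the agent keep discovering the rocking motion instead of converging early.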
Hope this helps!

