Community Profile


Sayak Mukherjee


Last seen: 1 year ago | Active since 2020

Programming Languages:
Python, MATLAB
Spoken Languages:
Bengali, English, Hindi

Statistics

  • Revival Level 1
  • Thankful Level 1


Content Feed


Question


Mirror symmetry in actions in reinforcement learning
I am training an RL control problem to perform neck kinematics. I want the action space to have mirror symmetry as explained in ...

1 year ago | 0 answers | 0
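
One way to get mirror symmetry without touching the learning algorithm is to let the agent output only one side of the actions and rebuild the full command by reflection inside the environment. A minimal sketch with made-up names (mirrorActions, mirrorIdx), assuming the actuators come in left/right pairs:

    % Hypothetical helper: the agent chooses the left-side actions only and
    % the right side is filled in by mirroring, so symmetry holds by construction.
    % mirrorIdx(k) is the index of the right-side actuator paired with left actuator k.
    function fullAction = mirrorActions(halfAction, mirrorIdx)
        n = numel(halfAction);
        fullAction = zeros(2*n, 1);
        fullAction(1:n)       = halfAction;   % left-side actuators
        fullAction(mirrorIdx) = halfAction;   % mirrored right-side actuators
    end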


Question


Control the exploration in soft actor-critic
What is the best way to control the exploration in a SAC agent? For a TD3 agent I used to control the exploration by adjusting the v...

2 years ago | 1 answer | 1
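
For SAC the exploration comes from the entropy term rather than an explicit noise model, so the usual knob is the entropy weight. A minimal sketch, assuming a Reinforcement Learning Toolbox release where rlSACAgentOptions exposes EntropyWeightOptions (property names have shifted between releases, so check yours):

    agentOpts = rlSACAgentOptions;
    % A larger entropy weight pushes the policy toward more exploration;
    % smaller values make it greedier.
    agentOpts.EntropyWeightOptions.EntropyWeight = 1;
    agentOpts.EntropyWeightOptions.LearnRate     = 3e-4;   % how fast the weight adapts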


Question


Reinforcement learning agent not being saved during training
I am trying to train my model using a TD3 agent. During the training process I am trying to save the agent above a certain episode...

2 years ago | 1 answer | 0
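
For reference, agent saving is driven entirely by the training options; if the criterion is never met, nothing is written to disk. A minimal sketch using the standard rlTrainingOptions properties (the threshold and folder name are just examples):

    trainOpts = rlTrainingOptions( ...
        'MaxEpisodes',        5000, ...
        'SaveAgentCriteria',  'EpisodeReward', ...  % save when the episode reward ...
        'SaveAgentValue',     100, ...              % ... exceeds this value
        'SaveAgentDirectory', 'savedAgents');       % folder for the Agent*.mat files
    trainingStats = train(agent, env, trainOpts);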


Question


Don't need to save 'savedAgentResultStruct' with RL agent
When I am saving agents during RL iterations using the 'EpisodeReward' criteria, MATLAB is also saving 'savedAgentResultStruct' alon...

3 years ago | 0 answers | 0
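
The saved .mat files bundle the agent together with that result struct; a simple workaround is to load, or resave, only the agent variable. A minimal sketch assuming the default file and variable names (Agent*.mat holding saved_agent — verify against your save directory):

    % Load only the agent, ignoring savedAgentResultStruct.
    data  = load(fullfile('savedAgents', 'Agent250.mat'), 'saved_agent');
    agent = data.saved_agent;
    % Optionally write a lean copy that contains just the agent.
    save(fullfile('savedAgents', 'Agent250_lean.mat'), 'agent');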


Question


Change revolute joint parameter in env.ResetFcn during reinforcement learning
What is the best way to randomize the initial revolute joint angle during each episode of reinforcement learning? Right now I am...

3 years ago | 0 answers | 0
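
For a Simulink environment the usual pattern is a ResetFcn that returns a modified Simulink.SimulationInput, with the joint's initial angle read from a workspace variable. A minimal sketch, assuming the Revolute Joint's initial position is wired to a variable named theta0 (the names are illustrative):

    env.ResetFcn = @(in) randomizeJoint(in);

    function in = randomizeJoint(in)
        theta0 = deg2rad(-30 + 60*rand);          % uniform initial angle in [-30, 30] deg
        in = setVariable(in, 'theta0', theta0);   % the joint block reads theta0
    end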


Question


What is the best activation function to get action between 0 and 1 in DDPG network?
I am using a DDPG network to run a control algorithm which has inputs (actions of the RL agent, 23 in total) varying between 0 and 1. ...

3 years ago | 1 answer | 0
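
Two common choices for bounding the actor output to [0, 1] are a sigmoid layer, or a tanh layer followed by an affine rescale. A minimal sketch of the second option, assuming the Reinforcement Learning Toolbox scalingLayer is available (obsDim and the layer names are placeholders):

    actorLayers = [
        featureInputLayer(obsDim, 'Name', 'obs')
        fullyConnectedLayer(128, 'Name', 'fc1')
        reluLayer('Name', 'relu1')
        fullyConnectedLayer(23, 'Name', 'fc_out')                  % one output per action
        tanhLayer('Name', 'tanh')                                  % squashes to [-1, 1]
        scalingLayer('Name', 'scale', 'Scale', 0.5, 'Bias', 0.5)   % maps [-1, 1] to [0, 1]
        ];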


Question


Expected reward blows up while training (DDPG agent, reinforcement learning)
I am training a DDPG network and after training for around 5000 iterations the model does not seem to converge while the e...

3 years ago | 1 answer | 0
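
A diverging expected reward in DDPG is often a critic-stability issue; lowering the critic learn rate, clipping gradients, and slowing the target updates are the usual first steps. A rough sketch, assuming a release where rlOptimizerOptions and the CriticOptimizerOptions property exist (older releases expose the same knobs through rlRepresentationOptions):

    criticOpts = rlOptimizerOptions('LearnRate', 1e-4, 'GradientThreshold', 1);
    agentOpts  = rlDDPGAgentOptions( ...
        'CriticOptimizerOptions', criticOpts, ...
        'TargetSmoothFactor',     1e-3, ...   % slower target-network updates
        'DiscountFactor',         0.99);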


Question


Use saved reinforcement learning DDPG agent
I have saved a DDPG agent using the option rlTrainingOptions.SaveAgentValue = 3000. During the simulations a number of agents are ...

3 years ago | 1 answer | 0
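
Using one of the saved agents afterwards is just a matter of loading the .mat file and simulating it against the environment. A minimal sketch, again assuming the default Agent*.mat naming and the saved_agent variable name:

    data  = load(fullfile('savedAgents', 'Agent3000.mat'), 'saved_agent');
    agent = data.saved_agent;

    simOpts    = rlSimulationOptions('MaxSteps', 1000);
    experience = sim(env, agent, simOpts);    % run the trained policy once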
