# Federico Toso

Followers: 0 | Following: 0

**Statistics**

MATLAB Answers
- Rank: 11,338 of 294,020
- Reputation: 4
- Contributions: 28 questions, 2 answers
- Answer acceptance: 57.14%
- Votes received: 4

File Exchange
- Rank (of 20,067)
- Reputation: N/A
- Average rating: 0.00
- Contributions: 0 files
- Downloads: 0 (all time: 0)

Cody
- Rank (of 150,395)
- Contributions: 0 problems, 0 solutions
- Score: 0
- Number of badges: 0

Discussions
- Contributions: 0 posts

ThingSpeak
- Contributions: 0 public channels
- Average rating

Highlights
- Contributions: 0 highlights
- Average no. of likes

**Feeds**

Question

Programmatically draw action signal line in a Simulink model

I have a Simulink model with two blocks: a Switch Case Action Subsystem block and a Switch Case block. I would like to programmati...

14 days ago | 1 answer | 0
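For the question above, a common approach is to connect the blocks through their port handles; the Action port of a Switch Case Action Subsystem appears as the `Ifaction` field of the port-handle struct. A minimal sketch, with the model and block names assumed for illustration:

```matlab
% Hedged sketch (model/block names assumed): draw the action signal line
% from a Switch Case block's case output to the Action port of a
% Switch Case Action Subsystem, entirely from code.
mdl = 'myModel';                                        % assumed model name
src = get_param([mdl '/Switch Case'], 'PortHandles');
dst = get_param([mdl '/Switch Case Action Subsystem'], 'PortHandles');

% Action ports are exposed via the 'Ifaction' field of the handle struct.
add_line(mdl, src.Outport(1), dst.Ifaction, 'autorouting', 'on');
```

Using port handles rather than the `'Block/Port'` string syntax avoids ambiguity for special ports such as action, enable, and trigger ports.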

Answer

Disable logging to disk from Simulink, during Reinforcement Learning training

Hello, thank you for the suggestions. Unfortunately I haven't been able to solve the problem so far. Actually I would like to...

2 months ago | 0

Question

Disable logging to disk from Simulink, during Reinforcement Learning training

I'm using the train function to run a Reinforcement Learning training using a PPO agent, with a rlSimulinkEnv object defining th...

2 months ago | 2 answers | 0

Question

Assertion block does not stop simulation if I run the model with "sim" function

Hi, I'm having issues with the Assertion block in Simulink when it comes to pausing the current simulation. Please refer to the...

3 months ago | 1 answer | 0

Answer

I cannot evaluate "pauseFcn" callback by using "sim" command

Hi, I have the same problem, did you find a solution?

3 months ago | 0

Question

Learning rate schedule - Reinforcement Learning Toolbox

The current version of Reinforcement Learning Toolbox requires you to set a fixed learning rate for both the actor and critic neural...

6 months ago | 1 answer | 0

Question

PPO Agent training - Is it possible to control the number of epochs dynamically?

In the default implementation of the PPO agent in MATLAB, the number of epochs is a static property that must be selected before the ...

6 months ago | 1 answer | 0

Question

PPO Agent - Initialization of actor and critic networks

Whenever a PPO agent is initialized in Matlab, according to the documentation the parameters of both the actor and the critic ar...

6 months ago | 1 answer | 0

Question

Use current simulation data to initialize new simulation - RL training

In the context of PPO Agent training, I would like to use Welford's algorithm to calculate the running average and standard dev...

6 months ago | 1 answer | 0
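The Welford update mentioned in the question above can be written in a few lines of plain MATLAB; the sample data below is illustrative, standing in for observations streamed during training:

```matlab
% Minimal sketch of Welford's online algorithm: running mean and
% variance updated one observation at a time, without storing history.
data = [2 4 4 4 5 5 7 9];           % stand-in for streamed observations
n = 0; mu = 0; M2 = 0;
for x = data
    n     = n + 1;
    delta = x - mu;
    mu    = mu + delta / n;          % updated running mean
    M2    = M2 + delta * (x - mu);   % note: uses the *updated* mean
end
sigma2 = M2 / n;                     % population variance (n-1 for sample)
% For this data: mu = 5, sigma2 = 4
```

Because only `n`, `mu`, and `M2` need to be carried across simulations, the state is small enough to pass between episodes (e.g. via a reset function).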

Question

Minibatch construction for PPO agent in parallel synchronous mode

If I understood the documentation correctly, when a PPO agent is trained in parallel synchronous mode each worker sends its own e...

7 months ago | 1 answer | 0

Question

PPO minibatch size for parallel training with variable number of steps

I'm training a PPO Agent in sync parallelization mode. Because of the nature of my environment, the number of steps is not the ...

7 months ago | 1 answer | 0

Question

Parallel Training of Multiple RL Agents in same environment

In the context of Reinforcement Learning Toolbox, it is possible to set "UseParallel" to "true" within "rlTrainingOptions" in or...

7 months ago | 1 answer | 0

Question

Advantage normalization for PPO Agent

When dealing with PPO Agents, it is possible to set a "NormalizedAdvantageMethod" to normalize the advantage function values fo...

7 months ago | 1 answer | 0

Question

Training Reinforcement Learning Agents --> Use ResetFcn to delay the agent's behaviour in the environment

I would like to train my RL Agent in an environment which is represented by an FMU block in Simulink. Unfortunately whenever a ...

8 months ago | 1 answer | 0

Question

FMU Cosimulation using imported variable-step solver

I have a model in Dymola which runs properly (in terms of speed & accuracy) if I use a local variable-step solver. I imported i...

8 months ago | 1 answer | 0

Question

Simulink Code Generation Workflow for Subsystem

In my understanding, if all blocks in a Simulink subsystem support Code Generation, then it is possible to treat the whole subsy...

10 months ago | 1 answer | 0

Question

Maximize output of Neural Network after training

Suppose that I've successfully trained a neural network. Given that the weights are now fixed, is there a way to find the input ...

11 months ago | 2 answers | 0
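The question above is an input-optimization problem: with the weights frozen, the network is just a fixed function f(x), and one can ascend its gradient with respect to the input. A toy quadratic stands in for the trained network here; for a real `dlnetwork`, the gradient could come from `dlgradient` instead of finite differences:

```matlab
% Hedged sketch: gradient ascent on the INPUT of a fixed function.
% f is a stand-in for the trained network (assumption for illustration).
f  = @(x) -(x - 3).^2 + 10;     % toy "network": maximum at x = 3
x  = 0;                         % initial input guess
lr = 0.1;                       % step size
h  = 1e-6;                      % finite-difference step
for k = 1:200
    g = (f(x + h) - f(x - h)) / (2*h);   % numerical gradient wrt input
    x = x + lr * g;                      % ascend the output surface
end
% x converges toward 3, the input that maximizes f
```

Note this finds a local maximum only; multiple restarts from different initial inputs are the usual mitigation for non-convex networks.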

Question

Documentation about centralized Learning for Multi Agent Reinforcement Learning

I know that it is now possible in MATLAB to train multiple agents within the same environment for a collaborative task, usin...

11 months ago | 1 answer | 1

Question

Reinforcement Learning - PPO agent with hybrid action space

I have a task which involves both discrete and continuous actions. I would like to use PPO since it seems suitable in my case. ...

11 months ago | 1 answer | 0

Question

Reinforcement Learning - SAC with hybrid action spaces

The current implementation of the Soft Actor Critic (SAC) algorithm in MATLAB only applies to problems with continuous action spaces. I...

12 months ago | 1 answer | 0

Question

Access variable names for Simscape block through code

I would like to access the name of the variables of a generic Simscape block which is used in my model. The function "get_param...

1 year ago | 1 answer | 0

Question

Stateflow states ordering in Data Inspector

When you use a Stateflow chart within the Simulink framework, there is the possibility to log the active state. Then, once the simul...

1 year ago | 1 answer | 0

Question

Number of variables vs number of equations in Simscape components

When I define a new custom component in Simscape, as a general rule I take care that the number of equations in the "equations" ...

1 year ago | 1 answer | 0

Question

Corrective action after Newton iteration exception

During a typical Simulink simulation, if a variable-step solver is used, when the error tolerances are not satisfied the solver ...

2 years ago | 1 answer | 0

Question

Details of daessc solver

MATLAB has many ODE solvers available, and each of them is properly documented. However, when it comes to the "daessc" solve...

2 years ago | 1 answer | 2

Question

Why should I tighten error tolerances if I am violating minimum stepsize?

The following is a typical warning message from Simulink that can be displayed after a model has been simulated: "Solver was u...

2 years ago | 1 answer | 0

Question

Simscape - Transient initialization vs Transient Solve

According to the Workflow presented here, Transient Initialization and Transient Solve are the last phases of Simscape Simulatio...

2 years ago | 1 answer | 0

Question

Access Simscape data in Simulation Manager

I performed multiple simulations of my model using the "Multiple simulations" option in Simulink. My "Design study" is very simp...

2 years ago | 0 answers | 0

Question

Simulink keyboard shortcut for connection

I searched the documentation but could not find any keyboard shortcut to connect Simulink blocks without the aid of the mouse. ...

2 years ago | 1 answer | 1

Question

Proper system linearization for tracking problems

I am familiar with the Model linearization tool provided by MATLAB, which allows you to linearize systems around a specific operatin...

2 years ago | 1 answer | 0
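For the linearization question above, the usual programmatic route with Simulink Control Design is to trim the model to an operating point and then linearize about it. A minimal sketch using the shipped `watertank` example model (the I/O points are assumed to be marked in the model):

```matlab
% Hedged sketch (Simulink Control Design assumed installed):
% trim a model to steady state, then linearize about that point.
mdl = 'watertank';                 % example model shipped with the toolbox
op  = findop(mdl, operspec(mdl));  % compute a steady-state operating point
io  = getlinio(mdl);               % linearization I/O points set in the model
sys = linearize(mdl, io, op);      % LTI state-space model around op
```

For tracking problems, the operating point is often specified from known input/output targets by constraining the `operspec` object before calling `findop`.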