all input coordinates must have the same type. input coordinates must be of type double or single.

I have created a custom environment with a 7-by-1 observation and a 2-by-1 action. When I try to train the DDPG agent, training doesn't start. Instead, a pop-up window appears with the following message:
"Invalid setting
all input coordinates must have the same type. input coordinates must be of type double or single."
Could someone please help me with this? What are the input coordinates, and how do I need to change them?
I'm really looking forward to an answer, thanks for your help!

2 Comments

Read about class. Check which class your inputs belong to. You can convert them to single or double using the functions single and double. If this is not helpful, share your code.
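For example (x here is just a placeholder value):

```matlab
x = int32(5);      % suppose an input ended up as an integer type
class(x)           % -> 'int32'
x = double(x);     % convert it before passing it to the agent
class(x)           % -> 'double'
```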
Hi KSSV,
I don't really understand what the 'inputs' are. Are they the actions and observations exchanged between the agent and the environment, or everything I define in the environment's class definition (for example, the properties)?
Maybe I also have to say that I use the reinforcement learning designer app.
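One way to narrow this down outside the app is to check the classes of what the environment actually returns, and to run validateEnvironment — a sketch (it assumes the Environment class below is saved as Environment.m on the MATLAB path):

```matlab
env = Environment();
obs0 = reset(env);
disp(class(obs0))                          % the observation should be 'double' (or 'single')
[obs, reward, isdone, ~] = step(env, [0.5; 1]);
disp(class(obs)), disp(class(reward))      % these should be 'double' as well
validateEnvironment(env)                   % exercises reset/step and reports type or size problems
```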
Maybe my code for the environment class definition is helpful. Please note that the helper functions (ecms, getnewsoc, getresistance, getreward) are not defined in the methods section at the end. I keep them in the same folder as this class definition, because they could not be found when I defined them in the methods block within the class definition. MATLAB always said:
"Error using rl.env.MATLABEnvironment/validateEnvironment (line 42)
There was an error evaluating the step function.
Caused by:
Undefined function 'getresistance' for input arguments of type 'double'."
Maybe it has to do with the helper function definitions? If so, how do I define them properly in the methods block?
Thanks for your help!
classdef Environment < rl.env.MATLABEnvironment
%ENVIRONMENT: Template for defining custom environment in MATLAB.
%% Properties (set properties' attributes accordingly)
properties
t = 1;
cycle = zeros(6,1);
cyclename = 'abc';
sigma_0 = 0;
last_ess = 0;
P_batt_10 = zeros(1,10);
end
properties (Constant)
dr1 = load('dr1');
dr2 = load('dr2');
% gear ratios
i_gear = [5.354, 3.243, 2.252, 1.636, 1.211, 1.000, 0.865, 0.717, 0.601]';
% final drive ratio
i_final = 3.07;
% mass factors
k_m = [1.32, 1.16, 1.11, 1.08, 1.07, 1.07, 1.07, 1.06, 1.06]';
% [W] max e-motor power
P_e = 25000;
% [W] max battery power
P_batt_max = 33000;
% rolling resistance coefficient
f_roll = 0.011;
% [kg] vehicle mass
m = 1625;
% [kg] additional mass
m_a = 0;
% [m/s^2] gravitational acceleration
g = 9.81;
% [kg/m^3] air density at 20°C
rho_air = 1.2;
% [m^2] air drag coefficient multiplied by face area
cda = 0.54;
% [m] wheel radius
r_wheel = 0.32;
% [W] Power of attachements
P_att = 800;
% drivetrain efficiency
eff_dt = 0.9;
% [J/kg] fuel lower heat value
H_f = 42.5e+6;
% [kg/J] reciprocal fuel lower heat value
e_ICE = 2.35294e-8;
% battery efficiency
eff_bat = 0.95;
% SOC list
soc_list = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0];
% [As] battery capacity
Q_b = 20*3600;
% [Ohm] internal resistance of battery
R_int = 0.334 / 20;
% [V] open circuit voltage at SOC from SOC list
V_oc = [37.78, 45.30, 46.01, 46.56, 46.94, 47.38, 48.00, 48.78, 49.68, 50.66, 51.74];
% reference SOC
SOC_ref = 0.55;
% minimal SOC
SOC_min = 0.3;
% maximal SOC
SOC_max = 0.8;
% [rad/s] ICE speed list (0.10472=2pi/60)
w_ICE_list = 0.10472*[1100, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000, 6300];
% [Nm] ICE torque list
t_ICE_list = [0, 30, 50, 70, 90, 110, 130, 150, 170, 190, 210, 230, 250, 270, 290, 310, 330, 350];
% ICE efficiency map at w_ICE and t_ICE
eff_ICE_table = 0.01*[...
0.01 21.18 26.06 28.71 30.80 32.33 33.35 34.16 34.57 34.16 33.22 32.70 32.21 31.96 31.96 31.96 31.96 31.96
0.01 21.18 26.47 29.72 31.84 33.09 34.16 34.86 35.29 35.00 35.00 34.72 34.02 33.22 32.96 32.33 32.21 32.09
0.01 21.18 26.47 29.72 31.73 32.96 34.02 34.86 35.29 35.29 35.29 35.29 35.15 34.72 34.16 33.48 33.22 33.09
0.01 21.18 26.47 29.41 31.61 32.83 34.02 34.86 35.29 35.29 35.29 35.29 35.29 35.00 34.57 34.29 34.02 33.48
0.01 20.17 25.67 28.42 30.91 32.33 33.48 34.29 35.00 35.29 35.29 35.29 35.29 34.86 34.29 33.88 32.58 32.21
0.01 20.17 25.29 28.24 30.36 31.84 32.83 33.75 34.29 34.72 34.86 34.72 34.43 33.88 33.22 32.33 31.37 30.25
0.01 20.17 24.20 27.50 29.41 30.80 31.73 32.70 33.35 33.88 33.88 33.35 32.58 32.58 31.84 30.80 30.04 29.72
0.01 18.82 23.53 26.47 28.52 29.72 30.80 31.61 31.73 31.61 31.26 31.37 31.14 31.26 30.47 30.04 29.72 29.72
0.01 18.82 23.21 25.51 27.32 28.52 29.72 30.25 30.36 30.25 30.04 30.04 29.62 29.51 29.41 29.41 29.41 29.41
0.01 18.82 21.44 24.20 26.22 27.50 28.62 28.62 28.52 28.52 28.52 28.42 28.52 28.71 28.71 28.71 28.71 28.71
0.01 18.82 20.17 23.02 24.62 25.67 26.31 26.64 26.64 25.82 25.51 24.55 24.55 24.55 24.55 24.55 24.55 24.55
0.01 18.82 20.17 22.00 23.33 24.20 24.91 25.29 25.67 24.91 24.20 24.20 24.20 24.20 24.20 24.20 24.20 24.20];
% [kg/s] consumption rate map at w_ICE and t_ICE
consrate_ICE_table = cell2mat(struct2cell(load('consrate_ICE_table')));
% [rad/s] ICE drag speed list
w_ICE_drag_list = 0.10472*[0, 1000, 2000, 3000, 4000, 5000, 6000];
% [Nm] ICE drag torque list
t_ICE_drag_list = [6.3, 6.3, 8.4, 10.7, 13.1, 17.0, 20.7];
% [rad/s] e-motor speed list
w_em_list = 0.10472*[0, 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000];
% [Nm] e-motor torque list
t_em_list = 15000 / 40000 *[-260, -230, -200, -170, -140, -110, -80, -50, -20, -11, -5, 0, 5, 10 ,20 ,50, 80, 110, 140, 170, 200, 230, 260];
% e-motor efficiency map at w_em and t_em
eff_em_table = 0.01*[...
0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0 50.0
0.01 10.0 25.0 40.0 62.0 73.6 80.0 85.0 85.0 80.0 70.0 50.0 70.0 80.0 85.0 86.0 82.5 78.7 74.6 71.6 68.8 66.4 64.8
70.0 73.2 77.0 80.0 83.5 86.5 89.4 91.5 90.0 85.0 70.0 50.0 70.0 82.0 90.0 91.6 90.0 87.5 85.5 83.1 81.1 78.8 76.4
80.4 82.2 84.5 86.4 88.3 90.3 91.7 93.0 90.0 83.0 70.0 50.0 70.0 82.0 90.0 92.9 92.3 91.2 89.5 87.5 86.3 85.0 84.0
84.5 86.1 87.8 89.5 91.1 92.7 94.0 94.2 92.0 87.0 70.0 50.0 70.0 85.0 91.2 94.0 93.1 92.2 90.9 59.5 88.0 88.0 88.0
89.5 89.5 89.5 91.0 92.4 93.8 94.5 94.7 93.0 89.0 70.0 50.0 70.0 85.0 91.0 93.4 93.1 92.2 91.0 90.0 90.0 90.0 90.0
91.5 91.5 91.5 91.5 92.8 94.1 94.8 95.3 93.0 88.0 70.0 50.0 70.0 84.0 90.5 93.1 92.8 92.1 91.0 91.0 91.0 91.0 91.0
92.8 92.8 92.8 92.8 92.8 94.1 95.0 95.7 92.7 87.0 70.0 50.0 70.0 82.0 90.0 92.7 92.4 91.8 91.8 91.8 91.8 91.8 91.8
92.6 92.6 92.6 92.6 92.6 93.8 94.7 95.3 92.0 85.0 70.0 50.0 70.0 81.0 89.0 92.2 92.0 91.0 91.0 91.0 91.0 91.0 91.0
93.2 93.2 93.2 93.2 93.2 93.2 94.3 94.7 92.0 85.0 70.0 50.0 70.0 80.0 87.5 91.6 91.0 91.0 91.0 91.0 91.0 91.0 91.0
93.0 93.0 93.0 93.0 93.0 93.0 94.0 94.4 90.4 82.0 70.0 50.0 70.0 80.0 86.5 91.0 90.5 90.5 90.5 90.5 90.5 90.5 90.5
93.5 93.5 93.5 93.5 93.5 93.5 93.5 93.8 90.0 82.0 70.0 50.0 70.0 78.0 85.6 90.3 90.0 90.0 90.0 90.0 90.0 90.0 90.0
92.8 92.8 92.8 92.8 92.8 92.8 92.8 93.0 90.0 80.0 70.0 50.0 70.0 75.0 85.0 88.0 88.0 88.0 88.0 88.0 88.0 88.0 88.0
];
end
properties
% Initialize system state [v, a, alpha, gear, soc, sigma, t_ICE]'
State = zeros(7,1)
end
properties(Access = protected)
% Initialize internal flag to indicate episode termination
IsDone = false
end
%% Necessary Methods
methods
% Constructor method creates an instance of the environment
% Change class name and constructor name accordingly
function this = Environment()
% Initialize Observation settings
ObservationInfo = rlNumericSpec([7 1]);
ObservationInfo.Name = 'Vehicle State';
ObservationInfo.Description = 'v, a, alpha, gear, soc, sigma, t_ICE';
% Initialize Action settings
ActionInfo = rlNumericSpec([2 1],'LowerLimit',0,'UpperLimit',1);
ActionInfo.Name = 'Vehicle Action';
ActionInfo.Description = 'lambda, sigma';
% The following line implements built-in functions of RL env
this = this@rl.env.MATLABEnvironment(ObservationInfo,ActionInfo);
% Initialize property values and pre-compute necessary values
% updateActionInfo(this);
end
% Apply the system dynamics and simulate the environment for one
% step with the given action.
function [Observation,Reward,IsDone,LoggedSignals] = step(this,Action)
LoggedSignals = [];
v = this.State(1);
a = this.State(2);
alpha = this.State(3);
gear = this.State(4);
soc = this.State(5);
lastsigma = this.State(6);
lastt_ICE = this.State(7);
% calculate driving resistance
[F_w, w_wheel, w_crank, t_crank] = getresistance(v, a, alpha, gear);
% ECMS
lambda = Action(1); % equivalence factor
sigma = round(Action(2)); % ICE on/off
[t_em_opt, fuel_opt, w_ICE, t_ICE, sigmapenalty] = ecms(t_crank, w_crank, lambda, sigma, soc);
% calculate the new SOC
[volt, P_em, P_bat, I_b, newsoc] = getnewsoc(soc, t_em_opt, w_crank);
% save variables
results.soc(this.t) = soc;
results.F_w(this.t) = F_w;
results.w_crank(this.t) = w_crank;
results.t_crank(this.t) = t_crank;
results.t_em_opt(this.t) = t_em_opt;
results.w_ICE(this.t) = w_ICE;
results.t_ICE(this.t) = t_ICE;
results.fuel_opt(this.t) = fuel_opt;
results.volt(this.t) = volt;
results.P_em(this.t) = P_em;
results.P_bat(this.t) = P_bat;
results.I_b(this.t) = I_b;
results.lambda(this.t) = lambda;
results.sigma(this.t) = sigma;
filename = this.cyclename + ".results.mat";
save(filename ,"results")
% calculate Reward
delta_ess = sigma - lastsigma;
if delta_ess == 0
this.last_ess = this.last_ess + 1;
elseif abs(delta_ess) == 1
this.last_ess = 0;
end
for k = 10:-1:2 % push old battery power values one step back
this.P_batt_10(k) = this.P_batt_10(k-1);
end
this.P_batt_10(1) = P_bat; % save last battery power value in array
Reward = getreward(sigma, sigmapenalty, w_ICE, t_ICE, delta_ess, this.last_ess, soc, this.P_batt_10);
if this.t < length(this.cycle.v)
new_t = this.t + 1;
this.t = new_t;
IsDone = false;
elseif this.t == length(this.cycle.v)
IsDone = true;
end
v = this.cycle.v(this.t);
a = this.cycle.a(this.t);
alpha = this.cycle.alpha(this.t);
gear = this.cycle.gear(this.t);
Observation = [v; a; alpha; gear; newsoc; sigma; t_ICE];
this.State = Observation;
end
% Reset environment to initial state and output initial observation
function InitialObservation = reset(this)
this.t = 1;
n = randi(12); % pick one of the 12 drive cycles uniformly at random
switch n
case 1
this.cycle = this.dr1.dr1.dr1_1;
this.cyclename = 'dr1_1';
case 2
this.cycle = this.dr1.dr1.dr1_3;
this.cyclename = 'dr1_3';
case 3
this.cycle = this.dr1.dr1.dr1_4;
this.cyclename = 'dr1_4';
case 4
this.cycle = this.dr1.dr1.dr1_5;
this.cyclename = 'dr1_5';
case 5
this.cycle = this.dr1.dr1.dr1_6;
this.cyclename = 'dr1_6';
case 6
this.cycle = this.dr1.dr1.dr1_7;
this.cyclename = 'dr1_7';
case 7
this.cycle = this.dr1.dr1.dr1_8;
this.cyclename = 'dr1_8';
case 8
this.cycle = this.dr1.dr1.dr1_9;
this.cyclename = 'dr1_9';
case 9
this.cycle = this.dr1.dr1.dr1_11;
this.cyclename = 'dr1_11';
case 10
this.cycle = this.dr1.dr1.dr1_12;
this.cyclename = 'dr1_12';
case 11
this.cycle = this.dr1.dr1.dr1_13;
this.cyclename = 'dr1_13';
case 12
this.cycle = this.dr1.dr1.dr1_14;
this.cyclename = 'dr1_14';
end
v = double(this.cycle.v(1));
a = double(this.cycle.a(1));
alpha = double(this.cycle.alpha(1));
gear = double(this.cycle.gear(1));
soc = double(this.SOC_ref); % start each cycle at the reference SOC
sigma = double(0); % start with ICE off
t_ICE = double(0); % start with ICE off
this.P_batt_10 = double(zeros(1,10));
InitialObservation = [v; a; alpha; gear; soc; sigma; t_ICE];
this.State = InitialObservation;
this.t = 1;
% (optional) use notifyEnvUpdated to signal that the
% environment has been updated (e.g. to update visualization)
% notifyEnvUpdated(this);
end
end
%% Optional Methods (set methods' attributes accordingly)
methods (Access = public)
end
methods (Access = protected)
% (optional) update visualization every time the environment is updated
% (notifyEnvUpdated is called)
% function envUpdatedCallback(this)
% end
end
end
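Regarding the helper-function question above: a function defined inside a methods block becomes a method, so it receives the object as an implicit first argument and must be called via the object (or with that extra argument). Calling it like a plain function, e.g. getresistance(v, a, alpha, gear), fails dispatch and produces exactly the "Undefined function 'getresistance' for input arguments of type 'double'" error. A minimal sketch — the getresistance signature is copied from the call in step, and its body is a placeholder:

```matlab
classdef Environment < rl.env.MATLABEnvironment
    % ... properties and constructor as in the listing above ...
    methods
        function [Observation,Reward,IsDone,LoggedSignals] = step(this,Action)
            % ...
            % call the helper as a method, passing the object first:
            [F_w, w_wheel, w_crank, t_crank] = getresistance(this, v, a, alpha, gear);
            % equivalently: this.getresistance(v, a, alpha, gear)
            % ...
        end
    end
    methods (Access = private)
        % helper defined as a private method: note the extra first argument
        function [F_w, w_wheel, w_crank, t_crank] = getresistance(this, v, a, alpha, gear)
            % ... placeholder body ...
        end
    end
end
```

Alternatively, if the helper does not need access to the object's properties, declare it in a methods (Static) block and call it as Environment.getresistance(v, a, alpha, gear); then no object argument is needed.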


Answers (0)


Release: R2021a

Asked on 12 Jul 2021

Edited on 14 Jul 2021
