Ensemble Neural Network in Matlab

Dear All, I am creating an ensemble neural network comprising 3 component neural networks (NNs), each with a different number of hidden neurons. Each component NN has 3 input neurons, 2 output neurons, and 1 hidden layer, with the hidden layer size set to 3, 2, and 1 respectively. The 3 input neurons correspond to on-site displacement measurements at 3 different locations, while the 2 output neurons correspond to Young's modulus (E) and the horizontal-to-vertical stress ratio (k). There are 39 training samples and 10 testing samples.
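The ensemble prediction is the simple average of the three component outputs, as implemented later in the code:
enoutput = (f1 + f2 + f3)/ensemblesize;
where f1, f2, f3 are the outputs of the three component NNs.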
These are the testing results:
E = [77.78224 42.33231 59.33344 53.02192 49.52091 85.40332 56.09939 53.33929 56.81571 16.95089];
target E = [80 70 60 30 50 80 40 70 30 20];
error E = [-2.21776 -27.6677 -0.66656 23.02192 -0.47909 5.403321 16.09939 -16.6607 26.81571 -3.04911];
k = [2.104783 2.32816 2.943708 3.00457 2.723133 2.291231 3.009603 3.024749 3.021532 3.002399];
target k = [1 1.5 2 2 2.5 2.5 3 3 3.5 4];
error k = [1.104783 0.82816 0.943708 1.00457 0.223133 -0.20877 0.009603 0.024749 -0.47847 -0.9976];
(Here error E = E - target E; error k is defined the same way.)
The results are not good, especially for the k values.
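To quantify this, the per-output root-mean-square error can be computed directly from the error vectors above (a minimal check; the arrays are copied from the listings):
errE = [-2.21776 -27.6677 -0.66656 23.02192 -0.47909 5.403321 16.09939 -16.6607 26.81571 -3.04911];
errk = [1.104783 0.82816 0.943708 1.00457 0.223133 -0.20877 0.009603 0.024749 -0.47847 -0.9976];
rmseE = sqrt(mean(errE.^2)) % RMSE of the E predictions
rmsek = sqrt(mean(errk.^2)) % RMSE of the k predictions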
I worried that the correlation between the inputs and outputs might not be good, so I checked the setup with a simple analytic model. I used 3 inputs (x1, x2, x3) and 2 outputs (y, z) with these relationships: y = x1^3 - x2^2 + x3; z = 1/x1 - 1/x2^2 + 1/(x3^2).
There are 10 training samples and 5 testing samples. But again the testing result is not good, especially for z.
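For reference, a minimal sketch of how such a check can be set up (the input range below is illustrative, not the one I actually used). Note that y is far larger in magnitude than z, so if the targets are not normalized a joint MSE objective is dominated by y, which by itself can produce a poor z fit:
rng(0);                                       % reproducible example
x = 1 + 9*rand(3,15);                         % 3 inputs x1,x2,x3; 15 samples in [1,10] (assumed range)
y = x(1,:).^3 - x(2,:).^2 + x(3,:);           % y = x1^3 - x2^2 + x3
z = 1./x(1,:) - 1./x(2,:).^2 + 1./x(3,:).^2;  % z = 1/x1 - 1/x2^2 + 1/(x3^2)
xtrn = x(:,1:10);  ttrn = [y(1:10); z(1:10)];       % 10 training samples
xtst = x(:,11:15); ttst = [y(11:15); z(11:15)];     % 5 testing samples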
y = [2831.213 5304.791 6902.318 12844.01 24449.48];
target y = [2088 4758 6650 11896 24048];
error y = [743.2125 546.7907 252.318 948.0119 401.4825];
z = [-0.18233 -0.18192 -0.18165 -0.16924 -0.16051];
target z = [-0.00704 -0.013 -0.01013 -0.01226 -0.01565];
error z = [-0.17528 -0.16893 -0.17153 -0.15698 -0.14486]
Please advise how to improve the ensemble neural networks.
Below is the code of my neural network for your reference:
clc % clear the command window
clear;
rand('seed', 1); randn('seed', 1);
ensemblesize = 3; maxepoch = 30;
for runno=1:1:10
fprintf('ENN dda 3 different number of hidden nodes epoch = %g hidden node no = var', maxepoch);
fprintf('\n');
% training data set
E=[20 30 40 50 60 70 20 30 40 50 60 80 20 40 50 70 80 20 30 40 60 70 20 30 50 60 80 20 40 50 60 70 80 30 40 50 60 70 80];
k=[1 1 1 1 1 1 1.5 1.5 1.5 1.5 1.5 1.5 2 2 2 2 2 2.5 2.5 2.5 2.5 2.5 3 3 3 3 3 3.5 3.5 3.5 3.5 3.5 3.5 4 4 4 4 4 4];
ux1=[0.82 0.76 0.75 0.69 0.74 0.83 1.31 1.35 1.1 1.14 1.39 1.09 1.84 1.62 1.66 1.6 1.56 2 1.95 1.96 1.98 1.89 2.79 2.68 2.41 2.51 2.55 3.63 2.92 2.91 2.93 3 2.86 3.38 3.41 4.16 3.31 3.32 3.27];
ux2=[-0.83 -0.83 -0.79 -0.77 -0.77 -0.89 -1.3 -1.6 -1.28 -1.3 -1.6 -1.26 -1.93 -1.84 -1.91 -1.86 -1.78 -2.28 -2.29 -2.28 -2.27 -2.29 -2.88 -2.97 -2.78 -2.81 -2.92 -3.51 -3.44 -3.38 -3.43 -3.46 -3.36 -3.92 -3.97 -3.83 -3.97 -3.9 -3.89];
ux3=[-0.53 -0.58 -0.53 -0.53 -0.52 -0.62 -0.9 -1.11 -0.86 -0.88 -1.09 -0.85 -1.39 -1.26 -1.32 -1.27 -1.2 -1.56 -1.6 -1.61 -1.55 -1.59 -2.06 -2.1 -1.89 -1.94 -2.06 -2.5 -2.44 -2.35 -2.38 -2.38 -2.36 -2.75 -2.75 -2.65 -2.74 -2.72 -2.72];
ux4=[-0.36 -0.39 -0.37 -0.36 -0.36 -0.42 -0.61 -0.81 -0.6 -0.61 -0.78 -0.6 -0.96 -0.89 -0.94 -0.9 -0.86 -1.12 -1.14 -1.14 -1.11 -1.12 -1.46 -1.5 -1.37 -1.42 -1.5 -1.76 -1.72 -1.69 -1.7 -1.74 -1.69 -1.97 -1.99 -1.9 -1.94 -1.94 -2];
ux5=[-0.25 -0.27 -0.25 -0.25 -0.25 -0.3 -0.42 -0.57 -0.42 -0.43 -0.56 -0.41 -0.69 -0.61 -0.66 -0.63 -0.6 -0.79 -0.81 -0.79 -0.78 -0.79 -1.02 -1.06 -0.96 -1 -1.05 -1.25 -1.21 -1.19 -1.19 -1.24 -1.19 -1.39 -1.42 -1.35 -1.36 -1.37 -1.41];
ux6=[-0.17 -0.19 -0.17 -0.17 -0.17 -0.2 -0.28 -0.39 -0.29 -0.29 -0.38 -0.28 -0.48 -0.41 -0.46 -0.44 -0.41 -0.55 -0.57 -0.56 -0.54 -0.54 -0.7 -0.73 -0.66 -0.69 -0.73 -0.87 -0.85 -0.82 -0.83 -0.85 -0.82 -0.97 -1 -0.92 -0.95 -0.96 -0.96];
ux7=[-0.1 -0.11 -0.11 -0.11 -0.11 -0.13 -0.18 -0.25 -0.18 -0.17 -0.23 -0.16 -0.3 -0.25 -0.28 -0.26 -0.24 -0.33 -0.35 -0.34 -0.33 -0.33 -0.43 -0.46 -0.41 -0.43 -0.45 -0.54 -0.53 -0.5 -0.51 -0.53 -0.49 -0.61 -0.62 -0.55 -0.6 -0.59 -0.58];
ux8=[-0.04 -0.05 -0.05 -0.05 -0.05 -0.05 -0.07 -0.1 -0.07 -0.07 -0.1 -0.07 -0.12 -0.11 -0.12 -0.12 -0.1 -0.14 -0.14 -0.15 -0.14 -0.14 -0.17 -0.19 -0.17 -0.18 -0.19 -0.23 -0.23 -0.21 -0.22 -0.22 -0.21 -0.26 -0.26 -0.23 -0.26 -0.25 -0.25];
ux9=[-0.01 0 -0.01 0 -0.01 -0.01 -0.01 -0.02 -0.02 -0.02 -0.02 -0.02 -0.03 -0.02 -0.02 -0.02 -0.02 -0.03 -0.03 -0.03 -0.03 -0.03 -0.04 -0.04 -0.04 -0.04 -0.04 -0.05 -0.05 -0.04 -0.04 -0.04 -0.04 -0.05 -0.05 -0.05 -0.05 -0.05 -0.05];
uy10=[-2.83 -0.93 -0.55 -0.58 -0.47 -1.24 -2.56 -2.55 -0.54 -0.49 -2.75 0.93 -3.5 -0.36 -1.92 -1.23 0.54 -1.33 -0.74 -0.62 -0.46 -0.43 -2.64 -2.91 -0.5 -1.45 0.45 -2.47 -1.65 -1.48 -1.48 -1.85 -1.5 -1.57 -1.36 -1.57 -1.41 -1.47 -1.43];
uy11=[-2.68 -0.77 -0.37 -0.38 -0.31 -0.25 -2.4 -0.45 -0.35 -0.3 -0.14 1.07 -1.56 0.41 -0.57 -0.07 0.67 -1.09 -0.48 -0.36 -0.23 -0.2 -2.35 -0.21 -0.23 -0.02 1.95 0.13 0.2 0.21 0.13 0.36 0.25 0.17 0.2 -0.23 0.21 0.25 0.2];
uy12=[-2.65 -0.65 -0.27 -0.28 -0.23 -0.18 -2.34 -0.38 -0.27 -0.21 -0.08 1.09 -1.51 0.36 -0.46 -0.03 0.64 -1 -0.38 -0.21 -0.14 -0.09 -2.13 -0.17 -0.11 0.03 1.85 0.25 0.24 0.21 0.18 0.33 0.24 0.22 0.24 -0.1 0.25 0.28 0.23];
uy13=[-2.63 -0.57 -0.17 -0.21 -0.16 -0.13 -2.3 -0.29 -0.18 -0.15 -0.03 1.09 -1.53 0.31 -0.36 -0.04 0.6 -0.93 -0.26 -0.15 -0.06 -0.03 -1.91 -0.19 -0.04 0.08 1.46 0.3 0.23 0.21 0.19 0.29 0.24 0.22 0.24 -0.02 0.24 0.26 0.23];
uy14=[-2.64 -0.47 -0.05 -0.2 -0.09 -0.07 -2.28 -0.19 -0.11 -0.04 -0.03 1.05 -1.53 0.22 -0.22 0.03 0.46 -0.86 -0.18 -0.09 -0.02 0.01 -1.13 -0.2 0.01 0.07 1.01 0.28 0.17 0.17 0.15 0.22 0.18 0.18 0.18 0.02 0.18 0.2 0.17];
uy15=[-2.67 -0.38 -0.04 -0.07 -0.13 -0.05 -2.08 -0.09 -0.04 -0.08 0.02 0.82 -1.37 0.09 -0.09 0.07 0.21 -0.7 -0.08 -0.05 0.02 -0.01 -0.81 0 -0.03 0.05 0.41 0.26 0.08 0.08 0.07 0.11 0.08 0.08 0.1 0.02 0.1 0.1 0.1];
uy16=[-2.62 -0.34 0.01 -0.03 0.01 -0.05 -2.03 -0.02 -0.01 0.02 -0.05 0.71 -1.4 0.04 -0.05 0 0.09 -0.65 -0.01 -0.03 0.01 0 -0.64 -0.03 -0.01 0 0.03 0.23 0 0 0 0.01 0.01 0 0.01 -0.03 0 0.01 0];
% test data set
teE=[80 70 60 30 50 80 40 70 30 20];
tek=[1 1.5 2 2 2.5 2.5 3 3 3.5 4];
teux1=[0.77 1.16 1.61 1.54 2.25 2.02 2.48 2.53 2.92 3.56];
teux2=[-0.79 -1.31 -1.93 -1.78 -2.56 -2.29 -2.87 -2.89 -3.38 -3.91];
teux3=[-0.52 -0.88 -1.31 -1.22 -1.79 -1.61 -2.01 -2 -2.33 -2.79];
teux4=[-0.37 -0.62 -0.93 -0.89 -1.29 -1.12 -1.47 -1.43 -1.69 -2.02];
teux5=[-0.25 -0.44 -0.64 -0.62 -0.91 -0.77 -1.05 -1.01 -1.19 -1.41];
teux6=[-0.17 -0.3 -0.44 -0.43 -0.64 -0.53 -0.74 -0.68 -0.83 -0.97];
teux7=[-0.11 -0.18 -0.27 -0.27 -0.4 -0.33 -0.45 -0.42 -0.52 -0.59];
teux8=[-0.04 -0.07 -0.12 -0.11 -0.18 -0.14 -0.18 -0.18 -0.22 -0.27];
teux9=[-0.01 -0.02 -0.03 -0.02 -0.03 -0.02 -0.04 -0.04 -0.05 -0.05];
teuy10=[1.12 -0.97 -0.83 -1.49 -2.5 0.86 -1.69 -1.91 -1.93 -2.08];
teuy11=[1.27 -0.19 0.25 -1.2 0.08 1.71 -0.05 0.15 0.15 0.28];
teuy12=[1.33 -0.16 0.29 -1.02 0.1 1.66 0.02 0.17 0.19 0.29];
teuy13=[1.34 -0.13 0.23 -0.81 0.09 1.53 0.03 0.15 0.19 0.25];
teuy14=[1.31 -0.1 0.18 -0.61 0.08 1.25 0.03 0.1 0.16 0.17];
teuy15=[1.22 -0.02 0.04 -0.31 0.04 0.71 0.07 0.06 0.09 0.08];
teuy16=[1.11 0.01 0.01 -0.1 0 0.43 0.01 0 0 -0.03];
% generate the component neural networks
for i = 1:ensemblesize
% generate the component training sets
input = [ux1;ux2;uy10];
comptarget = [E;k];
testinput = [teux1;teux2;teuy10];
testtarget = [teE;tek];
if i==1 % hidden nodes=3
hiddenno1=3;
fprintf('ENN dda component NN 1 epoch = %g hidden node no=%g ', maxepoch, hiddenno1);
fprintf('\n');
net = newff(input,comptarget,hiddenno1,{'tansig' 'purelin'});
net.divideFcn='';
net.trainParam.epochs = maxepoch;
net.trainParam.goal = 0.0;
net.trainParam.mc=0.7;
net.trainParam.lr=0.05;
net.trainParam.show=NaN;
net = train(net,input,comptarget);
f1 = sim(net,input);
output1 = sim(net,testinput); % 'output1' stores the real-valued test-set outputs of component NN 1
elseif i==2 % hidden nodes=2
hiddenno2=2;
fprintf('ENN dda component NN 2 epoch = %g hidden node no=%g ', maxepoch, hiddenno2);
fprintf('\n');
net = newff(input,comptarget,hiddenno2,{'tansig' 'purelin'});
net.divideFcn='';
net.trainParam.epochs = maxepoch;
net.trainParam.goal = 0.0;
net.trainParam.mc=0.7;
net.trainParam.lr=0.05;
net.trainParam.show=NaN;
net = train(net,input,comptarget);
f2 = sim(net,input);
output2 = sim(net,testinput);
elseif i==3 % hidden nodes=1
hiddenno3=1;
fprintf('ENN dda component NN 3 epoch = %g hidden node no=%g ', maxepoch, hiddenno3);
fprintf('\n');
net = newff(input,comptarget,hiddenno3,{'tansig' 'purelin'});
net.divideFcn='';
net.trainParam.epochs = maxepoch;
net.trainParam.goal = 0.0;
net.trainParam.mc=0.7;
net.trainParam.lr=0.05;
net.trainParam.show=NaN;
net = train(net,input,comptarget);
f3 = sim(net,input);
output3 = sim(net,testinput);
else
fprintf('wrong input--ensemble size ');
fprintf('\n');
end
end
enoutput_training = (f1+f2+f3)/ensemblesize;
enoutput_tes = (output1+output2+output3)/ensemblesize;
fprintf('ENN dda 3 different number of hidden neurons epoch = %g hidden node no = var', maxepoch);
fprintf('\n');
mse_training = mse(enoutput_training - comptarget) ;
mse_tes = mse(enoutput_tes - testtarget) ;
fprintf('dda ENN simple average mse_training = %12.5g \n', mse_training );
fprintf('dda ENN simple average mse_testing = %12.5g \n\n', mse_tes );
% % fprintf(' dda E value = \n');
% % fprintf(' target = ');
% % fprintf(1,' %12.4g', teE1(1,:));
% % fprintf(' \n ');
% % fprintf(' simple enn = ');
% % fprintf(1,' %12.4g',enoutput_tes(1,:));
% % fprintf(' \n ');
% % fprintf(' better enn = ');
% % fprintf(1,' %12.4g',enoutput_better(1,:));
% % fprintf(' \n\n ');
end % end of the runno loop
1 Comment
Greg Heath on 23 Jan 2013
I don't understand why you are posting data for ux1 to ux9 and uy10 to uy16
when your input is only 3-dimensional.


Accepted Answer

Greg Heath on 23 Jan 2013 (edited 23 Jan 2013)
I did not look closely at your code. However, I have several comments:
1. If you are going to post data and/or code, make sure that they will cause no errors when cut and pasted into the command line.
2. If you are not going to use validation stopping or regularization (msereg and/or trainbr), choose H just as large as necessary to obtain a practical MSE training goal (e.g., ~0.01 or 0.005 * mean(var(target',1))). The adjustment for the loss of degrees of freedom due to the estimation of Nw weights is considered below.
3. Convergence to a good local minimum depends on the random initial weights. Therefore, design many nets for each trial value of H and choose a good design using either nontraining validation error or adjusted training error (as shown below).
4. Your guesses of H=3 and maxepoch=30 are woefully inadequate. Study my structured search over 130 designs (10 random weight initializations for each of 13 candidate values of H), where I find that 9 <= H <= 12 and maxepoch = 1000 (the default) are more appropriate.
5. I recommend accepting the default training parameters unless you have a specific reason for overriding them. The only ones that I overrode were net.divideFcn, net.trainParam.goal, and H.
6. If you are going to use an ensemble, you might be able to use fewer hidden nodes and larger training goals. However, it probably is not worth the effort.
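For reference, the degree-of-freedom adjustment mentioned in points 2 and 3 amounts to the following, in the notation of the script below (this restates quantities the code computes, nothing new):
Ntrneq = Ntrn*O              % number of training equations
Nw = (I+1)*H + (H+1)*O       % number of estimated weights
MSEa = SSE/(Ntrneq-Nw)       % DOF-adjusted training MSE, with SSE = sse(target-output)
R2a = 1 - MSEa/MSE00a        % adjusted fraction of target variance modeled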
close all, clear all, clc, plt=0;
tic
%TRAINING DATA
ux1 = [ 0.82 0.76 0.75 0.69 0.74 0.83 1.31 1.35 1.1 1.14 ...
1.39 1.09 1.84 1.62 1.66 1.6 1.56 2 1.95 1.96 ...
1.98 1.89 2.79 2.68 2.41 2.51 2.55 3.63 2.92 2.91 ...
2.93 3 2.86 3.38 3.41 4.16 3.31 3.32 3.27 ];
ux2 = [ -0.83 -0.83 -0.79 -0.77 -0.77 -0.89 -1.3 -1.6 -1.28 -1.3 ...
-1.6 -1.26 -1.93 -1.84 -1.91 -1.86 -1.78 -2.28 -2.29 -2.28 ...
-2.27 -2.29 -2.88 -2.97 -2.78 -2.81 -2.92 -3.51 -3.44 -3.38 ...
-3.43 -3.46 -3.36 -3.92 -3.97 -3.83 -3.97 -3.9 -3.89 ];
uy10 = [-2.83 -0.93 -0.55 -0.58 -0.47 -1.24 -2.56 -2.55 -0.54 -0.49 ...
-2.75 0.93 -3.5 -0.36 -1.92 -1.23 0.54 -1.33 -0.74 -0.62 ...
-0.46 -0.43 -2.64 -2.91 -0.5 -1.45 0.45 -2.47 -1.65 -1.48 ...
-1.48 -1.85 -1.5 -1.57 -1.36 -1.57 -1.41 -1.47 -1.43 ];
E = [ 20 30 40 50 60 70 20 30 40 50 60 80 20 40 50 70 80 20 30 40 ...
60 70 20 30 50 60 80 20 40 50 60 70 80 30 40 50 60 70 80];
k = [ 1 1 1 1 1 1 1.5 1.5 1.5 1.5 1.5 1.5 2 2 2 2 2 2.5 2.5 2.5 ...
2.5 2.5 3 3 3 3 3 3.5 3.5 3.5 3.5 3.5 3.5 4 4 4 4 4 4];
xtrn = [ ux1; ux2 ;uy10 ];
ttrn = [ E; k ];
% TEST DATA
teux1 = [ 0.77 1.16 1.61 1.54 2.25 2.02 2.48 2.53 2.92 3.56 ];
teux2 = [ -0.79 -1.31 -1.93 -1.78 -2.56 -2.29 -2.87 -2.89 -3.38 -3.91 ];
teuy10 = [ 1.12 -0.97 -0.83 -1.49 -2.5 0.86 -1.69 -1.91 -1.93 -2.08 ];
teE = [ 80 70 60 30 50 80 40 70 30 20 ];
tek = [ 1 1.5 2 2 2.5 2.5 3 3 3.5 4 ];
xtst = [ teux1; teux2; teuy10 ];
ttst = [ teE; tek ];
[ I Ntrn ] = size(xtrn) % [ 3 39 ]
[ O Ntrn ] = size(ttrn) % [ 2 39 ]
[ I Ntst ] = size(xtst) % [ 3 10 ]
[ O Ntst ] = size(ttst) % [ 2 10 ]
Ntrneq = Ntrn*O % 78 Training Equations
% NAIVE CONSTANT MODEL ( "a" => Degree of freedom "a"djustment)
y00 = repmat(mean(ttrn,2),1,Ntrn);
Nw00 = O % 2 weights
e00 = ttrn-y00;
MSE00 = sse(e00)/Ntrneq % 193.8259
MSE00a = sse(e00)/(Ntrneq-Nw00) % 198.9266 DOFA
MSE00 = mean(var(ttrn',1)) % 193.8259 Target Variance
MSE00a = mean(var(ttrn')) % 198.9266
%LINEAR MODEL (R2 => Fraction of target variance modeled)
W = ttrn / [ ones(1,Ntrn) ; xtrn ] % 52.8493 -23.1829 -24.9930 10.7487
% 0.2567 0.0956 -0.8929 0.0556
Nw0 = numel(W) % 8 weights
y0 = W*[ ones(1,Ntrn) ; xtrn ];
e0 = ttrn-y0;
MSE0 = sse(e0)/Ntrneq % 115.9305
MSE0a = sse(e0)/(Ntrneq-Nw0) % 129.1797
NMSE0 = MSE0/MSE00 % 0.5981
NMSE0a = MSE0a/MSE00a % 0.6494
R20 = 1-NMSE0 % 0.4019 Same as H = 0 below
R20a = 1-NMSE0a % 0.3506
%NEURAL NET MODEL
% Nw = (I+1)*H+(H+1)*O % No.of estimated weights
Hub = (Ntrneq-O)/(I+O+1) % 12.6667 Ntrneq = Nw
Hmax = ceil(Hub)-1 % 12 Ntrneq > Nw
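% Derivation of Hub: setting Nw = Ntrneq in Nw = (I+1)*H+(H+1)*O gives
% H*(I+O+1) = Ntrneq-O, so Hub = (Ntrneq-O)/(I+O+1); taking
% H <= Hmax = ceil(Hub)-1 keeps more training equations than weights.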
Ntrials = 10
rng(0)
j=0
for h = 0:Hmax
j = j+1
for i = 1:Ntrials
if h==0
net = fitnet([]);
Nw = (I+1)*O
else
net = fitnet(h);
Nw = (I+1)*h+(h+1)*O
end
net.divideFcn = ''; % No val or test data
net.trainParam.goal = 0.01*(Ntrneq-Nw)*MSE00a/Ntrneq; %R2a >= 0.99
[ net tr Ytrn Etrn] = train( net, xtrn, ttrn);
Nepochs(i,j) = tr.epoch(end);
MSE = tr.perf(end);
MSEa = Ntrneq*MSE/(Ntrneq-Nw);
R2(i,j) = 1-MSE/MSE00;
R2a(i,j) = 1- MSEa/MSE00a;
end
end
time = toc % 694.84 sec (11.58 min)
H = 0:Hmax
Nepochs = Nepochs
R2 = R2
R2a = R2a
% H =
%
% 0 1 2 3 4 5 6 7 8 9 10 11 12
%
% Nepochs =
%
% 2 1000 29 1000 1000 1000 1000 1000 685 230 123 78 67
% 2 1000 1000 9 1000 1000 1000 1000 1000 177 876 72 133
% 2 8 166 345 1000 1000 1000 1000 1000 97 295 16 263
% 2 1000 1000 1000 1000 1000 1000 1000 1000 1000 322 112 167
% 2 1000 1000 1000 48 1000 1000 1000 1000 145 189 117 335
% 2 1000 46 1000 262 229 1000 1000 1000 119 353 494 303
% 2 1000 1000 1000 1000 1000 1000 1000 1000 214 195 378 102
% 2 1000 1000 1000 1000 1000 1000 1000 1000 509 134 542 104
% 2 1000 208 1000 168 1000 1000 1000 1000 1000 175 83 167
% 2 1000 1000 1000 931 1000 1000 389 1000 1000 78 211 244
%
% R2 =
%
% 0.4019 0.4054 0.5996 0.7503 0.8250 0.7970 0.8678 0.9506 0.9802 0.9971 0.9979 0.9987 0.9995
% 0.4019 0.4054 0.4314 0.3411 0.6335 0.7708 0.9052 0.9904 0.9873 0.9971 0.9979 0.9988 0.9996
% 0.4019 0.2417 0.5181 0.5998 0.7224 0.8452 0.8749 0.8640 0.9646 0.9974 0.9979 0.9989 0.9995
% 0.4019 0.4054 0.5956 0.6478 0.7224 0.7115 0.8095 0.9645 0.9429 0.9531 0.9979 0.9987 0.9995
% 0.4019 0.4054 0.4054 0.6313 0.7001 0.7655 0.8609 0.8560 0.9807 0.9971 0.9979 0.9988 0.9995
% 0.4019 0.4054 0.5995 0.5212 0.6768 0.7132 0.8032 0.9620 0.9692 0.9971 0.9980 0.9987 0.9995
% 0.4019 0.4054 0.5989 0.5687 0.6967 0.7421 0.8060 0.8933 0.9898 0.9971 0.9983 0.9987 0.9995
% 0.4019 0.4043 0.5281 0.5770 0.6837 0.7449 0.8574 0.9774 0.8911 0.9973 0.9979 0.9987 0.9995
% 0.4019 0.4054 0.5181 0.5994 0.6960 0.8263 0.8377 0.9766 0.9675 0.8793 0.9988 0.9987 0.9995
% 0.4019 0.4054 0.5961 0.5701 0.6809 0.7450 0.8698 0.9327 0.9606 0.9941 0.9980 0.9987 0.9995
%
% R2a =
%
% 0.3506 0.3545 0.5245 0.6727 0.7442 0.6647 0.7489 0.8895 0.9462 0.9900 0.9900 0.9901 0.9911
% 0.3506 0.3545 0.3247 0.1366 0.4643 0.6213 0.8198 0.9785 0.9656 0.9900 0.9900 0.9908 0.9915
% 0.3506 0.1767 0.4277 0.4756 0.5942 0.7442 0.7624 0.6961 0.9040 0.9908 0.9900 0.9914 0.9903
% 0.3506 0.3545 0.5197 0.5385 0.5942 0.5234 0.6381 0.9206 0.8451 0.8379 0.9902 0.9901 0.9904
% 0.3506 0.3545 0.2940 0.5169 0.5617 0.6126 0.7358 0.6781 0.9476 0.9901 0.9901 0.9908 0.9900
% 0.3506 0.3545 0.5244 0.3726 0.5276 0.5262 0.6260 0.9150 0.9164 0.9900 0.9903 0.9901 0.9907
% 0.3506 0.3545 0.5237 0.4349 0.5568 0.5740 0.6314 0.7616 0.9722 0.9900 0.9919 0.9900 0.9902
% 0.3506 0.3532 0.4397 0.4458 0.5377 0.5785 0.7291 0.9495 0.7044 0.9907 0.9900 0.9902 0.9908
% 0.3506 0.3545 0.4277 0.4751 0.5557 0.7130 0.6916 0.9477 0.9117 0.5832 0.9944 0.9901 0.9913
% 0.3506 0.3545 0.5203 0.4367 0.5336 0.5788 0.7525 0.8497 0.8931 0.9796 0.9907 0.9900 0.9907
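As a final step one could retrain a single net at an H chosen from the tabulation above and score the held-out test set. This is only a sketch of that step (H = 10 is an example pick from the 9 <= H <= 12 range, not a tuned value):
Hbest = 10;                                     % example choice from the R2a table
Nwbest = (I+1)*Hbest+(Hbest+1)*O;               % 62 estimated weights
net = fitnet(Hbest);
net.divideFcn = '';                             % all 39 cases used for training
net.trainParam.goal = 0.01*(Ntrneq-Nwbest)*MSE00a/Ntrneq; % R2a >= 0.99
[ net tr ] = train(net, xtrn, ttrn);
ytst = net(xtst);                               % predictions for the 10 test cases
NMSEtst = mse(ytst-ttst)/mean(var(ttst',1));    % normalized test MSE
R2tst = 1-NMSEtst                               % test R^2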
Hope this helps.
Thank you for formally accepting my answer.
Greg
