
Thread Subject:
The problem about the neural network training

Subject: The problem about the neural network training

From: Harbin Brightman

Date: 3 Oct, 2010 03:50:06

Message: 1 of 11

I built a neural network and set net.trainParam.epochs=10.

I built another neural network, set net.trainParam.epochs=1, and called train on it 10 times.

The initial weights of the two networks are the same, so I expected the
forecasting results of the two networks to be the same, but the program's
results differ. I want to know why. The program follows:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear
TrainNumber=10;
x=[-2:0.01:2];
y=(exp(-1.9.*(x+0.5))).*sin(10*x);

% set net.trainParam.epochs=10
net=newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');
W1=net.iw{1,1};
W2=net.lw{2,1};
B1=net.b{1,1};
B2=net.b{2,1};
net.trainParam.epochs=TrainNumber;
net.trainParam.goal=0.001;
net=train(net,x,y);
SimY=sim(net,x);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%net.trainParam.epochs=1, but call 10 times
for Count=1:TrainNumber
    
    net1=newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');
    
    net1.iw{1,1}=W1;
    net1.lw{2,1}=W2;
    net1.b{1,1}=B1;
    net1.b{2,1}=B2;
    
    net1.trainParam.epochs=1;
    net1.trainParam.goal=0.001;
    
    net1=train(net1,x,y);
    
    W1=net1.iw{1,1};
    W2=net1.lw{2,1};
    B1=net1.b{1,1};
    B2=net1.b{2,1};
    
end;

SimY1=sim(net1,x);

figure(1)
hold on
plot(y);
plot(SimY,'r');
plot(SimY1,'y');
hold off
% the forecasting results are not the same; I want to know why

Subject: The problem about the neural network training

From: Greg Heath

Date: 3 Oct, 2010 13:27:41

Message: 2 of 11

On Oct 2, 11:50 pm, "Harbin Brightman" <chx...@tom.com> wrote:
> I built a neural network and set net.trainParam.epochs=10.
>
> I built another neural network, set net.trainParam.epochs=1,
> and called train on it 10 times.
>
> The initial weights of the two networks are the same, so I expected
> the forecasting results of the two networks to be the same, but the
> program's results differ. I want to know why. The program follows:
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%
> clear
> TrainNumber=10;
> x=[-2:0.01:2];
> y=(exp(-1.9.*(x+0.5))).*sin(10*x);
>
> % set net.trainParam.epochs=10

In general, initialize rand before calling newff. That allows
you to duplicate runs without having to store initial weights.

state0 = 0;
rand('state',state0)
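
Note: in newer MATLAB releases rand('state',...) is deprecated. The
supported way to make a run reproducible is rng, though it selects a
different generator, so the actual initial weights will differ from the
legacy ones:

rng(0) % modern replacement for rand('state',0)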

> net=newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');

> W1=net.iw{1,1};
> W2=net.lw{2,1};
> B1=net.b{1,1};
> B2=net.b{2,1};
> net.trainParam.epochs=TrainNumber;
> net.trainParam.goal=0.001;

net.trainParam.show = 1; % To compare with results below

> net=train(net,x,y);
> SimY=sim(net,x);
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%
> %net.trainParam.epochs=1, but call 10 times
> for Count=1:TrainNumber
>
> net1=newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');

You will get the same result if this statement is outside of the loop.

> net1.iw{1,1}=W1;
> net1.lw{2,1}=W2;
> net1.b{1,1}=B1;
> net1.b{2,1}=B2;
>
> net1.trainParam.epochs=1;
> net1.trainParam.goal=0.001;

      net1.trainParam.show = 1; % for consistency

> net1=train(net1,x,y);
>
> W1=net1.iw{1,1};
> W2=net1.lw{2,1};
> B1=net1.b{1,1};
> B2=net1.b{2,1};
>
> end;
>
> SimY1=sim(net1,x);
>
> figure(1)
> hold on
> plot(y);
> plot(SimY,'r');
> plot(SimY1,'y');
> hold off
> % the forecasting results are not the same; I want to know why

Me too. I found no errors in your code.

Maybe Mark Beale can help.

General coding suggestions:

1. A practical training goal is MSEgoal = MSE00/100, where
MSE00 is the MSE of the naive model with constant output
y00 = repmat(mean(y,2),1,N) for N training vectors. In
this case it yields MSEgoal = 0.09 (sketched below).
2. Use as few hidden nodes as possible to avoid
overfitting and overtraining (see the
comp.ai.neural-nets FAQ).
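
A minimal sketch of suggestion 1 for this data (the same numbers are
worked out in a later message):

N = length(x); % 401 training points
y00 = repmat(mean(y,2),1,N); % naive constant-output model
MSE00 = mse(y-y00) % about 9.006 for this y
MSEgoal = MSE00/100 % about 0.09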

Hope this helps.

Greg

Subject: The problem about the neural network training

From: Harbin Brightman

Date: 4 Oct, 2010 01:20:05

Message: 3 of 11

Greg Heath <heath@alumni.brown.edu> wrote in message <fc77d394-1828-4812-a307-8635e0033ad6@p26g2000yqb.googlegroups.com>...
> On Oct 2, 11:50 pm, "Harbin Brightman" <chx...@tom.com> wrote:
> > [...]
>
> In general, initialize rand before calling newff. That allows
> you to duplicate runs without having to store initial weights.
>
> [...]
>
> Hope this helps.
>
> Greg

Subject: The problem about the neural network training

From: Harbin Brightman

Date: 4 Oct, 2010 01:22:03

Message: 4 of 11

Greg Heath <heath@alumni.brown.edu> wrote in message <fc77d394-1828-4812-a307-8635e0033ad6@p26g2000yqb.googlegroups.com>...
> On Oct 2, 11:50 pm, "Harbin Brightman" <chx...@tom.com> wrote:
> > [...]
>
> In general, initialize rand before calling newff. That allows
> you to duplicate runs without having to store initial weights.
>
> [...]
>
> Hope this helps.
>
> Greg

Subject: The problem about the neural network training

From: Harbin Brightman

Date: 4 Oct, 2010 07:02:05

Message: 5 of 11

"Harbin Brightman" <chxyzj@tom.com> wrote in message <i8ba7r$rt5$1@fred.mathworks.com>...
> Greg Heath <heath@alumni.brown.edu> wrote in message <fc77d394-1828-4812-a307-8635e0033ad6@p26g2000yqb.googlegroups.com>...
> > [...]
> >
> > Maybe Mark Beale can help.
> >
> > [...]
> >
> > Greg

But I don't know his email address. Can you tell me? I would like to send him a message.

Subject: The problem about the neural network training

From: Greg Heath

Date: 4 Oct, 2010 09:06:15

Message: 6 of 11

On Oct 3, 9:27 am, Greg Heath <he...@alumni.brown.edu> wrote:
>
> General coding suggestions:
>
> 1. A practical training goal is MSEgoal = MSE00/100, where
> MSE00 is the MSE for the naive model with constant output
> y00 = repmat(mean(y,2),1,N) for N training vectors. In
> this case, it will yield MSEgoal = 0.09

The rationale for this choice is that the R-square statistic
R^2 = 1-MSE/MSE00 will exceed 0.99. The interpretation
is that the net is successfully modelling more than 99%
of the target data variance.

> 2. Use as few hidden nodes as possible to avoid the
> phenomena of overfitting and overtraining (See the
> comp.ai.neural-nets FAQ).

For the above function choosing H = 20 overfits the
function in the sense that the function can be
successfully modelled with significantly smaller nets.
The danger of overfitting is that if the data were
obtained from a noisy experiment instead of an exact
equation, the excess number of degrees of freedom
would allow the net to incorporate those particular noise
fluctuations into the model. Therefore, the usefulness
of the model for predicting the output for noisy
nontraining input data can be severely degraded.

Below is an example of a simple quick search for a
parsimonious model using the above data. In essence,
the search is for the minimum value of H that will yield
R^2 >= 0.99. The result is Hmin = 6.

A more illustrative example would:
1. Include noise
2. Use fewer training data points
3. Compare training set and test set performance
(a sketch along these lines follows the tabulation below)

Notice the curious result that only 30% of the random initial
weight designs were successful when H = 7 and 9, whereas the
success rates for H = [6 8 10] are [70% 60% 90%].

clear all, close all, clc

x = [-2:0.01:2];
y = exp(-1.9*(x+0.5)).*sin(10*x);

N = length(x) % 401
y00 = repmat(mean(y,2),1,N);
MSE00 = mse(y-y00) % 9.006
MSEgoal = MSE00/100 % 0.09006

% MSE <= MSEgoal ==> R^2 >= 0.99 (R-square statistic)

rand('state',0)
Hmax = 10
Ntrials = 10
for H = 1:Hmax
    for n = 1:Ntrials
        net=newff(minmax(x),[H,1],{'tansig','purelin'});
        net.trainParam.goal = MSEgoal;
        net.trainParam.show = inf;
        [net tr ] = train(net,x,y);
        MSE(n,H) = tr.perf(end);
    end
end
R2s = sort(1-MSE/MSE00); % sorted Rsq statistic
R2 = floor(1e3*R2s)/1e3; % formatting
S = 1:5; % small H index
L = 6:10; % large H index

R2tab1 = [S; R2(:,S)] % small H tabulation
R2tab2 = [L; R2(:,L)] % large H tabulation

% R^2 Statistic Tabulation (Goal: R^2 >= 0.99)
%
% 1 2 3 4 5
% 0.014 0.001 0.014 0.015 0.982
% 0.014 0.001 0.016 0.019 0.982
% 0.016 0.008 0.016 0.893 0.982
% 0.016 0.013 0.016 0.966 0.982
% 0.017 0.017 0.019 0.966 0.982
% 0.017 0.019 0.019 0.966 0.982
% 0.017 0.024 0.039 0.968 0.982
% 0.024 0.047 0.088 0.989 0.982
% 0.075 0.893 0.807 0.989 0.982
% 0.075 0.893 0.938 0.989 0.989
%
% 6 7 8 9 10
% 0.922 0.018 0.004 0.004 0.062
% 0.982 0.018 0.818 0.020 0.990
% 0.982 0.020 0.960 0.020 0.990
% 0.990 0.214 0.989 0.057 0.990
% 0.990 0.817 0.990 0.214 0.990
% 0.990 0.817 0.990 0.400 0.990
% 0.990 0.968 0.990 0.982 0.990
% 0.991 0.990 0.990 0.990 0.991
% 0.991 0.990 0.991 0.990 0.991
% 0.993 0.991 0.991 0.990 0.993

return
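
A hedged sketch of the "more illustrative example" outlined above; the
noise level, the coarser training grid, and the choice H = 6 are
illustrative assumptions, not results from this thread:

clear all, clc
rand('state',0), randn('state',0) % reproducible weights and noise
xtrn = -2:0.1:2; % fewer training points (41 instead of 401)
ytrn = exp(-1.9*(xtrn+0.5)).*sin(10*xtrn) + 0.1*randn(size(xtrn)); % noisy targets
xtst = -2:0.01:2; % dense test grid
ytst = exp(-1.9*(xtst+0.5)).*sin(10*xtst); % noise-free reference
net = newff(minmax(xtrn),[6,1],{'tansig','purelin'},'trainlm'); % H = Hmin = 6
net.trainParam.show = inf;
net = train(net,xtrn,ytrn);
MSEtrn = mse(ytrn-sim(net,xtrn)) % training-set error
MSEtst = mse(ytst-sim(net,xtst)) % test-set error; a large gap signals overfitting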

Hope this helps.

Greg

Subject: The problem about the neural network training

From: Harbin Brightman

Date: 5 Oct, 2010 08:08:22

Message: 7 of 11

Thanks for your advice, but I still don't know why the two neural networks have different outputs: they have the same initial weights and the same total number of training epochs. According to neural network theory they should produce the same outputs, but in MATLAB they differ, which confuses me.

Subject: The problem about the neural network training

From: Greg Heath

Date: 8 Oct, 2010 14:22:38

Message: 8 of 11

On Oct 5, 4:08 am, "Harbin Brightman" <chx...@tom.com> wrote:
> Thanks for your advice, but I still don't know why the two neural networks have different outputs: they have the same initial weights and the same total number of training epochs. According to neural network theory they should produce the same outputs, but in MATLAB they differ, which confuses me.

I can find no coding errors.

I can duplicate your results.

There must be a bug in trainlm.

Which of the other training functions also exhibit this behavior?
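A hedged harness for that check might look like this (the particular
training functions are illustrative; everything else mirrors the
original script):

x = -2:0.01:2;
y = exp(-1.9*(x+0.5)).*sin(10*x);
trainFcns = {'traingd','trainrp','traincgf','trainbfg'};
for k = 1:length(trainFcns)
    % one call with 10 epochs
    rand('state',0)
    netA = newff(minmax(x),[20,1],{'tansig','purelin'},trainFcns{k});
    netA.trainParam.epochs = 10;
    netA.trainParam.show = inf;
    netA = train(netA,x,y);
    % ten calls with 1 epoch each, starting from the same initial weights
    rand('state',0)
    netB = newff(minmax(x),[20,1],{'tansig','purelin'},trainFcns{k});
    netB.trainParam.epochs = 1;
    netB.trainParam.show = inf;
    for c = 1:10
        netB = train(netB,x,y);
    end
    % zero only if the two schedules produce identical input weights
    maxdiff = max(abs(netA.iw{1,1}(:)-netB.iw{1,1}(:)))
end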

Hope this helps.

Greg

Subject: The problem about the neural network training

From: jimit shah

Date: 18 Oct, 2010 17:43:03

Message: 9 of 11

Hello,
Can you please send me the topology diagram for this code?

Thank you,
Jimit shah

Subject: The problem about the neural network training

From: Greg Heath

Date: 12 Nov, 2010 17:03:02

Message: 10 of 11

On Oct 2, 10:50 pm, "Harbin Brightman" <chx...@tom.com> wrote:
> I built a neural network and set net.trainParam.epochs=10.
>
> I built another neural network, set net.trainParam.epochs=1,
> and called train on it 10 times.
>
> The initial weights of the two networks are the same, so I
> expected the forecasting results to be the same, but the
> program's results differ. I want to know why.
>
> [...]

Even though the second net is assigned the initial weights of
the first net,

isequal(net,net1) = 0

However, if both nets are created with the same state of rand
before the redundant weight reassignment,

isequal(net,net1) = 1

clear all, clc

x=[-2:0.01:2];
rand('state',0)
net = newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');
W1 = net.iw{1,1};
W2 = net.lw{2,1};
B1 = net.b{1,1};
B2 = net.b{2,1};

for i = 1:2
    if i == 1
        rand('state',0)
    end
    net1=newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');
    net1.iw{1,1} = W1;
    net1.lw{2,1} = W2;
    net1.b{1,1} = B1;
    net1.b{2,1} = B2;
    result(i) = isequal(net,net1);
end

result = result % displays [1 0]: equal only when rand was re-seeded

Please post a reply after looking at the innards of net and net1 to
find out which properties are different.
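
One hedged way to make that comparison (this assumes the network object
can be coerced to a struct with struct(), which may issue a warning):

s = struct(net); s1 = struct(net1); % expose the objects' top-level fields
f = fieldnames(s);
for k = 1:length(f)
    if ~isequal(s.(f{k}),s1.(f{k}))
        disp(f{k}) % name of a property that differs between net and net1
    end
end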

Hope this helps.

Greg

Subject: The problem about the neural network training

From: Greg Heath

Date: 1 Aug, 2013 04:45:12

Message: 11 of 11

"Harbin Brightman" <chxyzj@tom.com> wrote in message <i88uhe$4nk$1@fred.mathworks.com>...
> I built a neural network and set net.trainParam.epochs=10.
>
> I built another neural network, set net.trainParam.epochs=1,
> and called train on it 10 times.
>
> [...] But the program's results are different. I want to know why.

When train is called, some of the training parameters (e.g. mu) are reset to their starting values.

In the original case, train is called only once, so mu is initialized once
and then adapted across all 10 epochs. In the second case, train is called
10 times, so mu is reset to its starting value before every single epoch,
and the weight updates diverge from the first run.
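
A hedged illustration (0.001, 0.1, and 10 are the documented trainlm defaults):

net = newff(minmax(x),[20,1],{'tansig','purelin'},'trainlm');
net.trainParam.mu % 0.001: the starting value used at every call to train
net.trainParam.mu_dec % 0.1: factor applied when an epoch reduces the error
net.trainParam.mu_inc % 10: factor applied when an epoch increases the error
% Within one 10-epoch call, mu is adapted from epoch to epoch; ten separate
% 1-epoch calls restart mu at 0.001 each time, so the weight updates differ.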

Hope this helps.

Greg
