
Thread Subject:
Bug in Disable the Randomization of Weights and Bias in Neural Network

Subject: Bug in Disable the Randomization of Weights and Bias in Neural Network

From: Subodh Paudel

Date: 28 Feb, 2013 08:42:08

Message: 1 of 8

Hi MATLAB experts,
I am using MATLAB 2009a and 2009b. After finding the weights and biases, I reuse them in the neural network, but it gives a different R² value. I have tried different methods to solve this, because my goal is to fix the same weights for the learning phase.

1) After obtaining the weights and biases from newff, I disable the random initialization and set my own weights, like this:

    net = newff(P,T,H);
    stream = RandStream.getDefaultStream;
    reset(stream);
    net.IW{1,1} = input_weight;
    net.LW{2,1} = hidden_weight;
    net.b{1} = bias_hiddenneurons;
    net.b{2} = bias_output;

I also found that the data is randomized by dividerand, so I used the command below to disable any data division, but it does not work either.

    net.divideFcn = '';

2) Finally, I used rand('state',0), because my MATLAB version does not support rng(0). That does not work either:

    rand('state',0)
    net = newff(P,T,H);
    net.IW{1,1} = input_weight;
    net.LW{2,1} = hidden_weight;
    net.b{1} = bias_hiddenneurons;
    net.b{2} = bias_out;

When I calculate my R² value over several trials and take the best trial, I get R² = 0.95 and obtain the weights and biases from it. When I put them into the network after rand('state',0) and newff, my R² value is 0.87; it decreases, which I don't expect. Please help me: is there a bug in MATLAB, or is there some trick I don't know?

Subodh

Subject: Bug in Disable the Randomization of Weights and Bias in Neural Network

From: Greg Heath

Date: 1 Mar, 2013 07:40:12

Message: 2 of 8

"Subodh Paudel" <subodhpaudel@gmail.com> wrote in message <kgn590$2uj$1@newscl01ah.mathworks.com>...
> Hi MATLAB experts,
> I am using MATLAB 2009a and 2009b. After finding the weights and biases, I reuse them in the neural network, but it gives a different R² value. I have tried different methods to solve this, because my goal is to fix the same weights for the learning phase.

I don't understand, especially the last sentence.

Are you saying that when you insert trained weights into a new net, you get a different answer? If so, did you account for the fact that there is an automatic default normalization of variables that the weights act on?
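Concretely, a minimal sketch (assuming the 2009-era newff defaults, where mapminmax processing is applied to inputs and targets):

    net = newff(P,T,H);
    net.inputs{1}.processFcns    % typically includes 'mapminmax'
    net.outputs{2}.processFcns   % typically includes 'mapminmax'
    % Stored weights act on these normalized variables, so inserting
    % them into a net with different processing settings will give
    % different outputs for the same raw P and T.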

 
> 1) After obtaining the weights and biases from newff, I disable the random initialization and set my own weights, like this:
>
> net=newff(P,T,H)
> stream = RandStream.getDefaultStream;
> reset(stream);
> net.IW{1,1}=input_weight;
> net.LW{2,1}=hidden_weight;
> net.b{1}=bias_hiddenneurons;
> net.b{2}=bias_output;
>
> I also found that the data is randomized by dividerand, so I used the command below to disable any data division, but it does not work either.
>
> net.divideFcn='';

This is the same as net.divideFcn = 'dividetrain', so there are no validation or test sets.
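For example (assuming the division functions referred to here are available in your release):

    net = newff(P,T,H);
    net.divideFcn = 'dividetrain';   % all samples used for training
    % equivalent in effect to net.divideFcn = '';
    % the default 'dividerand' draws new random train/val/test indices
    % on every call to train, so results change between runs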
 
> 2) Finally, I used rand('state',0), because my MATLAB version does not support rng(0). That does not work either:
>
> rand ('state',0)
> net=newff(P,T,H)
> net.IW{1,1}=input_weight;
> net.LW{2,1}=hidden_weight;
> net.b{1}=bias_hiddenneurons;
> net.b{2}=bias_out;

I assume that next you train the net?
 
> When I calculate my R² value over several trials and take the best trial, I get R² = 0.95 and obtain the weights and biases from it. When I put them into the network after rand('state',0) and newff, my R² value is 0.87; it decreases, which I don't expect. Please help me: is there a bug in MATLAB, or is there some trick I don't know?
>
I don't fully understand what you are doing.

Why don't you post your full code with comments and the results of a run on one of the MATLAB demo datasets?

    help nndatasets

Greg

Subject: Bug in Disable the Randomization of Weights and Bias in Neural Network

From: Muna Adhikari

Date: 3 Mar, 2013 16:11:08

Message: 3 of 8

sdfa

Subject: Bug in Disable the Randomization of Weights and Bias in Neural Network

From: Subodh Paudel

Date: 3 Mar, 2013 16:27:08

Message: 4 of 8

Dear Greg,
First, I use several randomized trials to find the best case among different initial weights and biases. I then checked the individual cases, but I get the same result every time, namely the best case.

[I N]=size(P); %[5 1826]
[O N]=size(T); %[1 1826]

MSE00=mean(var(T));

Hmin=12;
Ntrials=5;

%Initialization for the best
best=888888888;
Nepochs_best=66666666;
R2train_best=23556888;
HiddenNeurons_best=846;

%Initialization of best weights and biases
WI_new=0;
WH_new=0;
bH_new=0;
bO_new=0;

rand('state',32)
for i=Hmin
    H=i;
    for j=1:Ntrials
        
        net=newff(minmax(P),minmax(T),[H O]);
        
        WI=net.IW{1,1};
        WH=net.LW{2,1};
        bH=net.b{1};
        bO=net.b{2};
        
               
        WIsize=size(WI);
        WHsize=size(WH);
        bHsize=size(bH);
        bOsize=size(bO);
        
        WI11(j,1:WIsize(1),1:WIsize(2))=WI;
        WH11(j,1:WHsize(1),1:WHsize(2))=WH;
        bH11(j,1:bHsize(1),1:bHsize(2))=bH;
        bO11(j,1:bOsize(1),1:bOsize(2))=bO;
        
          
        net.trainParam.goal=0.01*MSE00;
        net.divideFcn='';
        net.performFcn = 'mse';
        net.plotFcns= {'plotperform','plottrainstate','plotregression'};
        
            
        [net tr] = train(net,P,T);
        MSETrain=tr.perf(end);
        NMSETrain=MSETrain/MSE00;
        NEpochs=tr.epoch(end);
        R2TRain=1-NMSETrain;
        Nepochs(j,i)=tr.epoch(end);
        R2Train(j,i)=1-NMSETrain;
        
        if (MSETrain < best)
            best=MSETrain;
            net_new=net;
            HiddenNeurons_best=H;
            R2train_best=R2TRain;
            Nepochs_best=NEpochs;
            WI_new=WI;
            WH_new=WH;
            bH_new=bH;
            bO_new=bO;
            y=sim(net,P);
        end
            
    end
    disp(' H Nepochs R2Train');
    disp([repmat(H,5,1) Nepochs(:,i) R2Train(:,i)])
    
end

From this, I obtain:

     H   Nepochs   R2Train
    12         6   0.071077
    12       860   0.89968
    12      1000   0.89576
    12      1000   0.88771
    12      1000   0.50481

The best case is R² = 0.89968. I store the initial weights and biases of the five trials in the matrices WI11, WH11, bH11 and bO11 above.
Now I check trial number 5, which should give R² = 0.50481; however, for every case the result always comes out as 0.89968, even though I want the result of the last trial.

And my final code becomes:

H=12;
WI=[ 0.018495 1.1048 0.90905 -1.0169 -1.4881
      0.83446 -0.46106 -1.6662 -0.29081 -1.2354
      0.30333 0.96744 -0.051326 -1.8656 -0.88593
     -0.68223 0.95386 0.58698 -1.1842 1.4743
      0.73676 -0.72799 -0.19964 -1.4516 -1.4409
     -0.78769 1.5387 -0.19239 -0.44063 -1.441
      -1.7317 0.29769 -1.1159 0.42624 0.88403
      -1.2142 -1.1572 -0.63363 1.4219 0.2431
      -1.2016 1.4392 0.65195 1.1577 -0.12331
   -0.0063187 -0.95933 -1.6721 -0.93475 0.84017
      0.35126 -0.5058 -1.1204 -1.6268 1.0074
       -1.559 0.70765 -0.22605 -1.1136 1.0359];
WH=[ 0.51422 -0.57229 -0.57477 -0.2661 0.28874 -0.45965 -0.06474 -0.21366 0.53005 -0.3005 0.11094 -0.48873];
bH=[ -2.3013
      -1.8828
      -1.4644
        1.046
     -0.62761
       0.2092
      -0.2092
     -0.62761
       -1.046
      -1.4644
       1.8828
      -2.3013];
bO=0;

    rand('state',32)
    net=newff(minmax(P),minmax(T),[H O]);

    net.IW{1,1}=WI;
    net.LW{2,1}=WH;
    net.b{1}=bH;
    net.b{2}=bO;

    net=init(net);

    net.trainParam.goal=0.01*MSE00;
    net.divideFcn='';
    %net.initFcn='';
    net.performFcn = 'mse';
    net.plotFcns= {'plotperform','plottrainstate','plotregression'};
    [net tr] = train(net,P,T);
    MSETrain=tr.perf(end);
    NMSETrain=MSETrain/MSE00;
    NEpochs=tr.epoch(end);
    R2train=1-NMSETrain

But it always comes out as the best case, R² = 0.89968, although I want the results of the other trials, such as the fifth one. Why does this happen?

Thank you. I am waiting for your reply.

Subject: Bug in Disable the Randomization of Weights and Bias in Neural Network

From: Muna Adhikari

Date: 3 Mar, 2013 19:30:11

Message: 5 of 8

I have the same problem.
Use initFcn='';
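In other words, a minimal sketch (assuming newff-era syntax): assign the stored weights after creating the net and do not call init afterwards, since init(net) re-randomizes all weights and biases:

    net = newff(P,T,H);      % newff initializes random weights here
    net.IW{1,1} = WI;        % overwrite with the stored weights
    net.LW{2,1} = WH;
    net.b{1} = bH;
    net.b{2} = bO;
    % do NOT call net = init(net) here: it would overwrite the
    % assignments above with fresh random values
    [net,tr] = train(net,P,T);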

Subject: Bug in Disable the Randomization of Weights and Bias in Neural Network

From: Greg Heath

Date: 4 Mar, 2013 01:42:08

Message: 6 of 8

"Subodh Paudel" <subodhpaudel@gmail.com> wrote in message <kgvtks$t1g$1@newscl01ah.mathworks.com>...
------SNIP

Your email address doesn't work for me.

Either choose a MATLAB data set or send me your data in *.m or *.txt format.

Greg

Subject: Bug in Disable the Randomization of Weights and Bias in Neural Network

From: Greg Heath

Date: 4 Mar, 2013 05:25:08

Message: 7 of 8

% Subject: Bug in Disable the Randomization of Weights and Bias
% in Neural Network
% From: Subodh Paudel
% Date: 3 Mar, 2013 16:27:08
% Message: 4 of 6
% Dear Greg,
% The problem is first, first i use different randomization of trials
% to find the best and different cases of initial weight and bias,
% and i checked for different cases, however, i achieved the
% same result every time, which is the best cases.

IF YOU ARE GETTING EXACTLY THE SAME RESULT, YOU
ARE OBVIOUSLY USING THE SAME INITIAL WEIGHTS.

DELETE SEMICOLONS ON SHORT ANSWER COMMANDS
WHILE DEBUGGING SO THAT YOU CAN MONITOR RESULTS

[I N]=size(P); % [5 1826 ]
[O N]=size(T); % [1 1826 ]

%IF USING VALIDATION STOPPING:

Ntst = round(0.15*N) % 274
Nval = Ntst % 274
Ntrn = N-2*Ntst % 1278
Ntrneq = Ntrn*O % 1278

indtrn =
indval =
indtst =

Ttrn = T(indtrn);
Tval =
Ttst =
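% One way to fill in the blanks above (illustration only; a random
% permutation via randperm(N) would be closer to what dividerand does):
indtrn = 1:Ntrn;              % 1:1278
indval = Ntrn+1:Ntrn+Nval;    % 1279:1552
indtst = Ntrn+Nval+1:N;       % 1553:1826
Tval = T(indval);
Ttst = T(indtst);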
 
% MSE00=mean(var(T));
% Hmin=12;
% Ntrials=5;

MSEtrn00 = mean(var(Ttrn',1))
MSEtrn00a = mean(var(Ttrn'))
MSEval00 =
MSEtst00 =

% H <= Hub <==> Nw < Ntrneq
Hub = -1+ceil((Ntrn*O-O)/(I+O+1)) % 182 UPPER BOUND
Hmax = round(Hub/10) % 18 GOOD GENERALIZATION
Hmin=0
dH = 2

% for h = Hmin:dH:Hmax % numH = 10
% Ntrials = 10

%Initialization for the best
% best=888888888;
% Nepochs_best=66666666;
% R2train_best=23556888;
% HiddenNeurons_best=846;

No, No, No! Where did you get these from ???

% Initialization of best weights and biases
% WI_new=0;
% WH_new=0;
% bH_new=0;
% bO_new=0;

VERY UNNECESSARY. FIND THE BEST NET AND STORE
THE NET, NOT THE WEIGHTS (IF YOU REALLY WANTED
TO SAVE WEIGHTS, SAVE THE FINAL, NOT INITIAL)

YOU COULD ALSO SAVE THE INITIAL RNG STATE AND THE
(i,j) INDICES FOR THE BEST NET; BUT SAVING THE BEST
NET IS THE EASIEST (even SO, YOU MAY STILL WANT TO
SAVE THE INITIAL RNG STATE AND (i,j)opt.)
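A minimal sketch of that bookkeeping (hypothetical variable names, following the loop structure below):

    rand('state',32)
    state0 = rand('state');      % save the initial RNG state
    best = Inf;                  % start from Inf, not a magic number
    for j = 1:Ntrials
        net = newff(P,T,H);
        [net,tr] = train(net,P,T);
        MSEtrn = tr.perf(end);
        if MSEtrn < best
            best = MSEtrn;
            best_net = net;      % store the trained net itself
            best_tr = tr;        % and its training record
            jbest = j;           % index of the winning trial
        end
    end
    save('bestnet.mat','best_net','best_tr','jbest','state0')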

rand('state',32)
for i=Hmin % ONE H FOR DEBUGGING; GOOD
    H=i;
    for j=1:Ntrials
        
% net=newff(minmax(P),minmax(T),[H O]);

======>> FATAL ERROR: SEE DOCUMENTATION

    help newff
    doc newff
    type newff
                            
USE ONE OF THE FOLLOWING

        % net = newff(minmax(P), [H O]);   % VERY OBSOLETE
        % net = newff( P, T, H );          % OBSOLETE
        % net = newfit( P, T, H );         % OBSOLETE
        % net = fitnet( H );               % CURRENT (data goes to train)
        % If H = 0 use H = [] above
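% Current-syntax usage sketch (fitnet takes only the hidden layer
% sizes; the data is passed to train, which configures the net):
%       net = fitnet(H);            % hidden layer size only
%       [net,tr] = train(net,P,T);  % configuration and training here
%       y = net(P);                 % evaluate the trained net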
        
% WI=net.IW{1,1};
% WH=net.LW{2,1};
% bH=net.b{1};
% bO=net.b{2};
%
% WIsize=size(WI);
% WHsize=size(WH);
% bHsize=size(bH);
% bOsize=size(bO);
%
% WI11(j,1:WIsize(1),1:WIsize(2))=WI;
% WH11(j,1:WHsize(1),1:WHsize(2))=WH;
% bH11(j,1:bHsize(1),1:bHsize(2))=bH;
% bO11(j,1:bOsize(1),1:bOsize(2))=bO;
%
% net.trainParam.goal=0.01*MSE00;
% net.divideFcn='';
% net.performFcn = 'mse';
% net.plotFcns= {'plotperform','plottrainstate','plotregression'};
   
if H == 0
    Nw = (I+1)*O
else
    Nw = (I+1)*H+(H+1)*O
end
Ndof = Ntrneq-Nw
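% e.g., with I = 5, O = 1, H = 12:
%   Nw   = (5+1)*12 + (12+1)*1 = 85
%   Ndof = 1278 - 85 = 1193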
MSEgoal = 0.01*Ndof*MSEtrn00a/Ntrneq % R2trna >= 0.99
net.trainParam.goal = MSEgoal;
net.divideFcn = 'dividetrain';

 [net tr] = train(net,P,T);
MSETrain=tr.perf(end);
NMSETrain=MSETrain/MSE00;
% NEpochs=tr.epoch(end);
% R2TRain=1-NMSETrain;
Nepochs(j,i)=tr.epoch(end);
R2Train(j,i)=1-NMSETrain;

IF YOU USE TRAINING SET PERFORMANCE FOR
EVALUATION, YOU SHOULD ADJUST FOR LOSS OF THE
DEGREES OF FREEDOM WHEN YOU USE THE SAME DATA
TO TRAIN AND EVALUATE

MSEtrna = Ntrneq*MSEtrn/Ndof     % MSEtrn = tr.perf(end), from above
NMSEtrna = MSEtrna/MSEtrn00a
R2trna(j,i) = 1-NMSEtrna % MAXIMUM DETERMINES BEST NET

    ======== I STOPPED HERE, GREG ================
        
         
> if (MSETrain < best)
> best=MSETrain;
> net_new=net;
> HiddenNeurons_best=H;
> R2train_best=R2TRain;
> Nepochs_best=NEpochs;
> WI_new=WI;
> WH_new=WH;
> bH_new=bH;
> bO_new=bO;
> y=sim(net,P);
> end
>
> end
> disp(' H Nepochs R2Train');
> disp([repmat(H,5,1) Nepochs(:,i) R2Train(:,i)])
>
> end
>
> From this, I obtain:
>
>      H   Nepochs   R2Train
>     12         6   0.071077
>     12       860   0.89968
>     12      1000   0.89576
>     12      1000   0.88771
>     12      1000   0.50481
>
> The best case is R² = 0.89968. I store the initial weights and biases of the five trials in the matrices WI11, WH11, bH11 and bO11 above.
> Now I check trial number 5, which should give R² = 0.50481; however, for every case the result always comes out as 0.89968, even though I want the result of the last trial.
>
> And my final code becomes:
>
> H=12;
> WI=[ 0.018495 1.1048 0.90905 -1.0169 -1.4881
> 0.83446 -0.46106 -1.6662 -0.29081 -1.2354
> 0.30333 0.96744 -0.051326 -1.8656 -0.88593
> -0.68223 0.95386 0.58698 -1.1842 1.4743
> 0.73676 -0.72799 -0.19964 -1.4516 -1.4409
> -0.78769 1.5387 -0.19239 -0.44063 -1.441
> -1.7317 0.29769 -1.1159 0.42624 0.88403
> -1.2142 -1.1572 -0.63363 1.4219 0.2431
> -1.2016 1.4392 0.65195 1.1577 -0.12331
> -0.0063187 -0.95933 -1.6721 -0.93475 0.84017
> 0.35126 -0.5058 -1.1204 -1.6268 1.0074
> -1.559 0.70765 -0.22605 -1.1136 1.0359];
> WH=[ 0.51422 -0.57229 -0.57477 -0.2661 0.28874 -0.45965 -0.06474 -0.21366 0.53005 -0.3005 0.11094 -0.48873];
> bH=[ -2.3013
> -1.8828
> -1.4644
> 1.046
> -0.62761
> 0.2092
> -0.2092
> -0.62761
> -1.046
> -1.4644
> 1.8828
> -2.3013];
> bO=0;
>
> rand('state',32)
> net=newff(minmax(P),minmax(T),[H O]);
>
>
> net.IW{1,1}=WI;
> net.LW{2,1}=WH;
> net.b{1}=bH;
> net.b{2}=bO;
>
> net=init(net);
>
>
> net.trainParam.goal=0.01*MSE00;
> net.divideFcn='';
> %net.initFcn='';
> net.performFcn = 'mse';
>
> net.plotFcns= {'plotperform','plottrainstate','plotregression'};
> [net tr] = train(net,P,T);
> MSETrain=tr.perf(end);
> NMSETrain=MSETrain/MSE00;
> NEpochs=tr.epoch(end);
> R2train=1-NMSETrain
>
> But it always comes out as the best case, R² = 0.89968, although I want the results of the other trials, such as the fifth one. Why does this happen?
>
> Thank you. I am waiting for your reply.

Subject: Bug in Disable the Randomization of Weights and Bias in Neural Network

From: Greg Heath

Date: 4 Mar, 2013 06:20:08

Message: 8 of 8

"Greg Heath" <heath@alumni.brown.edu> wrote in message <kh1b7k$qe9$1@newscl01ah.mathworks.com>...

> VERY UNNECESSARY. FIND THE BEST NET AND STORE
> THE NET, NOT THE WEIGHTS (IF YOU REALLY WANTED
> TO SAVE WEIGHTS, SAVE THE FINAL, NOT INITIAL)
>
> YOU COULD ALSO SAVE THE INITIAL RNG STATE AND THE
> (i,j) INDICES FOR THE BEST NET; BUT SAVING THE BEST
> NET IS THE EASIEST (even SO, YOU MAY STILL WANT TO
> SAVE THE INITIAL RNG STATE AND (i,j)opt.)

ALSO SAVE THE TRAINING RECORD tr.

[ net tr ] = train(...);
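For example, using the fields already seen in this thread:

    [net,tr] = train(net,P,T);
    tr.perf(end)                  % final training MSE
    tr.epoch(end)                 % epochs actually run
    save('run1.mat','net','tr')   % keep both for later comparison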

Greg
