MATLAB Newsgroup


How to improve ANN results by reducing error through hidden layer size, through MSE, or by using while loop?


This is my source code, and I want to reduce the error. When I run this code there is a large difference between the trained output and the target. I have tried different approaches but none worked, so please help me reduce it.

a=[31 9333 2000;31 9500 1500;31 9700 2300;31 9700 2320;31 9120 2230;31 9830 2420;31 9300 2900;31 9400 2500]'

g=[35000;23000;3443;2343;1244;9483;4638;4739]'

h=[31 9333 2000]'

inputs = a;

targets = g;

% Create a Fitting Network

hiddenLayerSize = 1;

net = fitnet(hiddenLayerSize);

% Choose Input and Output Pre/Post-Processing Functions

% For a list of all processing functions type: help nnprocess

net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};

net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};

% Setup Division of Data for Training, Validation, Testing

% For a list of all data division functions type: help nndivide

net.divideFcn = 'dividerand'; % Divide data randomly

net.divideMode = 'sample'; % Divide up every sample

net.divideParam.trainRatio = 70/100;

net.divideParam.valRatio = 15/100;

net.divideParam.testRatio = 15/100;

% For help on training function 'trainlm' type: help trainlm

% For a list of all training functions type: help nntrain

net.trainFcn = 'trainlm'; % Levenberg-Marquardt

% Choose a Performance Function

% For a list of all performance functions type: help nnperformance

net.performFcn = 'mse'; % Mean squared error

% Choose Plot Functions

% For a list of all plot functions type: help nnplot

net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...

'plotregression','plotfit'}; % plotconfusion/plotroc are for classification nets, not fitnet


% Train the Network

[net,tr] = train(net,inputs,targets);

plottrainstate(tr)

% Test the Network

outputs = net(inputs)

errors = gsubtract(targets,outputs)

fprintf('errors = %4.3f\t',errors);

performance = perform(net,targets,outputs);

% Recalculate Training, Validation and Test Performance

trainTargets = targets .* tr.trainMask{1};

valTargets = targets .* tr.valMask{1};

testTargets = targets .* tr.testMask{1};

trainPerformance = perform(net,trainTargets,outputs);

valPerformance = perform(net,valTargets,outputs);

testPerformance = perform(net,testTargets,outputs);

% View the Network

view(net);

sc = sim(net,h);
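One quick diagnostic you can append to the script (a sketch reusing the variables above, not part of the original code): normalize the MSE by the average target variance, so the number is independent of the target scale.

```matlab
% Raw MSE depends on the target scale, so normalize it by the average
% target variance. R^2 near 1 means a good fit; near 0 or negative means
% the net is no better than predicting the target mean.
outputs = net(inputs);
MSE     = mean((targets - outputs).^2);
MSE00   = var(targets, 1);        % reference MSE of a constant-mean model
R2      = 1 - MSE/MSE00
```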

"chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l3ebuu$evs$1@newscl01ah.mathworks.com>...

> How to improve ANN results by reducing error through hidden layer size, through MSE, or by using while loop?

Your data is not a good learning example. (Small size, constant x(1,:), weak relationship between input and target )

1. Practice on MATLAB data (e.g., simplefit_dataset)

help nndata

2. Size the data

[ I N ] = size(x)

[ O N ] = size(t)

% Default Data Division

Ntst = round(0.15*N)

Nval = Ntst

Ntrn = N-2*Ntst;

Ntrneq = Ntrn*O % No. of training equations

3. Standardize (zscore or mapstd) and plot data.

zx= zscore(x',1);

zt = zscore(t',1);

4. Remove, modify or delete outliers. If necessary, repeat 3.

5. Start with the examples in the help documentation and accept all defaults.

6. Ignore the GUI-generated code. It is too confusing because it lists all options.

7. Use a for loop to design Ntrials (>=10) different nets to mitigate using default random data divisions and random initial weights. To obtain accurate generalization estimate statistics, Ntrials*Ntst should be sufficiently large.

a. Ntrials >= max( 10, 30/Ntst )

b. Initialize the random number generator before the loop

c. Use configure at the top of the loop to randomize initial weights

d. Obtain the training record tr via

[ net tr ] = train( net, zx, zt );

8. Rank the nets w.r.t. the validation set R-squared (see http://en.wikipedia.org/wiki/R-squared) and ignore very poor designs. If not enough designs survive, design more.

MSEval = tr.best_vperf;

R2val = 1-MSEval; % tval assumed standardized

9. To obtain an UNBIASED estimate of performance on unseen data, obtain the mean and stdv of the MSEtst = tr.best_tperf (or R2tst) values for the surviving nets.

10. If results are unsatisfactory consider increasing the number of hidden nodes.

11. Otherwise, try to reduce the number of hidden nodes to increase robustness w.r.t. noise, measurement error, and outliers (although an outlier check should always be performed before using any net).

12. Search my posts in the NEWSGROUP and ANSWERS posts for multi-loop examples.

Good search words are:

neural greg fitnet Ntrials (for regression and curve-fitting)

neural greg patternnet Ntrials (for classification and pattern-recognition)

13. If you are designing time series nets (e.g., timedelaynet, narnet or narxnet)

a. Consider the delays of the significant correlations in the target/target autocorrelation function and/or the target/input crosscorrelation function.

b. Keep timesteps uniform by using divideblock and/or divideind.
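The multi-trial design in steps 7-9 can be sketched as follows (a minimal sketch, assuming zx/zt from step 3 and N from step 2; the survival threshold is illustrative, not a fixed rule):

```matlab
% Steps 7-9: design Ntrials nets with random data divisions and random
% initial weights, then summarize test-set R^2 over the surviving designs.
Ntst    = round(0.15*N);
Ntrials = max(10, ceil(30/Ntst));     % step 7a
rng(0)                                % step 7b: reproducible designs
for i = 1:Ntrials
    net = fitnet(10);                 % default hidden layer size
    net = configure(net, zx, zt);     % step 7c: fresh random initial weights
    [net, tr] = train(net, zx, zt);   % step 7d
    % Step 8: targets are standardized, so the reference MSE is 1
    R2val(i,1) = 1 - tr.best_vperf;
    R2tst(i,1) = 1 - tr.best_tperf;
end
% Step 9: generalization estimate from the surviving designs
keep       = R2val > 0;               % illustrative survival threshold
R2tstStats = [ mean(R2tst(keep)) std(R2tst(keep)) ]
```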

Hope this helps.

Greg

"Greg Heath" <heath@alumni.brown.edu> wrote in message <l3khqr$1fe$1@newscl01ah.mathworks.com>...

> "chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l3ebuu$evs$1@newscl01ah.mathworks.com>...

>

> > How to improve ANN results by reducing error through hidden layer size, through MSE, or by using while loop?

>

> Your data is not a good learning example. (Small size, constant x(1,:), weak relationship between input and target )

>

> 1. Practice on MATLAB data (e.g., simplefit_dataset)

close all, clear all, clc

format short

x = [31 9333 2000;31 9500 1500;31 9700 2300;31 9700 2320;...

31 9120 2230;31 9830 2420;31 9300 2900;31 9400 2500]'

t = [35000;23000;3443;2343;1244;9483;4638;4739]'

xnew = [31 9333 2000]'

% [ x, t ] = simplefit_dataset; % Better learning example

[ I N ] = size( x ) % [ 3 8 ]

[ O N ] = size( t ) % [ 1 8 ]

%Standardization?

varx = var( x') % 1e5 * [ 0 0.585 1.63 ] %Huge

vart = var( t ) % 1.47e8 % Ditto

% Delete x(1,:) and standardize

x = x(2:3,:); %Omit for simplefit_dataset

zx = zscore(x',1)';

zt = zscore(t',1)';

MSE00 = var(t',1) % = 1 Reference MSE

Ntst = round(0.15*N) % = 1 default

Ntrials = max(10,30/Ntst) % 30

% Use default No. of hidden nodes (10)

net = fitnet;

rng(0)

for i=1:Ntrials

net = configure(net,x,t);

[net tr ] = train(net,x,t);

R2trn(i,1) = 1 - tr.best_perf/MSE00;

R2val(i,1) = 1 - tr.best_vperf/MSE00;

R2tst(i,1) = 1 - tr.best_tperf/MSE00;

end

R2s = [ R2trn R2val R2tst ]

minR2s = min(R2s) % -13.2021 -17.1237 -22.9422

medR2s = median(R2s) % 0.7096 0.4177 0.1100

meanR2s = mean(R2s) % -0.8757 -1.2760 -1.8358

stdR2s = std(R2s) % 3.4060 3.9508 4.8567

maxR2s = max(R2s) % 1.0000 1.0000 0.9965

sortR2s = sort(R2s)

% sortR2s = -13.2021 -17.1237 -22.9422

% -10.4006 -9.8592 -8.5426

% -4.5224 -6.3693 -6.8485

% -4.1019 -5.0681 -6.5369

% -3.7170 -3.6865 -5.7350

% -2.0340 -2.4595 -5.0384

% -1.7259 -2.4565 -4.0096

% -1.3995 -1.6079 -3.6787

% -1.2526 -1.6025 -0.5167

% -0.2069 -1.0937 -0.3695

% -0.1213 -0.6480 -0.2804

% 0.1603 -0.2390 -0.2782

% 0.4618 0.1275 -0.2124

% 0.6146 0.1944 -0.1174

% 0.6782 0.2760 0.0623

% 0.7410 0.5594 0.1577

% 0.9138 0.7007 0.2807

% 0.9301 0.7012 0.3786

% 0.9488 0.7098 0.4315

% 0.9736 0.7654 0.4789

% 0.9917 0.9362 0.4834

% 0.9999 0.9819 0.6334

% 1.0000 0.9890 0.7886

% 1.0000 0.9957 0.8014

% 1.0000 0.9982 0.8439

% 1.0000 0.9996 0.9090

% 1.0000 0.9996 0.9091

% 1.0000 0.9997 0.9253

% 1.0000 0.9998 0.9510

% 1.0000 1.0000 0.9965

% Note that only 2 of 30 designs have R2tst >= 0.95 !!!

% In contrast, for the simplefit_data set (x(1,:) NOT deleted)

%

% Ntrials = 10

% R2s = 1.0000 1.0000 1.0000

% 1.0000 1.0000 1.0000

% 1.0000 1.0000 1.0000

% 1.0000 1.0000 1.0000

% 1.0000 1.0000 1.0000

% 1.0000 1.0000 1.0000

% 1.0000 1.0000 1.0000

% 1.0000 1.0000 1.0000

% 1.0000 0.9997 1.0000

% 1.0000 1.0000 0.9999

Now try minimizing the number of hidden nodes for the simplefit example.

Hope this helps.

Greg

Thanks a lot for your very sincere cooperation.

Thanks, sir Greg.

"Greg Heath" <heath@alumni.brown.edu> wrote in message <l3kj60$mlt$1@newscl01ah.mathworks.com>...

-----SNIP

Sir, thanks a lot. I will now work on that. Previously I just generated the script and then made my alterations, such as importing my dataset and changing the inputs and targets; everything else stayed the same. What is the problem with that code? Now I will follow your advice. Thanks a lot for the answer.

"Greg Heath" <heath@alumni.brown.edu> wrote in message <l3kj60$mlt$1@newscl01ah.mathworks.com>...

-----SNIP

> %Standardization? !!! why standardize?

> % Delete x(1,:) and standardize !!! we have deleted it because its variance = 0

> x = x(2:3,:); %Omit for simplefit_dataset !!! ?

> zx = zscore(x',1)'; !!! what is this? I can't understand it

> zt = zscore(t',1)'; !!! same question

> Ntrials = max(10,30/Ntst) % 30 !!! what is this criterion for the number of trials?

> R2trn(i,1) = 1 - tr.best_perf/MSE00; !!! what is this? what is R2trn(i,1)?

> % why find the min, median, mean, std, and max?

-----SNIP

sir greg, what should I conclude? Which should I use: delete row 1 of the dataset or not?

Sir, why didn't you consider the weights? And what happens by looping over trials? Wouldn't it be better to loop on the MSE value,

so that the system trains until the MSE is at its minimum (a value we specify)?

Sir, I am using a dataset from an Excel sheet, a 79 x 30 matrix, which I have divided into inputs and targets.

Sir, in the code above I have marked the lines that I didn't understand, so please kindly explain them.

"Greg Heath" <heath@alumni.brown.edu> wrote in message <l3kj60$mlt$1@newscl01ah.mathworks.com>...

-----SNIP

HI GREG, thanks a lot. I have a few questions regarding the above code. My questions may seem stupid because I am a beginner. I have a database of around 80 x 30 parameters.

1. Sir, why have you chosen the Ntrials parameter for the training for-loop? Could we instead use something like

errors = targets - outputs

while errors ~= 0

train(net,x)

or

while mse ~= 0

train(net,x)

Is that right or wrong? I don't know; if wrong, kindly tell me the reason.

2. Sir, why haven't you considered the weights, biases, epochs, learning rate, etc.? trainlm is independent of lr, but then how does it make the system learn from the examples?

3. Sir, why do we standardize?

4. Kindly explain these lines:

zx = zscore(x',1)';

zt = zscore(t',1)';

MSE00 = var(t',1) % = 1 Reference MSE

Ntst = round(0.15*N) % = 1 default

Ntrials = max(10,30/Ntst) % 30

R2trn(i,1) = 1 - tr.best_perf/MSE00;

R2val(i,1) = 1 - tr.best_vperf/MSE00;

R2tst(i,1) = 1 - tr.best_tperf/MSE00;

Why have we found medians, means, etc.?

"chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l3omcv$bcu$1@newscl01ah.mathworks.com>...

-----SNIP

> Sir, thanks a lot. I will now work on that. Previously I just generated the script and then made my alterations, such as importing my dataset and changing the inputs and targets; everything else stayed the same. What is the problem with that code?

The code is too long for beginners to deal with. It puts all choices on an equal level instead of emphasizing what is important and which default inputs can usually, if not always, be accepted. It prompts beginners to consider making too many choices that they shouldn't have to worry about.

On the other hand, the short examples in the help and doc documentation are too extreme in the opposite direction.

It would be preferable to

1. Improve the help and doc examples so that they yield a better understanding of what is really important, given defaults. For example:

a. Nothing is ever said about initializing the RNG so that designs can be duplicated

b. What does a result of perf = 13.72 tell the user? .... ABSOLUTELY NOTHING. Why? Because perf is target-scale dependent and needs to be compared with the average target variance.

c. Because of b, the target variances should, by default, be scaled to equal levels. If the user wishes to weight some targets more than others, it can be done via the explicit error-weighting input, EW, in the train and perform functions.

2. Illustrate how to obtain classification error rates directly without having to squint at a confusion matrix.

3. Give the GUI user the choice of a short or long version of command line code.

Hope this helps,

Greg

"chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l3omd4$bhb$1@newscl01ah.mathworks.com>...

-----SNIP

Hello sir greg,

A few more questions regarding the above:

1. "Note that only 2 of 30 designs have R2tst >= 0.95 !!!" — you told me this after seeing the results. Does this mean that our data is not appropriate, because only 2 out of 30 designs were fine? So should I train it again with another technique?

2. What is the problem with the default data division that is generated through the advanced script?

3. Sir, you used a FOR loop over Ntrials. Why not use a WHILE loop instead, so that it starts from ntrial = 1, then 2, then 3, etc., and stops when a certain best performance or least MSE is achieved?

"chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l47mac$40$1@newscl01ah.mathworks.com>...

-----SNIP

> > > Sir, my question is: why are we finding the mean, median, etc.? What do they tell us, and what can we deduce from them about the training?

"chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l47mac$40$1@newscl01ah.mathworks.com>...

-----SNIP

Ntrials = max(10,30/Ntst) % 30

Sir, tell me what this relation is, because on the basis of it we train our network a specific number of times (the FOR loop).

"chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l4a24h$33$1@newscl01ah.mathworks.com>...

% Why not GUI code?

Bewildering number of choices for the newbie. Better to concentrate on the important ones and accept the remaining defaults.

% sir greg....what should i conclude ...which is to use ....delete row 1 of

% dataset or not

Greg, not sir greg or sir.

Although a MATLAB default function will delete constant-row/zero-variance variables, why add to the confusion by using them in the 1st place???

% sir why didn't you consider the weights, and what happens by looping over trials? Wouldn't it be better to loop on the MSE value, so that the system trains until the MSE is at its minimum (a value we specify)?

Given the number of hidden nodes and a set of initial weights, the training algorithm does the best it can. Therefore, those are the only parameters that need to be changed.

However, to prevent the algorithm from wasting time on insignificant improvements, I do use higher values for MSEgoal and MinGrad.

% HI GREG, thanks a lot. I have a few questions regarding the above code. My questions may seem stupid because I am a beginner. I have a database of around 80 x 30 parameters.

Do you mean an 80 x 30 data matrix of variables???

% 1. Sir, why have you chosen the Ntrials parameter for the training for-loop? Could we instead loop while errors ~= 0, or while mse ~= 0, calling train(net,x) each time? Is that right or wrong? I don't know; if wrong, kindly tell me the reason.

train already has a full complement of stopping criteria. However, zero error and zero slope are not practical. That is why I specify my own nonzero goals for MSEgoal and MinGrad.

The important goal is to try to maximize performance on NONTRAINING data. Trying to obtain zero error on training data is seldom achievable without overtraining an overfit net (Nw > Ntrneq). More importantly, past a certain point, reducing training error does not reduce nontraining error. That is why training is terminated when the error on the non-training validation set reaches a minimum.
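Concretely, those nonzero goals can be set through the net's training parameters (a sketch; the specific values here are illustrative, not a prescription):

```matlab
% Nonzero stopping goals so training does not chase insignificant
% improvements. Validation stopping (max_fail) remains in effect.
net = fitnet(10);
net.trainParam.goal     = 0.01;  % MSEgoal: with standardized targets the
                                 % reference MSE is 1, so this targets R^2 = 0.99
net.trainParam.min_grad = 1e-6;  % MinGrad: stop on a flat error surface
net.trainParam.epochs   = 1000;  % cap on the number of epochs
```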

% 2. Why, sir, haven't you considered the weights, biases, epochs, learning rate, etc.? trainlm is independent of lr, but then how does it make the system learn from the examples?

The maximum allowable number of epochs is specified.

What is there to consider about weights and biases?

What made you ask this question?

From experience, the only parameters I have to change are

1. Time series ID, FD and net.divideFcn

2. MSEgoal and MinGrad

3. Hmin,dH,Hmax

4. Ntrials

% 3. Sir, why do we standardize?

To make my life easier. For the thousands of important nets I've

designed, means are zero and standard deviations are one. Result?

a. Comparison plotting is easier

b. Outliers are easy to spot.

c. Training is not compromised by sigmoid saturation

d. Weight/bias sizes are not affected by widely diverse I/O scaling

e. MSE values instantly mean something because the reference

MSE is 1.
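A sketch of how points a-e play out in code, using `mapstd` from the toolbox on the data from the original post (the 3-sigma outlier threshold is a common convention, an assumption here rather than a rule from this thread):

```matlab
x = [31 9333 2000; 31 9500 1500; 31 9700 2300; 31 9120 2230]';  % columns = samples
x = removeconstantrows(x);        % drop the constant first row up front
[zx, ps] = mapstd(x);             % each remaining row: mean 0, std 1
outliers = abs(zx) > 3;           % easy to spot once scales are comparable
% Reverse the mapping when needed:  x = mapstd('reverse', zx, ps);
```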

% 4. Kindly explain these lines:

The only performance measure that makes sense to me is Normalized

MSE and R^2 = 1-NMSE. For example, a result of MSE = 19.41, by itself,

means absolutely nothing. However, if MSE00 = 1941, then NMSE = 0.01 ,

R^2 = 0.99, and if Ntrneq >> Nw, then I get a warm feeling all over.
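In code, with `t` the targets and `y` the net outputs (the noisy `y` below is a stand-in just to make the snippet self-contained):

```matlab
t = [35000 23000 3443 2343 1244 9483 4638 4739];   % targets from the post
y = t + 0.1*std(t)*randn(size(t));                 % placeholder outputs
MSE   = mean((t - y).^2);
MSE00 = mean(var(t', 1));    % reference MSE: naive model that predicts mean(t)
NMSE  = MSE / MSE00;         % normalized MSE
R2    = 1 - NMSE;            % e.g. MSE = 19.41, MSE00 = 1941  ->  R2 = 0.99
```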

> Ntrials = max(10,30/Ntst)

Errors are assumed to be zero-mean Gaussian distributed. It is common

knowledge that Gaussian statistics estimates tend to be reliable when

the sample size is at least 30.

So, replace with

Ntrials >= 30/min(Ntrn,Nval,Ntst)
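With the 8-sample dataset and the 0.70/0.15/0.15 split from the original post, that rule of thumb works out as:

```matlab
N    = 8;                         % total samples in the original post
Ntrn = round(0.70 * N);           % 6 training samples
Nval = round(0.15 * N);           % 1 validation sample
Ntst = N - Ntrn - Nval;           % 1 test sample
Ntrials = ceil(30 / min([Ntrn, Nval, Ntst]));   % 30 trials needed here
```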

% Why have we computed medians, means, etc.?

Understanding of results and confirmation of robustness.

Statistical results are more believable to a sponsor or customer if they are

accompanied by error bars and/or confidence levels.

% Two more questions regarding the above:

1. > > Note that only 2 of 30 designs have R2tst >= 0.95 !!!

% You told me this after seeing the results. Sir, does this mean our data is

% not appropriate, since only 2 of the 30 designs were fine? If so, I will

% train it again with another technique.

I would be very surprised if you could obtain results that are significantly

better. If this were important, you would be forced to obtain more data

to get more convincing error bars and/or confidence levels.

% 2. What is the problem with the default data division generated through

% the advanced script?

You don't have nearly enough data to obtain a convincing default

(0.70/0.15/0.15)*8 = 6/1/1 data-division design model for use with unseen

nontraining data.

% 3. Sir, you have used a for-loop over Ntrials = 1:10. Why not use a while

% loop instead, running until a certain best performance or lowest MSE is

% achieved? It would start at ntrial = 1, then 2, then 3, then 4, etc., and

% stop when a certain MSE or error is reached.

There is no certainty that a training goal can be reached by a design that

will also work well on nontraining data.

Hope this helps.

Greg

"Greg Heath" <heath@alumni.brown.edu> wrote in message <l4f7fl$dv5$1@newscl01ah.mathworks.com>...

> "chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l4a24h$33$1@newscl01ah.mathworks.com>...

GREG

Check your mail; I have sent you my Excel database, and I have also sent you my code.

Please see what the problem is and how it can be solved, because my MSE value is coming out very large.

Kindly give me a solution to my problem as early as possible.

"chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l55259$jmo$1@newscl01ah.mathworks.com>...

> "Greg Heath" <heath@alumni.brown.edu> wrote in message <l4f7fl$dv5$1@newscl01ah.mathworks.com>...

> > "chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l4a24h$33$1@newscl01ah.mathworks.com>...

> GREG

>

> chk ur mail....i have send u my excel database.....

>

> and i have snt u my code also..

> so see whats the problem in it and how it can solved

>

> because my mse value coming is very large

>

> give me solution to my problem kindly as early as possible

Unable to open. Send *.txt or *.m

"Greg Heath" <heath@alumni.brown.edu> wrote in message <l56r3v$1c9$1@newscl01ah.mathworks.com>...

> "chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l55259$jmo$1@newscl01ah.mathworks.com>...

> > "Greg Heath" <heath@alumni.brown.edu> wrote in message <l4f7fl$dv5$1@newscl01ah.mathworks.com>...

> > > "chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l4a24h$33$1@newscl01ah.mathworks.com>...

> > GREG

> >

> > chk ur mail....i have send u my excel database.....

> >

> > and i have snt u my code also..

> > so see whats the problem in it and how it can solved

> >

> > because my mse value coming is very large

> >

> > give me solution to my problem kindly as early as possible

>

> Unable to open. Send *.txt or *.m

Thanks.

1. Using minmax I found that x1 is constant. Why in the world would you send me data with a constant input???

2. Removing the constant input, the correlation coefficient matrix for [x ; t ]' indicates

that only x2 is significantly linearly correlated with t.

% minmaxxt = [ 8600  11666
%                76 105000
%               841 334960 ]
%
% CC = [  1.0000  -0.0612  -0.1399
%        -0.0612   1.0000   0.7929
%        -0.1399   0.7929   1.0000 ]
%
% P  = [  1.0000   0.5924   0.2189
%         0.5924   1.0000   0.0000
%         0.2189   0.0000   1.0000 ]
%
% SIGMASK = [ 0 0 0
%             0 0 1      ==> only x2 and t
%             0 1 0 ]

3. Plotting x1, x2, t, t vs x1, and t vs x2 indicates that there is very little hope of getting

a good nonlinear model from this data unless outliers are removed.

4. Standardize the data using zscore or mapstd. Remove outliers and start over.
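One possible shape for that workflow, using `zscore` on the data from the post (the 3-sigma cutoff is a common convention, not a prescription from this thread):

```matlab
x = [9333 9500 9700 9700 9120 9830 9300 9400;     % x2 (x1 was constant)
     2000 1500 2300 2320 2230 2420 2900 2500];    % x3
t = [35000 23000 3443 2343 1244 9483 4638 4739];  % targets
zx = zscore(x')';   zt = zscore(t')';             % standardize each variable
keep = all(abs(zx) <= 3, 1) & (abs(zt) <= 3);     % flag 3-sigma outliers
x = x(:, keep);   t = t(keep);                    % drop them, then redesign
```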

Hope this helps.

Greg

"Greg Heath" <heath@alumni.brown.edu> wrote in message <l5cm8f$ak9$1@newscl01ah.mathworks.com>...

> "Greg Heath" <heath@alumni.brown.edu> wrote in message <l56r3v$1c9$1@newscl01ah.mathworks.com>...

> > "chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l55259$jmo$1@newscl01ah.mathworks.com>...

> > > "Greg Heath" <heath@alumni.brown.edu> wrote in message <l4f7fl$dv5$1@newscl01ah.mathworks.com>...

> > > > "chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l4a24h$33$1@newscl01ah.mathworks.com>...

> > > GREG

> > >

> > > chk ur mail....i have send u my excel database.....

> > >

> > > and i have snt u my code also..

> > > so see whats the problem in it and how it can solved

> > >

> > > because my mse value coming is very large

> > >

> > > give me solution to my problem kindly as early as possible

> >

> > Unable to open. Send *.txt or *.m

>

> Thanks.

>

> 1. Using minmax I found that x1 is constant. Why in the world would you send me data with a constant input???

>

> 2. Removing the constant input, the correlation coefficient matrix for [x ; t ]' indicates

> that only x2 is significantly linearly correlated with t.

>

> % minmaxxt = 8600 11666

> % 76 105000

> % 841 334960

> %

> % CC = 1.0000 -0.0612 -0.1399

> % -0.0612 1.0000 0.7929

> % -0.1399 0.7929 1.0000

> %

> % P = 1.0000 0.5924 0.2189

> % 0.5924 1.0000 0.0000

> % 0.2189 0.0000 1.0000

> %

> % SIGMASK = 0 0 0

> % 0 0 1 ==> only x2 and t

> % 0 1 0

>

> 3. Plotting x1 , x2, t, t vs x1 and t vs x2 indicate that there is very little hope of getting

> a good nonlinear model from this data unless outliers are removed.

>

> 4. Standardize the data using zscore or mapstd. Remove outliers and start over.

I used the term 'outlier' because those points are nothing like the others. They may be valid measurements. However, they cannot contribute to a sensible model unless the gap is filled with more data.

Hope this helps.

Greg

Greg,

I have mailed you the zscore figures, but I don't know which points are the outliers or how to remove them. The curve fluctuates; is that linked to the transfer function, or do we choose the transfer function after analyzing this graph?

Also, what does standardizing to zero mean and unit variance tell us, and why is it necessary? Is the data standardized just by using mapstd or zscore and deleting the outliers? Even without them, I can still plot x2 vs. the targets and recognize the outliers from the plot.

I have also mailed you my best validation performance. It seems like a good result to me, but the MSE is around 10^9. Is that right or wrong?
