Asked by Young Tae
on 26 Feb 2013

Dear MATLAB experts. Actually, I'm not familiar with neural network analysis. I want to forecast outdoor air temperature with an input set (ground temperature, cloud, relative humidity). Training/validation/testing is okay with the input data set (1x2877) and target data (1x2877). However, I am stuck evaluating the network with a new data set (1x960) in the same input format. Would you shed some light on this for me? I've lost my way trying to resolve the issue. I appreciate your valuable time spent on this issue.

===here is my code===

% Solve an Autoregression Problem with External Input with a NARX Neural Network
% Script generated by NTSTOOL
% Created Fri Feb 22 15:22:18 EST 2013
%
% This script assumes these variables are defined:
%
%   JULYTH - input time series.
%   JULYE  - feedback time series.

% This is 1x2877 data with [a;b;c] for each timestep

inputSeries = tonndata(JYTH,false,false);

% This is 1x2877 data holding the target output
targetSeries = tonndata(JYE,false,false);

% Create a Nonlinear Autoregressive Network with External Input

inputDelays = 1:2;

feedbackDelays = 1:2;

hiddenLayerSize = 10;

net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);

% Choose Input and Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
% Customize input parameters at: net.inputs{i}.processParam
% Customize output parameters at: net.outputs{i}.processParam

net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};

net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};

%net.inputs{3}.processFcns = {'removeconstantrows','mapminmax'};

% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged,
% while easily customizing it for networks with differing numbers of delays,
% with open loop or closed loop feedback modes.

[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);

% Setup Division of Data for Training, Validation, Testing
% The function DIVIDERAND randomly assigns target values to training,
% validation and test sets during training.
% For a list of all data division functions type: help nndivide

net.divideFcn = 'dividerand'; % Divide data randomly

% The property DIVIDEMODE set to TIMESTEP means that targets are divided
% into training, validation and test sets according to timesteps.
% For a list of data division modes type: help nntype_data_division_mode

net.divideMode = 'value'; % Divide up every value

net.divideParam.trainRatio = 70/100;

net.divideParam.valRatio = 15/100;

net.divideParam.testRatio = 15/100;

% Choose a Training Function
% For a list of all training functions type: help nntrain
% Customize training parameters at: net.trainParam

net.trainFcn = 'trainlm'; % Levenberg-Marquardt

% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
% Customize performance parameters at: net.performParam

net.performFcn = 'mse'; % Mean squared error

% Choose Plot Functions

% For a list of all plot functions type: help nnplot
% Customize plot parameters at: net.plotParam

net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
    'ploterrcorr','plotinerrcorr'};

% Train the Network

[net,tr] = train(net,inputs,targets,inputStates,layerStates);

% Test the Network

outputs = net(inputs,inputStates,layerStates);

errors = gsubtract(targets,outputs);

performance = perform(net,targets,outputs)

% Recalculate Training, Validation and Test Performance

trainTargets = gmultiply(targets,tr.trainMask);

valTargets = gmultiply(targets,tr.valMask);

testTargets = gmultiply(targets,tr.testMask);

trainPerformance = perform(net,trainTargets,outputs)

valPerformance = perform(net,valTargets,outputs)

testPerformance = perform(net,testTargets,outputs)

% View the Network

view(net)

% From this part I want to run a new test or forecast with new inputs.
% This is a new input set, 1x960. The matrix has the same structure
% as for testing: [a;b;c]

inputSeries2 = tonndata(AUGTH,false,false);

[inputs2,inputStates2,layerStates2,targets2] = preparets(net,inputSeries2);

% When I want to generate a new output from the network, all of
% "outputs2" (1x960) is NaN. I suspect that "inputStates2" has NaN
% values in its second row. Would you let me know how I can resolve
% the issue and get the new outputs2?

outputs2 = net(inputs2,inputStates2,layerStates2);

Answer by Greg Heath
on 26 Feb 2013

Accepted Answer

I want to forecast outdoor air temperature with input set(ground temp, cloud, relative humidity). The training/validation/testing is okay with the input data set(1X2877) and target data(1X2877).

What, exactly, does "okay" mean?... What are the val and test R^2 values?

How can you have a one dimensional input data set when you have 3 input variables?

This script assumes these variables are defined: JULYTH - input time series. JULYE - feedback time series. This is 1X2877 matrix data has [a;b;c] for each

for each series? That makes no sense.

If you have 3 inputs the input matrix dimension should be [ 3 2877]!

Unless it is cell data and not matrix data...

To make sure, type the 4 commands

iscell( [ JYTH ; JYE ] )

[ I N ] = size(JYTH)

[O N] = size(JYE)

whos

inputSeries = tonndata(JYTH,false,false); This is 1X2877 matrix data has target output targetSeries = tonndata(JYE,false,false);

You seem to be confused. tonndata produces cell data!

whos inputSeries targetSeries

Create a Nonlinear Autoregressive Network with External Input inputDelays = 1:2; feedbackDelays = 1:2; hiddenLayerSize = 10;

How do you know these are good delays? To be sure, calculate the significant lags of the output autocorrelation function AND the input/output cross-correlation function.
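Greg's lag-significance check might be sketched roughly as follows (not from the post; it assumes the Statistics Toolbox ZSCORE and the NN Toolbox NNCORR helpers, row-matrix data as Greg recommends, and an approximate 95% significance threshold of 0.21, which is an assumed value):

```matlab
zt = zscore(JYE,1);                          % standardized 1 x N target series
N  = length(zt);
autocorrT = nncorr(zt,zt,N-1,'biased');      % target autocorrelation, lags -(N-1):N-1
zx = zscore(JYTH(1,:),1);                    % one standardized input row
crosscorrXT = nncorr(zx,zt,N-1,'biased');    % input/target cross-correlation
sigthresh = 0.21;                            % assumed approx. 95% significance level
sigFB = find(abs(autocorrT(N+1:end))   >= sigthresh) % candidate feedbackDelays
sigID = find(abs(crosscorrXT(N+1:end)) >= sigthresh) % candidate inputDelays
```

The significant positive lags then suggest which inputDelays and feedbackDelays are worth trying instead of the default 1:2.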

Choose Input and Feedback Pre/Post-Processing Functions... net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'}; net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'}; net.inputs{3}.processFcns = {'removeconstantrows','mapminmax'};

Delete these statements. These are defaults.

Prepare the Data for Training and Simulation [inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);

Double check dimensions and data classes:

whos inputSeries targetSeries inputs inputStates layerStates targets

Setup Division of Data for Training, Validation, Testing The function DIVIDERAND randomly assigns target values to training, % validation and test sets during training...

net.divideFcn = 'dividerand'; % Divide data randomly

NO, NO, NO!

RANDOM DIVISION DESTROYS AUTO AND CROSS CORRELATIONS

USE 'divideblock'

Hard to believe previous val and test results are "okay" if you used 'dividerand'
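In code, the fix is a one-line swap (a sketch; the ratios are kept from the original post):

```matlab
net.divideFcn = 'divideblock';        % contiguous blocks preserve the correlations
net.divideParam.trainRatio = 70/100;  % same ratios as before
net.divideParam.valRatio   = 15/100;
net.divideParam.testRatio  = 15/100;
```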

The property DIVIDEMODE set to TIMESTEP means that targets are divided into training, validation and test sets ... net.divideParam.trainRatio = 70/100; net.divideParam.valRatio = 15/100; net.divideParam.testRatio = 15/100;

These are defaults. Delete these statements unless you want to change the percentages.

Choose a Training Function ... net.trainFcn = 'trainlm'; % Levenberg-Marquardt Choose a Performance Function ... mse

These are defaults! Delete these statements unless you want to use other choices.

Choose Plot Functions... net.plotFcns = {'plotperform','plottrainstate','plotresponse', ... 'ploterrcorr', 'plotinerrcorr'};

These are part of the seven-plot default list. Delete this statement and you will get these PLUS ploterrhist and plotregression.

Train the Network

[net,tr] = train(net,inputs,targets,inputStates,layerStates);

You can delete most of the following statements. A detailed performance summary is already contained in the training structure tr. Type the command

tr = tr

Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)

Most important are the test and validation results. If they are not very small compared to mean(var(testtarget',1)) and mean(var(valtarget',1)), respectively, then you need to improve your design by using a better choice of delays, or checking out other random weight initializations or maybe even changing the number of hidden nodes from H=10.

Next you should close the loop and test the closed loop trn, val and tst results.

Now is a reasonable time to try new data on the closed-loop net.
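Closing the loop and re-checking performance on the original series might look like this (a sketch using the post's variable names):

```matlab
netc = closeloop(net);                             % route feedback output to feedback input
[xc,xic,aic,tc] = preparets(netc,inputSeries,{},targetSeries);
yc = netc(xc,xic,aic);                             % multistep closed-loop output
closedLoopPerformance = perform(netc,tc,yc)        % compare with the open-loop performance
```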

View the Network view(net)

Probably should view the net right after the train statement

From this part I want to run a new test or forecast with new inputs. This is a new input set, 1x960. The matrix has the same structure as for testing: [a;b;c]

inputSeries2 = tonndata(AUGTH,false,false);

[inputs2,inputStates2,layerStates2,targets2] = preparets(net,inputSeries2);

How can you generate layerstates2 and targets2 without a targetSeries2?

You have to use a closed loop net.

ALWAYS check the preparets I/O

whos inputSeries2 inputs2 inputStates2 layerStates2 targets2

When I want to generate a new output from the network, all of "outputs2" (1x960) is NaN. I suspect that "inputStates2" has NaN values in its second row. Would you let me know how I can resolve the issue?

See above

Hope this helps.

Thank you for formally accepting my answer

Greg

Greg Heath
on 26 Feb 2013

1. Your size notation looks backward. You probably quoted the EXCEL dimensions instead of the MATLAB dimensions. For MATLAB, ALWAYS use column samples:

[I N ] = size(JYTH) % [ 3 2877 ] or 3 X 2877

[ O N ] = size(JYE) % [ 1 2877 ] or 1 X 2877

So, what you have displayed has to be transposed...either before preparets or by preparets when creating the 3 X 2875 and 1 X 2875 cell series that are inputs to train.

I prefer to convert to row series IMMEDIATELY after reading from Excel. It ultimately avoids confusion (BELIEVE ME!).

Again, ALWAYS use whos after preparets to verify sizes and class.

2. After forming netc, evaluate it by using the OLD trn/val/tst data.

3. If the above val and test evaluation of netc is acceptable, predict using the NEW test inputs. There is no way to evaluate the prediction because no corresponding target data is available. Therefore do not use the target input or output in preparets.
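A sketch of that prediction step, following the documented multistep-prediction pattern (note: the three-output form of CLOSELOOP, which converts the final open-loop states for the closed-loop net, may require a newer toolbox version than was current in 2013):

```matlab
[yo,xfo,afo] = net(inputs,inputStates,layerStates);  % run open loop to its final states
[netc,xic,aic] = closeloop(net,xfo,afo);             % closed-loop net seeded with those states
inputSeries2 = tonndata(AUGTH,false,false);          % new exogenous inputs only, no targets
outputs2 = netc(inputSeries2,xic,aic);               % 1 x 960 closed-loop forecast
```

No targets enter preparets here at all; the feedback is generated internally by the closed loop.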

Mei
on 16 Oct 2013

how DO you generate layerstates2 and targets2 without a targetSeries2?

he is testing the trained model on new data, so naturally he would not have a targetSeries for that new data. What is the solution to that?

Greg Heath
on 16 Oct 2013

If the new data immediately follows the data used to design and test the net, the following syntax should have been used

[net,tr,Ys,Es,Xsf,Asf] = train(net,Xs,Ts,Xi,Ai);

Xinew = Xsf; Ainew = Asf;

Ysnew = net(Xsnew,Xinew,Ainew);

Otherwise

Xinew = Xnew(:,1:d); Xsnew = Xnew(:,d+1:end);

but Ainew is not known.

I would try the mean of the previously used test target data rather than use zeros. Perhaps several designs using values in the interval [mean-stdv,mean+stdv] would be useful.


Answer by Mohan
on 26 Feb 2013

The testing is usually done as follows:

a = sim(net,testInput');

where net is the narx net in your program,

testInput is the new data set.

Look up "sim" in the MATLAB help.

Answer by abdulkader helwan
on 19 Dec 2013

Hello. I have created a backpropagation neural network in MATLAB for prediction of heart attack. I have trained it on a dataset and it worked and gave the desired output. The problem is that I don't know how to test it afterwards. If anyone can help, please don't hesitate. This is my code for training the network:

clear all
close all
clc
case_number = 151;
PATTERNS = [];
dataset = xlsread('dataset.xlsx','sheet1');
[row col] = size(dataset);
PATTERNS = [dataset];

% Desired Output Code
D1 = [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0];
D2 = [0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0];
D3 = [0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0];
%********************************************************
dis.out = [D1; D2; D3];

[g,h] = size(PATTERNS);
[m,h] = size(dis.out);
% CREATING AND INITIATING THE NETWORK
net = newff(minmax(PATTERNS),[14 3],{'logsig','logsig'},'traingdx')
net = init(net);
net.LW{2,1} = net.LW{2,1}*0.01;
%net.b{2} = net.b{2}*0.01;
% TRAINING THE NETWORK
net.trainParam.goal = 0.001;   % Sum-squared error goal.
net.trainParam.lr = 0.01;      % Learning Rate.
net.trainParam.alpha = 0.5;
net.trainParam.show = 100;     % Frequency of progress displays (in epochs).
net.trainParam.epochs = 1000;  % Maximum number of epochs to train.
net.trainParam.mc = 0.5;       % Momentum Factor.
k = case_number

for k=1:41

[net,tr] = train(net,PATTERNS,D1); % Normal....

end

actout.normal=sim(net,PATTERNS);

actout.normal

norm.test

%

for k=42:97

[net,tr] = train(net,PATTERNS,D2); % Abnormal....

end

act.abnormal=sim(net,PATTERNS);

act.abnormal

for k=98:151

[net,tr] = train(net,PATTERNS,D3); % Severe....

end

act.Severe=sim(net,PATTERNS);

act.Severe


