Weird problem with NARX prediction

Hi all,
Now I'm really confused about NARX prediction. I thought NARX would give y(t) = f(y(t-1), x(t-1)); instead, it seems to give y(t-1) = f(y(t-1), x(t-1)). I used a sine function as a test. When I use ao(1) as the input, I expected a value close to ao(2) as the prediction, but it actually gave me a value close to ao(1). There is no prediction at all. I'm not sure if I've misunderstood something. Could anyone help me get a real prediction working? Thank you so much.
See my example below:
t=1:1000;
a=sin(t*pi/10);
b=a;
ao=a(end-49:end);
bo=ao;
ap=a(1:end-50);
a1=mat2cell(ap,1,ones(1,length(ap)));
b1=mat2cell(ap,1,ones(1,length(ap)));
inputSeries = b1;%simplenarxInputs;
targetSeries =a1;%simplenarxTargets;
n=1;
% Create a Nonlinear Autoregressive Network with External Input
inputDelays = n:n;
feedbackDelays = n:n;
hiddenLayerSize = 10;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);
% Choose Input and Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
% Customize input parameters at: net.inputs{i}.processParam
% Customize output parameters at: net.outputs{i}.processParam
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
% The function DIVIDERAND randomly assigns target values to training,
% validation and test sets during training.
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
% With DIVIDEMODE set to 'value', every target value is assigned
% individually to the training, validation or test set.
% For a list of data division modes type: help nntype_data_division_mode
net.divideMode = 'value'; % Divide up every value
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Choose a Training Function
% For a list of all training functions type: help nntrain
% Customize training parameters at: net.trainParam
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
% Customize performance parameters at: net.performParam
net.performFcn = 'mse'; % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
% Customize plot parameters at: net.plotParam
net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
'ploterrcorr', 'plotinerrcorr'};
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
tmp=[];
for i = 1:length(ao)
    aao = [ao(i) nan];
    inputSeries1 = mat2cell(aao,1,ones(1,length(aao)));
    targetSeries1 = mat2cell(aao,1,ones(1,length(aao)));
    % inputSeries1{end}=nan;
    % targetSeries1{end}=nan;
    % inputSeries1{end-1}=nan;
    % targetSeries1{end-1}=nan;
    nets = net;
    % nets = removedelay(net);
    % nets = closeloop(net);
    [inputs,inputStates,layerStates,targets] = preparets(nets,inputSeries1,{},targetSeries1);
    yp = nets(inputs,inputStates,layerStates);
    yp = cell2mat(yp);
    tmp = [tmp yp(1)];
end
plot(tmp(1:end-n),'b-*');
hold on
plot(ao(1+n:end),'r-o');
hold off
legend('prediction','obs');
return
% View the Network
% view(net)

Accepted Answer

Greg Heath on 21 May 2014
Edited: Greg Heath on 21 May 2014
In general, 0<= ID <= idmax and 1<= FD <= fdmax.
For prediction, 1<= ID <= idmax and 1 <= FD <= fdmax.
In other words, for prediction, make sure ID >= 1.
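As a minimal sketch (the delay maxima are illustrative), these constraints translate into narxnet arguments like this:

```matlab
% Delay ranges for a NARX net: ID (input delays) and FD (feedback delays).
% For prediction, both ranges must start at 1, so that y(t) depends only
% on past values x(t-1), ..., y(t-1), ... and never on the current x(t).
idmax = 2;                   % illustrative choice
fdmax = 2;                   % illustrative choice
inputDelays    = 1:idmax;    % ID >= 1 for prediction
feedbackDelays = 1:fdmax;    % FD >= 1 always
net = narxnet(inputDelays, feedbackDelays, 10);
```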
I do not understand your definitions of input and target.
DO NOT USE 'dividerand' for timeseries. It destroys the correlations between inputs, delays, and targets. Use 'divideblock' (my choice) or 'divideind' (with interleaved train/val/test signals).
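A minimal sketch of that change: 'divideblock' assigns contiguous blocks of timesteps to the training, validation and test sets, so the temporal structure is preserved.

```matlab
% Keep timesteps contiguous instead of scattering them randomly
net.divideFcn = 'divideblock';          % instead of 'dividerand'
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio   = 15/100;
net.divideParam.testRatio  = 15/100;
```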
Unfortunately, my computer is misbehaving and I cannot run any examples.
However, you can find some of my posted examples by searching
greg closeloop
  2 Comments
Xiangming on 21 May 2014
Hi Greg,
Thank you for your comments. In my example, both ID and FD are 1:1, and the inputs and targets are equal (easy for testing). t = 1:150 is used for training/validation/testing, and the last 50 records are used to test the prediction. My point is: if I use [a(151) NaN] as the input, the model should give me a value similar to a(152). But it gave me a value similar to a(151). That's not prediction at all. How can I get a(152) instead? Thanks.
Greg Heath on 26 May 2014
It gave you a similar value because
"Inputs and Targets are equal ( easy for test)"
They should not be equal.
input = f(t)
target = f(t+d) % prediction
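A minimal sketch of that fix against the original example (names are illustrative): shift the target d steps ahead of the input before converting to cell arrays, so the network is actually trained to map the present onto the future.

```matlab
d = 1;                          % prediction horizon
t = 1:1000;
a = sin(t*pi/10);
x = a(1:end-d);                 % input  = f(t)
y = a(1+d:end);                 % target = f(t+d)
inputSeries  = con2seq(x);      % same as mat2cell(x,1,ones(1,numel(x)))
targetSeries = con2seq(y);
```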
Also note that sine and cosine are solutions of 2nd-order linear difference equations and can easily be reconstructed from three or more points in a period. For example, add the following two identities and rearrange to get a 2-point prediction for cos(t+d):
cos(t + d) = cos(t)*cos(d) - sin(t)*sin(d)
cos(t - d) = cos(t)*cos(d) + sin(t)*sin(d)
Adding them gives cos(t+d) + cos(t-d) = 2*cos(d)*cos(t), i.e. cos(t+d) = 2*cos(d)*cos(t) - cos(t-d).
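A quick numerical check of the resulting 2-point predictor (a sketch; the step d is an arbitrary choice):

```matlab
d = pi/10;                         % sample spacing
t = 0:d:4*pi;
y = cos(t);
% Adding the two identities and rearranging gives the exact 2-point
% predictor y(k+1) = 2*cos(d)*y(k) - y(k-1) for a sampled cosine.
yp = 2*cos(d)*y(2:end-1) - y(1:end-2);
max(abs(yp - y(3:end)))            % at round-off level, i.e. essentially zero
```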


