
Elman neural network
Elman networks are feedforward networks (feedforwardnet) with the addition of layer recurrent connections with tap delays.

With the availability of full dynamic derivative calculations (fpderiv and bttderiv), the Elman network is no longer recommended except for historical and research purposes. For more accurate learning try time delay (timedelaynet), layer recurrent (layrecnet), NARX (narxnet), and NAR (narnet) neural networks.
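As a brief sketch of one recommended alternative, a layer recurrent network with the same tap delays and hidden layer size can be created and trained in the same way as the Elman example below (layrecnet is the standard toolbox function; the data set and delay values here are illustrative):

```matlab
% Layer recurrent network with 1:2 tap delays and 10 hidden neurons;
% unlike elmannet, it trains with full dynamic derivative calculations.
[X,T] = simpleseries_dataset;
net = layrecnet(1:2,10);
[Xs,Xi,Ai,Ts] = preparets(net,X,T);
net = train(net,Xs,Ts,Xi,Ai);
```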

Elman networks with one or more hidden layers can learn any dynamic input-output relationship arbitrarily well, given enough neurons in the hidden layers. However, Elman networks use simplified derivative calculations (staticderiv, which ignores delayed connections), which makes their training less reliable.

elmannet(layerdelays,hiddenSizes,trainFcn) takes these arguments:

layerdelays: Row vector of increasing 0 or positive delays (default = 1:2)

hiddenSizes: Row vector of one or more hidden layer sizes (default = 10)

trainFcn: Training function (default = 'trainlm')

and returns an Elman neural network.
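For instance, a minimal sketch of the constructor call with the defaults written out explicitly:

```matlab
% Equivalent to elmannet() with all defaults:
% tap delays 1:2, one hidden layer of 10 neurons, Levenberg-Marquardt training
net = elmannet(1:2,10,'trainlm');
```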


Here an Elman neural network is used to solve a simple time series problem.

[X,T] = simpleseries_dataset;        % load example time series inputs and targets
net = elmannet(1:2,10);              % Elman network: 1:2 tap delays, 10 hidden neurons
[Xs,Xi,Ai,Ts] = preparets(net,X,T);  % shift the series and compute initial delay states
net = train(net,Xs,Ts,Xi,Ai);        % train the network
Y = net(Xs,Xi,Ai);                   % simulate the trained network
perf = perform(net,Ts,Y)             % evaluate performance (mean squared error by default)

Introduced in R2010b
