MATLAB Answers

How can I manually perform an elmannet neural network calculation?

Asked by Qinwan Rabbani on 30 Nov 2016
Latest activity: commented on by Greg Heath on 6 Dec 2016

1 Answer

Answer by Greg Heath on 30 Nov 2016 (Accepted Answer)

My guess is:
z(t) = B1 + IW * [ x(t); z(t-1); z(t-2)];
y(t) = B2 + LW * z(t);
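For the default two-layer elmannet (a tansig hidden layer and a purelin output layer), a fuller sketch of the same calculation might look like the following. This is hedged, not a verified implementation: it assumes layer delays 1:2 as in the equations above, the standard toolbox weight fields (net.IW, net.LW, net.b), and that zprev1 and zprev2 hold the hidden-layer outputs you saved from the previous two time steps.
% Hedged sketch: manual forward pass for a default-style Elman net
% with layer delays 1:2 (tansig hidden layer, purelin output layer).
IW  = net.IW{1,1};    % input -> hidden weights
LW1 = net.LW{1,1};    % delayed hidden -> hidden (context) weights
LW2 = net.LW{2,1};    % hidden -> output weights
B1  = net.b{1};
B2  = net.b{2};
% zprev1 = z(t-1), zprev2 = z(t-2): assumed saved from earlier steps
z = tansig(B1 + IW*x + LW1*[zprev1; zprev2]);   % hidden state z(t)
y = B2 + LW2*z;                                 % output y(t), purelin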
Hope this helps
Thank you for formally accepting my answer
Greg

  4 Comments

> So the network diagram is a little different than what you're suggesting.
Incorrect.
>Only the hidden layers have context layers, which feed in the previous hidden unit activation(s) into the same hidden layer.
That is exemplified in the expression for z(t), which contains IW.
>The output layer, while in theory could have a context layer, does not.
Incorrect. That is exemplified in the expression for y(t), which contains LW.
>Also, the context layer has its own weight matrix saved in net.LW separate from the regular connections.
???
>I tried saving the exact hidden unit activations and feeding them in using the appropriate connections, but the result was completely different. I'm guessing there's some sort of extra processing I am not aware of.
You have to normalize the data before training and unnormalize the output after training.
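The toolbox's default normalization is mapminmax. A minimal sketch, assuming x and t are your raw input and target series and yn is the network output computed on normalized data:
[xn, xsettings] = mapminmax(x);            % map inputs to [-1,1]
[tn, tsettings] = mapminmax(t);            % map targets to [-1,1]
% ... run the (manual) forward pass on xn to get yn ...
y = mapminmax('reverse', yn, tsettings);   % map output back to target units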
This is the diagram for a 4 layer network with two hidden layers and a time delay of only 1: [network diagram image not shown]
This is net.IW: [screenshot not shown]
This is net.LW: [screenshot not shown]
From what I can see, hidden layers 1 and 2 feed back into themselves via "context layers", but the output layer does not. The IW matrix holds only the forward connections from the input layer to hidden layer 1. In the LW cell array, it looks to me like element {2,1} stores the connections from hidden layer 1 to hidden layer 2, and {3,2} stores the connections from hidden layer 2 to the output. However, there are separate elements storing the "context weights": {1,1} holds the connections from hidden layer 1 to itself and {2,2} from hidden layer 2 to itself, judging by their dimensions being (hidden size)^2.
The code you've given produces dimension-mismatch errors, since the recurrent/context connections have their own separate weight matrices. Regardless, even if I use the correct weights, what do I normalize the hidden activations to? The solution I referenced in my original question used normalization parameters that only apply to the output activation. (I've tested applying them to saved hidden activations, and normalizing them with those parameters does not give the correct results.)
INCORRECT!
The input layer contains NON-NEURON fan-in units and is never counted when referring to an N-layer (1 output + N-1 hidden) neural net.
The equations I have posted are the equations for the DEFAULT 2-LAYER ELMAN net with 1 hidden layer.
As proof, just type in the code from the HELP OR DOC documentation and remove the ending semicolon from the view(net) command.
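The documented example is along these lines (a hedged reconstruction; simpleseries_dataset, preparets, and elmannet are standard Neural Network Toolbox functions):
[X,T] = simpleseries_dataset;         % sample time series shipped with the toolbox
net = elmannet(1:2,10);               % Elman net: layer delays 1:2, 10 hidden units
[Xs,Xi,Ai,Ts] = preparets(net,X,T);   % shift/align data for the delay states
net = train(net,Xs,Ts,Xi,Ai);
view(net)                             % opens the layer diagram of the trained net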
The diagram you have just shown is a NON-DEFAULT 3-layer ELMAN net with 2 hidden layers.
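For that non-default net (two self-recurrent tansig hidden layers with a delay of 1, purelin output), a hedged sketch of the corresponding manual pass, using the LW indexing described above and assuming z1prev and z2prev are the hidden-layer outputs saved from the previous time step:
z1 = tansig(net.b{1} + net.IW{1,1}*x  + net.LW{1,1}*z1prev);  % hidden layer 1
z2 = tansig(net.b{2} + net.LW{2,1}*z1 + net.LW{2,2}*z2prev);  % hidden layer 2
y  = net.b{3} + net.LW{3,2}*z2;                               % output layer
% x must be normalized exactly as during training, and y reverse-mapped
% afterward (e.g. with mapminmax 'reverse' and the saved target settings).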
Hope this helps.
Thank you for formally accepting my answer
Greg
