Sadly, under the current Neural Network Toolbox (R2015b), implementing custom functions (for example, performance functions) is undocumented. `help nncustom` instructs you to use the built-in functions as templates for writing your own; for a cost function it suggests `mse` and the accompanying subfunctions in the `+mse` folder. Unfortunately, the code in these files is uncommented, and it is not obvious how these functions are used by the toolbox or what is expected of them.
These are the contents of the `+mse` folder. The cost function appears to be decomposed into these atomic functions, but their exact roles are unclear.
```
>> ls +mse

.             apply.m       forwardprop.m  perfwb.m
..            backprop.m    name.m         type.m
```
Also, for reference, the `apply`, `forwardprop`, and `backprop` functions are reproduced below:
```matlab
% +mse/apply.m
function perfs = apply(t,y,e,param)
perfs = e .* e;

% +mse/forwardprop.m
function dperf = forwardprop(dy,t,y,e,param)
dperf = bsxfun(@times,dy,-2*e);

% +mse/backprop.m
function dy = backprop(t,y,e,param)
dy = -2*e;
```
I'm having a hard time understanding how these work together to compute mean squared error. It seems the classical MSE implementation was decomposed into steps to achieve more flexibility, but combined with the lack of documentation this makes it very hard to follow.
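To make my current reading concrete, here is a NumPy sketch of what I *think* each subfunction does (the roles I ascribe to them, and the final averaging step, are my assumptions, not anything documented): `apply` produces the element-wise performance values, `backprop` returns the derivative of those values with respect to the network outputs `y` (with `e = t - y`, the derivative of `e.^2` is `-2e`), and `forwardprop` chains an incoming forward-mode derivative `dy` through the same factor.

```python
import numpy as np

# Hypothetical NumPy mirror of the +mse subfunctions, for my own understanding.
def apply(t, y, e, param=None):
    # element-wise performance: squared errors
    return e * e

def backprop(t, y, e, param=None):
    # d(perf)/dy: since e = t - y, d(e^2)/dy = 2e * (-1) = -2e
    return -2 * e

def forwardprop(dy, t, y, e, param=None):
    # forward mode: chain an incoming derivative dy with d(perf)/dy
    return dy * (-2 * e)

t = np.array([1.0, 2.0, 3.0])   # targets
y = np.array([0.5, 2.5, 2.0])   # network outputs
e = t - y                       # errors, as the toolbox passes them in

# my assumption: the toolbox averages the element-wise perfs to get MSE
mse = np.mean(apply(t, y, e))   # -> 0.5 for this data

# sanity check: backprop should match a finite-difference derivative of apply
h = 1e-6
numeric = (apply(t, y + h, t - (y + h)) - apply(t, y, e)) / h
```

If this reading is right, the decomposition exists so the toolbox can reuse the same element-wise kernel for performance evaluation, reverse-mode gradients, and forward-mode (Jacobian-style) derivatives.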
As a note, by comparing the two folders file by file, I found that the only difference between the Mean Absolute Error and Sum Absolute Error implementations is that `normalize.m` returns `true` for MAE and `false` for SAE, so it presumably controls the division by the number of elements.
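If that guess about the `normalize` flag is correct, the relationship between the two would reduce to the following sketch (again an assumption on my part, written in NumPy for illustration):

```python
import numpy as np

def sae(e):
    # Sum Absolute Error: normalize flag false -> plain sum
    return np.sum(np.abs(e))

def mae(e):
    # Mean Absolute Error: normalize flag true -> divide by element count
    return np.sum(np.abs(e)) / e.size

e = np.array([0.5, -0.5, 1.0])
# sae(e) -> 2.0, mae(e) -> 2.0 / 3
```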
What are the roles of the cost function's subfunctions, and how should they be implemented?