Deep Learning Toolbox Data Conventions

Dimensions

The following dimensions are used in describing both the network signals that users commonly see and those used by the utility functions:

Ni = Number of network inputs

= net.numInputs

Ri = Number of elements in input i

= net.inputs{i}.size

Nl = Number of layers

= net.numLayers

Si = Number of neurons in layer i

= net.layers{i}.size

Nt = Number of targets

 

Vi = Number of elements in target i, equal to Sj, where j is the ith layer with a target. (A layer n has a target if net.targets(n) == 1.)

 

No = Number of network outputs

 

Ui = Number of elements in output i, equal to Sj, where j is the ith layer with an output. (A layer n has an output if net.outputs(n) == 1.)

 

ID = Number of input delays

= net.numInputDelays

LD = Number of layer delays

= net.numLayerDelays

TS = Number of time steps

 

Q = Number of concurrent vectors or sequences

 
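To make the symbols above concrete, here is an illustrative sketch (plain Python, not Toolbox code) that writes them out for a hypothetical static two-layer network: one 3-element input, a 4-neuron hidden layer, and a 2-neuron output layer with both an output and a target, simulated for 5 time steps with 10 concurrent sequences. All the specific sizes are assumptions chosen for the example.

```python
# Dimension symbols for a hypothetical 2-layer feedforward network.
# Only the last layer has an output and a target, so Ui and Vi both
# equal S2; a static network has no input or layer delays.
dims = {
    "Ni": 1,          # number of network inputs
    "Ri": [3],        # elements in each input
    "Nl": 2,          # number of layers
    "Si": [4, 2],     # neurons in each layer
    "No": 1,          # number of network outputs
    "Ui": [2],        # elements in each output (= S2)
    "Nt": 1,          # number of targets
    "Vi": [2],        # elements in each target (= S2)
    "ID": 0,          # input delays (none: static network)
    "LD": 0,          # layer delays (none: static network)
    "TS": 5,          # time steps
    "Q": 10,          # concurrent vectors or sequences
}
```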

Variables

The variables a user commonly uses when defining a simulation or training session are

P

Network inputs

Ni-by-TS cell array, where each element P{i,ts} is an Ri-by-Q matrix

Pi

Initial input delay conditions

Ni-by-ID cell array, where each element Pi{i,k} is an Ri-by-Q matrix

Ai

Initial layer delay conditions

Nl-by-LD cell array, where each element Ai{i,k} is an Si-by-Q matrix

T

Network targets

Nt-by-TS cell array, where each element T{i,ts} is a Vi-by-Q matrix
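The cell-array shapes above can be sketched in Python (the Toolbox itself is MATLAB): a cell array is modeled as a nested list, and an R-by-Q matrix as a list of R rows of length Q. The sizes used here (Ni = 2, Ri = [3, 1], TS = 4, Q = 5) are hypothetical.

```python
# Model of the P convention: an Ni-by-TS cell array whose element
# P[i][ts] is an Ri-by-Q matrix. T, Pi, and Ai follow the same
# pattern with their own outer and inner dimensions.
Ni, Ri, TS, Q = 2, [3, 1], 4, 5

def zeros(rows, cols):
    """An all-zero rows-by-cols matrix as a list of row lists."""
    return [[0.0] * cols for _ in range(rows)]

# P{i,ts} is an Ri-by-Q matrix of the inputs to input i at step ts.
P = [[zeros(Ri[i], Q) for _ in range(TS)] for i in range(Ni)]
```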

These variables are returned by simulation and training calls:

Y

Network outputs

No-by-TS cell array, where each element Y{i,ts} is a Ui-by-Q matrix

E

Network errors

Nt-by-TS cell array, where each element E{i,ts} is a Vi-by-Q matrix

perf

Network performance

 

Utility Function Variables

These variables are used only by the utility functions.

Pc

Combined inputs

Ni-by-(ID+TS) cell array, where each element Pc{i,ts} is an Ri-by-Q matrix

Pc = [Pi P] = Initial input delay conditions and network inputs
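The concatenation Pc = [Pi P] appends along the time dimension: for each input, the ID delay-condition matrices come first, followed by the TS input matrices. A minimal sketch, with hypothetical sizes (Ni = 1, ID = 2, TS = 3) and stand-in string labels instead of matrices so the ordering is easy to see:

```python
# Sketch of Pc = [Pi P]: per-input horizontal concatenation of the
# initial delay conditions and the inputs, giving Ni-by-(ID+TS).
Ni, ID, TS = 1, 2, 3
Pi = [[f"Pi{{0,{k}}}" for k in range(ID)]]    # Ni-by-ID cell array
P  = [[f"P{{0,{ts}}}" for ts in range(TS)]]   # Ni-by-TS cell array

Pc = [Pi[i] + P[i] for i in range(Ni)]        # Ni-by-(ID+TS)
```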

Pd

Delayed inputs

Nl-by-Ni-by-TS cell array, where each element Pd{i,j,ts} is an (Rj*IWD(i,j))-by-Q matrix, and where IWD(i,j) is the number of delay taps associated with the input weight to layer i from input j

Equivalently,

IWD(i,j) = length(net.inputWeights{i,j}.delays)

Pd is the result of passing the elements of P through each input weight's tap delay lines. Because inputs are always transformed by input delays in the same way, it saves time to do that operation only once instead of for every training step.
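A rough sketch of what "passing P through a tap delay line" means: for delays = [0, 1], the delayed input at step ts stacks the input at ts on top of the input at ts-1, reaching back into the initial delay conditions Pi for steps before the first. A single input with scalar values keeps the indexing visible; all sizes and values here are hypothetical.

```python
# Tap delay line sketch: Pd[ts] stacks Pc at offset ts-d for each
# tap delay d, where Pc = [Pi P] is the combined sequence.
delays = [0, 1]        # this input weight's tap delays
Pi = [10, 20]          # ID = 2 initial delay conditions (oldest first)
P  = [1, 2, 3]         # TS = 3 network inputs
Pc = Pi + P            # combined, as in Pc = [Pi P]
ID = len(Pi)

Pd = [[Pc[ID + ts - d] for d in delays] for ts in range(len(P))]
# At ts = 0 the d = 1 tap reads the last initial condition, 20.
```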

BZ

Concurrent bias vectors

Nl-by-1 cell array, where each element BZ{i} is an Si-by-Q matrix

Each matrix is simply Q copies of the net.b{i} bias vector.
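The point of BZ is that a bias vector of length Si, once copied Q times, can be added element-wise to an Si-by-Q matrix of weighted inputs. A sketch with hypothetical sizes (Si = 3, Q = 4):

```python
# Sketch of one BZ{i}: the bias vector b{i} repeated across Q
# columns, so each row holds Q copies of one bias value.
Q = 4
b = [0.5, -1.0, 2.0]              # net.b{i}, with Si = 3
BZ_i = [[bj] * Q for bj in b]     # Si-by-Q matrix
```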

IWZ

Weighted inputs

Nl-by-Ni-by-TS cell array, where each element IWZ{i,j,ts} is an Si-by-Q matrix

LWZ

Weighted layer outputs

Nl-by-Nl-by-TS cell array, where each element LWZ{i,j,ts} is an Si-by-Q matrix

N

Net inputs

Nl-by-TS cell array, where each element N{i,ts} is an Si-by-Q matrix

A

Layer outputs

Nl-by-TS cell array, where each element A{i,ts} is an Si-by-Q matrix

Ac

Combined layer outputs

Nl-by-(LD+TS) cell array, where each element Ac{i,ts} is an Si-by-Q matrix

Ac = [Ai A] = Initial layer delay conditions and layer outputs.

Tl

Layer targets

Nl-by-TS cell array, where each element Tl{i,ts} is an Si-by-Q matrix

Tl contains empty matrices [] in rows of layers i not associated with targets, indicated by net.targets(i) == 0.
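A sketch of the Tl convention, with net.targets modeled as a plain 0/1 list and an empty list standing in for the empty matrix []. Two layers, of which only the second has a target; all sizes are hypothetical. El follows the same layout.

```python
# Sketch of Tl: one row per layer; rows of layers without targets
# hold empty matrices, targeted rows hold Si-by-Q matrices.
targets = [0, 1]      # layer 2 has a target, layer 1 does not
TS, S2, Q = 2, 2, 3

def zeros(rows, cols):
    return [[0.0] * cols for _ in range(rows)]

Tl = [[zeros(S2, Q) if targets[i] else [] for _ in range(TS)]
      for i in range(len(targets))]
```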

El

Layer errors

Nl-by-TS cell array, where each element El{i,ts} is an Si-by-Q matrix

El contains empty matrices [] in rows of layers i not associated with targets, indicated by net.targets(i) == 0.

X

Column vector of all weight and bias values