# dlarray

Deep learning array for customization

## Description

A deep learning array stores data with optional data format labels for custom training loops, and enables functions to compute and use derivatives through automatic differentiation.

**Tip**

For most deep learning tasks, you can use a pretrained neural network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Retrain Neural Network to Classify New Images. Alternatively, you can create and train neural networks from scratch using the `trainnet` and `trainingOptions` functions.

If the `trainingOptions` function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Train Network Using Custom Training Loop.

If the `trainnet` function does not provide the loss function that you need for your task, then you can specify a custom loss function to the `trainnet` function as a function handle. For loss functions that require more inputs than the predictions and targets (for example, loss functions that require access to the neural network or additional inputs), train the model using a custom training loop. To learn more, see Train Network Using Custom Training Loop.

If Deep Learning Toolbox™ does not provide the layers you need for your task, then you can create a custom layer. To learn more, see Define Custom Deep Learning Layers. For models that cannot be specified as networks of layers, you can define the model as a function. To learn more, see Train Network Using Model Function.

For more information about which training method to use for which task, see Train Deep Learning Model in MATLAB.

## Creation

### Description

### Input Arguments

### Output Arguments
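The creation syntax details are elided above. As a minimal sketch, a `dlarray` is constructed from a numeric array, optionally with a data format string that labels each dimension:

```matlab
% Batch of 128 grayscale 28-by-28 images.
X = rand(28,28,1,128);

% Unformatted dlarray: dimensions carry no labels.
dlX = dlarray(X);

% Formatted dlarray: two spatial ('S'), one channel ('C'),
% and one batch ('B') dimension.
dlXFormatted = dlarray(X,"SSCB");
```

The format string must contain one label per dimension of the input data; the example sizes above are illustrative.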

## Usage

`dlarray` data formats enable you to execute the functions in the following table with assurance that the data has the appropriate shape.

Function | Operation | Validates Input Dimension | Affects Size of Input Dimension |
---|---|---|---|
`avgpool` | Compute the average of the input data over moving rectangular (or cuboidal) spatial (`'S'`) regions defined by a pool size parameter. | `'S'` | `'S'` |
`batchnorm` | Normalize the values contained in each channel (`'C'`) of the input data. | `'C'` | |
`crossentropy` | Compute the cross-entropy between estimates and target values, averaged by the size of the batch (`'B'`) dimension. | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Estimates and target arrays must have the same sizes.) | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Output is an unformatted scalar.) |
`dlconv` | Compute the deep learning convolution of the input data using an array of filters, matching the number of spatial (`'S'`) and (a function of the) channel (`'C'`) dimensions of the input, and adding a constant bias. | `'S'`, `'C'` | `'S'`, `'C'` |
`dltranspconv` | Compute the deep learning transposed convolution of the input data using an array of filters, matching the number of spatial (`'S'`) and (a function of the) channel (`'C'`) dimensions of the input, and adding a constant bias. | `'S'`, `'C'` | `'S'`, `'C'` |
`fullyconnect` | Compute a weighted sum of the input data and apply a bias for each batch (`'B'`) and time (`'T'`) dimension. | `'S'`, `'C'`, `'U'` | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Output always has data format `'CB'`, `'CT'`, or `'CTB'`.) |
`gru` | Apply a gated recurrent unit calculation to the input data. | `'S'`, `'C'`, `'T'` | `'C'` |
`lstm` | Apply a long short-term memory calculation to the input data. | `'S'`, `'C'`, `'T'` | `'C'` |
`maxpool` | Compute the maximum of the input data over moving rectangular spatial (`'S'`) regions defined by a pool size parameter. | `'S'` | `'S'` |
`maxunpool` | Compute the unpooling operation over the spatial (`'S'`) dimensions. | `'S'` | `'S'` |
`mse` | Compute the half mean squared error between estimates and target values, averaged by the size of the batch (`'B'`) dimension. | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Estimates and target arrays must have the same sizes.) | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Output is an unformatted scalar.) |
`softmax` | Apply the softmax activation to each channel (`'C'`) of the input data. | `'C'` | |

These functions require each dimension to have a label. You can specify the dimension label format by providing the first input as a formatted `dlarray`, or by using the `'DataFormat'` name-value argument of the function.
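As a small sketch of both options, using `avgpool` with illustrative data sizes:

```matlab
X = rand(28,28,3,16);

% Option 1: format the dlarray itself; avgpool pools over the 'S' dimensions.
dlX = dlarray(X,"SSCB");
Y1 = avgpool(dlX,2);

% Option 2: keep the dlarray unformatted and pass the format to the function.
dlXu = dlarray(X);
Y2 = avgpool(dlXu,2,'DataFormat',"SSCB");
```

With Option 1 the output is a formatted `dlarray`; with Option 2 the output is unformatted, matching the input.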

`dlarray` enforces the dimension label ordering `'SCBTU'`. This enforcement eliminates ambiguous semantics in operations that implicitly match labels between inputs. `dlarray` also enforces that the dimension labels `'C'`, `'B'`, and `'T'` can each appear at most once. The functions that use these dimension labels accept at most one dimension for each label.
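A short illustration of the ordering rule: when you supply labels out of `'SCBTU'` order, `dlarray` sorts the labels and permutes the underlying data to match.

```matlab
% 5 observations ('B') of 3 channels ('C'), specified batch-first.
X = rand(5,3);
dlX = dlarray(X,"BC");

% The labels are reordered to follow 'SCBTU', and the data is permuted.
dims(dlX)   % 'CB'
size(dlX)   % [3 5]
```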

`dlarray` provides functions for obtaining the data format associated with a `dlarray` (`dims`), removing the data format (`stripdims`), and obtaining the dimensions associated with specific dimension labels (`finddim`).
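A brief sketch of these three functions together, with illustrative data sizes:

```matlab
dlX = dlarray(rand(28,28,3,16),"SSCB");

fmt  = dims(dlX);          % 'SSCB' — the data format string
cdim = finddim(dlX,"C");   % 3 — index of the channel dimension
raw  = stripdims(dlX);     % unformatted dlarray with the same data
```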

For more information on how a `dlarray` behaves with formats, see Notable dlarray Behaviors.

## Object Functions

Function | Description |
---|---|
`avgpool` | Pool data to average values over spatial dimensions |
`batchnorm` | Normalize data across all observations for each channel independently |
`crossentropy` | Cross-entropy loss for classification tasks |
`indexcrossentropy` | Index cross-entropy loss for classification tasks |
`dims` | Data format of `dlarray` object |
`dlconv` | Deep learning convolution |
`dldivergence` | Divergence of deep learning data |
`dlgradient` | Compute gradients for custom training loops using automatic differentiation |
`dljacobian` | Jacobian matrix deep learning operation |
`dllaplacian` | Laplacian of deep learning data |
`dltranspconv` | Deep learning transposed convolution |
`extractdata` | Extract data from `dlarray` |
`finddim` | Find dimensions with specified label |
`fullyconnect` | Sum all weighted input data and apply a bias |
`gru` | Gated recurrent unit |
`leakyrelu` | Apply leaky rectified linear unit activation |
`lstm` | Long short-term memory |
`maxpool` | Pool data to maximum value |
`maxunpool` | Unpool the output of a maximum pooling operation |
`mse` | Half mean squared error |
`relu` | Apply rectified linear unit activation |
`sigmoid` | Apply sigmoid activation |
`softmax` | Apply softmax activation to channel dimension |
`stripdims` | Remove `dlarray` data format |

A `dlarray` also supports functions for numeric, matrix, and other operations. See the full list in List of Functions with dlarray Support.

## Examples

## Tips

- A `dlgradient` call must be inside a function. To obtain a numeric value of a gradient, you must evaluate the function using `dlfeval`, and the argument to the function must be a `dlarray`. See Use Automatic Differentiation In Deep Learning Toolbox.
- To enable the correct evaluation of gradients, `dlfeval` must call functions that use only supported functions for `dlarray`. See List of Functions with dlarray Support.
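The pattern described above can be sketched as follows; the function name `modelLoss` is illustrative:

```matlab
% Evaluate the function and its gradient using dlfeval.
x = dlarray([1 2 3]);
[y,grad] = dlfeval(@modelLoss,x);

% dlgradient must be called inside a function that dlfeval evaluates.
function [y,grad] = modelLoss(x)
    y = sum(x.^2,"all");      % scalar loss built from supported functions
    grad = dlgradient(y,x);   % gradient of y with respect to x, i.e. 2*x
end
```

Calling `modelLoss` directly (without `dlfeval`) errors, because the automatic differentiation trace is only recorded inside `dlfeval`.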

## Extended Capabilities

## Version History

**Introduced in R2019b**