decode

Class: Autoencoder

Decode encoded data

Syntax

``Y = decode(autoenc,Z)``

Description


`Y = decode(autoenc,Z)` returns the decoded data `Y`, using the autoencoder `autoenc`.

Input Arguments


`autoenc` — Trained autoencoder, returned by the `trainAutoencoder` function as an object of the `Autoencoder` class.

`Z` — Data encoded by `autoenc`, specified as a matrix. Each column of `Z` represents an encoded sample (observation).

Data Types: `single` | `double`

Output Arguments


`Y` — Decoded data, returned as a matrix or a cell array of image data.

If the autoencoder `autoenc` was trained on a cell array of image data, then `Y` is also a cell array of images.

If the autoencoder `autoenc` was trained on a matrix, then `Y` is also a matrix, where each column of `Y` corresponds to one sample or observation.
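A minimal sketch of the matrix case, using synthetic data (the feature count, sample count, and hidden size below are illustrative, not from this page):

```matlab
% Train on a matrix: each column is one observation.
X = rand(10,1000);                 % 10 features, 1000 samples
autoenc = trainAutoencoder(X,5);   % hidden size of 5
Z = encode(autoenc,X);             % 5-by-1000 encoded matrix
Y = decode(autoenc,Z);             % 10-by-1000 decoded matrix, same size as X
```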

Examples

Decode Encoded Data for New Images


Load the training data.

```matlab
X = digitTrainCellArrayData;
```

`X` is a 1-by-5000 cell array, where each cell contains a 28-by-28 matrix representing a synthetic image of a handwritten digit.

Train an autoencoder using the training data with a hidden size of 15.

```matlab
hiddenSize = 15;
autoenc = trainAutoencoder(X,hiddenSize);
```

Extract the encoded data for new images using the autoencoder.

```matlab
Xnew = digitTestCellArrayData;
features = encode(autoenc,Xnew);
```

Decode the encoded data from the autoencoder.

```matlab
Y = decode(autoenc,features);
```

`Y` is a 1-by-5000 cell array, where each cell contains a 28-by-28 matrix representing a synthetic image of a handwritten digit.
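As a quick check of reconstruction quality, the decoded images can be compared with the inputs. This step is not part of the original example; the error metric below is an illustrative choice:

```matlab
% Flatten both cell arrays into 28-by-(28*5000) matrices and
% compute the mean squared reconstruction error.
E = cell2mat(Xnew) - cell2mat(Y);
reconError = mean(E(:).^2);
```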

Algorithms

If the input to an autoencoder is a vector $x \in \mathbb{R}^{D_x}$, then the encoder maps the vector $x$ to another vector $z \in \mathbb{R}^{D^{(1)}}$ as follows:

$$z = h^{(1)}\left(W^{(1)}x + b^{(1)}\right),$$

where the superscript $(1)$ indicates the first layer. $h^{(1)}\colon \mathbb{R}^{D^{(1)}} \to \mathbb{R}^{D^{(1)}}$ is the transfer function for the encoder, $W^{(1)} \in \mathbb{R}^{D^{(1)} \times D_x}$ is a weight matrix, and $b^{(1)} \in \mathbb{R}^{D^{(1)}}$ is a bias vector. The decoder then maps the encoded representation $z$ back to an estimate of the original input vector $x$ as follows:

$$\hat{x} = h^{(2)}\left(W^{(2)}z + b^{(2)}\right),$$

where the superscript $(2)$ indicates the second layer. $h^{(2)}\colon \mathbb{R}^{D_x} \to \mathbb{R}^{D_x}$ is the transfer function for the decoder, $W^{(2)} \in \mathbb{R}^{D_x \times D^{(1)}}$ is a weight matrix, and $b^{(2)} \in \mathbb{R}^{D_x}$ is a bias vector.
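The decoder mapping can be reproduced directly from a trained network's parameters. A minimal sketch, assuming `z` is a single encoded column vector and the decoder uses the default logistic sigmoid transfer function; `DecoderWeights` and `DecoderBiases` are properties of the `Autoencoder` object:

```matlab
% Manually apply the decoder step: xhat = h2(W2*z + b2).
W2 = autoenc.DecoderWeights;    % Dx-by-D1 weight matrix
b2 = autoenc.DecoderBiases;     % Dx-by-1 bias vector
h2 = @(t) 1./(1 + exp(-t));     % logistic sigmoid (assumed decoder transfer)
xhat = h2(W2*z + b2);           % estimate of the original input x
```

If the autoencoder was trained with a different decoder transfer function, substitute it for `h2` accordingly.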