Neural Net Pattern Recognition: Classify data by training a two-layer feed-forward network
Train an autoencoder
Train a softmax layer for classification
Decode encoded data
Encode input data
Reconstruct the inputs using a trained autoencoder
Stack encoders from several autoencoders together
Convert Autoencoder object into network object
Train neural network
Bayesian regularization backpropagation
Scaled conjugate gradient backpropagation
Mean squared normalized error performance function
Receiver operating characteristic
Plot classification confusion matrix
Plot error histogram
Plot network performance
Plot linear regression
Plot receiver operating characteristic
Plot training state values
Neural network performance
Generate MATLAB function for simulating neural network
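The autoencoder-related functions listed above combine into a single deep-network workflow: train autoencoders layer by layer, train a softmax classifier on the deepest features, stack everything into one network, and fine-tune. The sketch below assumes the `wine_dataset` sample data shipped with the toolbox; the hidden sizes and epoch counts are illustrative assumptions, not recommendations.

```matlab
% Load a sample classification dataset (inputs x, targets t).
[x, t] = wine_dataset;

% Train the first autoencoder on the raw inputs, then encode.
autoenc1 = trainAutoencoder(x, 10, 'MaxEpochs', 400);
feat1 = encode(autoenc1, x);

% Train a second autoencoder on the first autoencoder's features.
autoenc2 = trainAutoencoder(feat1, 5, 'MaxEpochs', 100);
feat2 = encode(autoenc2, feat1);

% Train a softmax layer for classification on the deepest features.
softnet = trainSoftmaxLayer(feat2, t);

% Stack the encoders and the softmax layer into one deep network,
% then fine-tune the whole network with backpropagation.
deepnet = stack(autoenc1, autoenc2, softnet);
deepnet = train(deepnet, x, t);

% Assess the result: confusion plot and network performance.
y = deepnet(x);
plotconfusion(t, y);
perf = perform(deepnet, t, y);
```

Greedy layer-wise pretraining followed by fine-tuning in this way is the standard stacked-autoencoder recipe; `stack` returns a network object, so the usual `train`, `perform`, and plotting functions apply to it directly.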
Use parallel and distributed computing to speed up neural network training and simulation, and to handle large datasets.
Save intermediate results to protect the value of long training runs.
Preprocess inputs and targets for more efficient training.
Learn how to manually configure the network before training.
Use functions to divide the data into training, validation, and test sets.
Compare training algorithms on different problem types.
Learn methods to improve generalization and prevent overfitting.
Learn how to use error weighting when training neural networks.
Learn how to fit output elements with different ranges of values.
Learn how to use autoencoders to construct and train a deep network for image classification.
Learn the primary steps in a neural network design process.
Learn the different levels of using Neural Network Toolbox functionality.
Workflow for designing a multilayer feedforward neural network for function fitting and pattern recognition.
Learn the architecture of a multilayer neural network.
Learn how the format of input data structures affects the simulation of networks.
Learn properties that define the basic features of a network.
Learn properties that define network details such as inputs, layers, outputs, targets, biases, and weights.
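The design workflow the topics above describe (create a network, configure it, divide the data into training, validation, and test sets, train, and assess performance) can be sketched as follows. This is a minimal example, not a definitive recipe: `simplefit_dataset` is a sample dataset shipped with the toolbox, the hidden-layer size of 10 is an arbitrary choice, and the division ratios are typical defaults.

```matlab
% Load a sample function-fitting dataset (inputs x, targets t).
[x, t] = simplefit_dataset;

% Create a two-layer feed-forward fitting network with 10 hidden
% neurons. Alternative training functions listed above, such as
% 'trainbr' (Bayesian regularization) or 'trainscg' (scaled
% conjugate gradient), can be passed as a second argument.
net = fitnet(10);

% Divide the data into training, validation, and test sets.
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

% Train the network, then simulate it on the inputs.
[net, tr] = train(net, x, t);
y = net(x);

% Assess performance (mean squared error by default) and plot
% the linear regression of outputs against targets.
perf = perform(net, t, y);
plotregression(t, y);
```

The validation set drives early stopping (one of the generalization methods mentioned above), while the test set is held out entirely so that the reported error reflects unseen data.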