Deep Learning Training from Scratch

Create new deep networks for classification and regression, including series, DAG, and LSTM networks, import from Caffe, or define your own layers

Create new deep networks for image classification and regression, including series, directed acyclic graph (DAG), and long short-term memory (LSTM) networks. To create and train a new network, you can use the built-in layers, define your own layers, or import layers from Caffe models. After defining the network layers, define the training parameters using the trainingOptions function. You can then train the network using the trainNetwork function. Use the trained network to predict class labels or numeric responses.

You can train a convolutional neural network on a CPU, on a single GPU, on multiple GPUs, or in parallel. Training on a GPU or in parallel requires the Parallel Computing Toolbox™. Using a GPU requires a CUDA®-enabled NVIDIA® GPU with compute capability 3.0 or higher. Specify the execution environment using the trainingOptions function.
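The basic workflow looks like the following sketch, which assumes the sample digit images that ship with the toolbox (digitTrain4DArrayData); the layer sizes and training options shown are illustrative choices, not requirements.

% Load a small sample training set (28-by-28 grayscale digit images).
[XTrain,YTrain] = digitTrain4DArrayData;

% Define a simple series network layer by layer.
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,'Padding',1)
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

% Specify training parameters, including where to train
% ('auto' picks a supported GPU if one is available).
options = trainingOptions('sgdm', ...
    'MaxEpochs',4, ...
    'ExecutionEnvironment','auto', ...
    'Plots','training-progress');

% Train the network, then use it to predict class labels.
net = trainNetwork(XTrain,YTrain,layers,options);
YPred = classify(net,XTrain);

% Accuracy on the training set (for a real workflow, use held-out test data).
accuracy = mean(YPred == YTrain)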

Functions

trainingOptions - Options for training deep learning neural network
trainNetwork - Train neural network for deep learning
SeriesNetwork - Series network for deep learning
DAGNetwork - Directed acyclic graph (DAG) network for deep learning
imageDataAugmenter - Configure image data augmentation
augmentedImageDatastore - Transform batches to augment image data
imageInputLayer - Image input layer
sequenceInputLayer - Sequence input layer
convolution2dLayer - 2-D convolutional layer
fullyConnectedLayer - Fully connected layer
lstmLayer - Long short-term memory (LSTM) layer
bilstmLayer - Bidirectional long short-term memory (BiLSTM) layer
reluLayer - Rectified Linear Unit (ReLU) layer
leakyReluLayer - Leaky Rectified Linear Unit (ReLU) layer
clippedReluLayer - Clipped Rectified Linear Unit (ReLU) layer
batchNormalizationLayer - Batch normalization layer
crossChannelNormalizationLayer - Channel-wise local response normalization layer
dropoutLayer - Dropout layer
averagePooling2dLayer - Average pooling layer
maxPooling2dLayer - Max pooling layer
maxUnpooling2dLayer - Max unpooling layer
additionLayer - Addition layer
depthConcatenationLayer - Depth concatenation layer
softmaxLayer - Softmax layer
transposedConv2dLayer - Create a transposed 2-D convolution layer
classificationLayer - Create classification output layer
regressionLayer - Create a regression output layer
setLearnRateFactor - Set learn rate factor of layer learnable parameter
setL2Factor - Set L2 regularization factor of layer learnable parameter
getLearnRateFactor - Get learn rate factor of layer learnable parameter
getL2Factor - Get L2 regularization factor of layer learnable parameter
checkLayer - Check validity of custom layer
importCaffeLayers - Import convolutional neural network layers from Caffe
importCaffeNetwork - Import pretrained convolutional neural network models from Caffe
importKerasLayers - Import series network or directed acyclic graph layers from Keras network
importKerasNetwork - Import a pretrained Keras network and weights
findPlaceholderLayers - Find placeholder layers in Layer array or LayerGraph imported using importKerasLayers
replaceLayer - Replace layer in layer graph
PlaceholderLayer - Layer to replace an unsupported Keras layer
exportONNXNetwork - Export network to ONNX model format
layerGraph - Graph of network layers for deep learning
plot - Plot neural network layer graph
addLayers - Add layers to layer graph
connectLayers - Connect layers in layer graph
removeLayers - Remove layers from layer graph
disconnectLayers - Disconnect layers in layer graph
predict - Predict responses using a trained deep learning neural network
classify - Classify data using a trained deep learning neural network
predictAndUpdateState - Predict responses using a trained recurrent neural network and update the network state
classifyAndUpdateState - Classify data using a trained recurrent neural network and update the network state
resetState - Reset the state of a recurrent neural network
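As a rough sketch of how the layer graph functions above work together, the following builds a small network with a shortcut connection; the layer names ('conv_1', 'add', and so on) and the layer sizes are arbitrary choices for this illustration.

% Main branch of the network; each layer is named so it can be wired up later.
layers = [
    imageInputLayer([28 28 1],'Name','input')
    convolution2dLayer(3,16,'Padding',1,'Name','conv_1')
    reluLayer('Name','relu_1')
    convolution2dLayer(3,16,'Padding',1,'Name','conv_2')
    additionLayer(2,'Name','add')          % expects two inputs
    reluLayer('Name','relu_2')
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','output')];

% layerGraph connects the array sequentially; conv_2 feeds the first
% input of the addition layer.
lgraph = layerGraph(layers);

% Add a skip connection from relu_1 to the second input of the addition layer.
lgraph = connectLayers(lgraph,'relu_1','add/in2');

% A separate branch could also be attached with addLayers and then wired
% in with connectLayers.
plot(lgraph)   % visualize the connections before training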

Classes

MiniBatchable - Add mini-batch support to datastore
BackgroundDispatchable - Add prefetch reading support to datastore
PartitionableByIndex - Add parallelization support to datastore
Shuffleable - Add shuffling support to datastore
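A minimal sketch of how these mixins are typically combined into a custom mini-batch datastore for in-memory sequence data; the class name MySequenceDatastore, its properties, and the default mini-batch size are illustrative, not part of the toolbox.

classdef MySequenceDatastore < matlab.io.Datastore & ...
        matlab.io.datastore.MiniBatchable & ...
        matlab.io.datastore.Shuffleable

    properties
        Predictors          % cell array of sequences (illustrative)
        Responses           % categorical labels (illustrative)
        MiniBatchSize       % required by MiniBatchable
        CurrentIndex = 1
    end

    properties (SetAccess = protected)
        NumObservations     % required by MiniBatchable
    end

    methods
        function ds = MySequenceDatastore(predictors,responses)
            ds.Predictors = predictors;
            ds.Responses = responses;
            ds.NumObservations = numel(predictors);
            ds.MiniBatchSize = 128;
        end

        function tf = hasdata(ds)
            tf = ds.CurrentIndex <= ds.NumObservations;
        end

        function [data,info] = read(ds)
            % Return up to MiniBatchSize observations as a table:
            % first variable predictors, second variable responses.
            idx = ds.CurrentIndex : ...
                min(ds.CurrentIndex + ds.MiniBatchSize - 1, ds.NumObservations);
            data = table(ds.Predictors(idx)', ds.Responses(idx)', ...
                'VariableNames',{'predictors','responses'});
            info = struct();
            ds.CurrentIndex = idx(end) + 1;
        end

        function reset(ds)
            ds.CurrentIndex = 1;
        end

        function dsNew = shuffle(ds)
            % Required by Shuffleable: return a shuffled copy.
            dsNew = copy(ds);
            perm = randperm(ds.NumObservations);
            dsNew.Predictors = ds.Predictors(perm);
            dsNew.Responses = ds.Responses(perm);
        end
    end

    methods (Hidden = true)
        function frac = progress(ds)
            frac = (ds.CurrentIndex - 1) / ds.NumObservations;
        end
    end
end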

Examples and How To

Create New Deep Network

Create Simple Deep Learning Network for Classification

This example shows how to create and train a simple convolutional neural network for deep learning classification.

Resume Training from a Checkpoint Network

Learn how to save checkpoint networks while training a convolutional neural network and resume training from a previously saved network.

Train Convolutional Neural Network for Regression

This example shows how to fit a regression model using convolutional neural networks to predict the angles of rotation of handwritten digits.

Sequence Classification Using Deep Learning

This example shows how to classify sequence data using a long short-term memory (LSTM) network.
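For instance, a sequence classification network is usually a short stack of layers around lstmLayer; the sizes below (12 features per time step, 100 hidden units, 9 classes) are placeholder values for illustration.

inputSize = 12;        % number of features per time step (illustrative)
numHiddenUnits = 100;
numClasses = 9;

layers = [
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits,'OutputMode','last')   % one label per sequence
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam','MaxEpochs',30);
% net = trainNetwork(XTrain,YTrain,layers,options);
% where XTrain is a cell array of sequences and YTrain a categorical vector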

Create and Train DAG Network for Deep Learning

This example shows how to create and train a directed acyclic graph (DAG) network for deep learning.

Train Residual Network on CIFAR-10

This example shows how to create a deep learning neural network with residual connections and train it on CIFAR-10 data.

Define Custom Layers

Define Custom Deep Learning Layers

Learn how to define custom deep learning layers.

Define a Custom Deep Learning Layer with Learnable Parameters

This example shows how to define a PReLU layer and use it in a convolutional neural network.
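The following is a hedged sketch of the shape such a layer can take: a simplified PReLU with a single learnable scalar parameter rather than one parameter per channel. The class name, initial value, and method bodies are illustrative.

classdef examplePreluLayer < nnet.layer.Layer

    properties (Learnable)
        Alpha   % learnable slope applied to negative inputs
    end

    methods
        function layer = examplePreluLayer(name)
            layer.Name = name;
            layer.Description = "Simplified PReLU (illustrative)";
            layer.Alpha = 0.25;   % initial value of the learnable parameter
        end

        function Z = predict(layer,X)
            % Forward pass: X where X >= 0, Alpha*X where X < 0.
            Z = max(X,0) + layer.Alpha .* min(X,0);
        end

        function [dLdX,dLdAlpha] = backward(layer,X,~,dLdZ,~)
            % Backward pass: gradients of the loss with respect to the
            % input and to the learnable parameter Alpha.
            dLdX = dLdZ .* ((X >= 0) + layer.Alpha .* (X < 0));
            dLdAlpha = sum(dLdZ(:) .* min(X(:),0));
        end
    end
end

The layer can then be dropped into a layer array, for example examplePreluLayer('prelu1') in place of a reluLayer.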

Define a Custom Regression Output Layer

This example shows how to define a custom regression output layer with mean absolute error (MAE) loss and use it in a convolutional neural network.

Define a Custom Classification Output Layer

This example shows how to define a custom classification output layer with sum of squares error (SSE) loss and use it in a convolutional neural network.

Check Custom Layer Validity

Learn how to check the validity of custom deep learning layers.

Concepts

Deep Learning in MATLAB

Discover deep learning capabilities in MATLAB® using convolutional neural networks for classification and regression, including pretrained networks and transfer learning, and training on GPUs, CPUs, clusters, and clouds.

List of Deep Learning Layers

A list of built-in deep learning layers in Neural Network Toolbox™.

Specify Layers of Convolutional Neural Network

Learn about the layers of a convolutional neural network (ConvNet), and the order in which they appear in a ConvNet.

Set Up Parameters and Train Convolutional Neural Network

Learn how to set up training parameters for a convolutional neural network.

Long Short-Term Memory Networks

Learn about long short-term memory (LSTM) networks.

Preprocess Images for Deep Learning

Learn how to resize images for training, prediction and classification, and how to preprocess images using data augmentation and mini-batch datastores.
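As a brief sketch, augmentation is typically configured with imageDataAugmenter and applied through augmentedImageDatastore; the ranges, the output size, and the imds variable below are illustrative.

% Configure random reflections, rotations, and translations.
augmenter = imageDataAugmenter( ...
    'RandXReflection',true, ...
    'RandRotation',[-20 20], ...
    'RandXTranslation',[-3 3], ...
    'RandYTranslation',[-3 3]);

% Wrap an existing image datastore (imds) so that images are resized to
% 224-by-224 and augmented on the fly during training.
augimds = augmentedImageDatastore([224 224],imds, ...
    'DataAugmentation',augmenter);

% net = trainNetwork(augimds,layers,options);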

Develop Custom Mini-Batch Datastore

Create a fully customized mini-batch datastore that contains training and test data sets for network training, prediction, and classification.
