
Create new deep networks for image classification and regression, including series, DAG, and LSTM networks; import networks from Caffe; or define your own layers

For image classification and regression problems, you can train a convolutional neural network (ConvNet, CNN) or a directed acyclic graph (DAG) network. For sequence and time series classification problems, you can train a long short-term memory (LSTM) network.

To create and train a new network, you can use the built-in layers, define your own layers, or import layers from Caffe models. After defining the network architecture, you must define the training parameters using the `trainingOptions` function. You can then train the network using the `trainNetwork` function. Use the trained network to predict class labels or numeric responses.
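
As an illustrative sketch of this workflow (not an exact reproduction of any shipped example), assuming the `digitTrain4DArrayData` example data set that ships with the toolbox:

```matlab
% Load example training images and labels (ships with the toolbox).
[XTrain,YTrain] = digitTrain4DArrayData;

% Define a small network architecture as an array of built-in layers.
layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,'Padding',1)
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

% Specify training parameters, then train and use the network.
options = trainingOptions('sgdm','MaxEpochs',4);
net = trainNetwork(XTrain,YTrain,layers,options);
YPred = classify(net,XTrain);
```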

You can train a convolutional neural network on a CPU, a GPU, multiple GPUs, or in parallel. Training on a GPU or in parallel requires Parallel Computing Toolbox™. To use a GPU, you must also have a CUDA®-enabled NVIDIA® GPU with compute capability 3.0 or higher. Specify the execution environment using the `trainingOptions` function.
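
For example, a minimal sketch of choosing the hardware (the solver shown is illustrative):

```matlab
% Select where training runs: 'cpu', 'gpu', 'multi-gpu', or 'parallel'.
% 'gpu', 'multi-gpu', and 'parallel' require Parallel Computing Toolbox.
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment','gpu');
```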

**Convolutional Neural Network**

| Function | Description |
| --- | --- |
| `alexnet` | Pretrained AlexNet convolutional neural network |
| `vgg16` | Pretrained VGG-16 convolutional neural network |
| `vgg19` | Pretrained VGG-19 convolutional neural network |
| `googlenet` | Pretrained GoogLeNet convolutional neural network |
| `inceptionv3` | Pretrained Inception-v3 convolutional neural network |
| `resnet50` | Pretrained ResNet-50 convolutional neural network |
| `resnet101` | Pretrained ResNet-101 convolutional neural network |
| `importCaffeLayers` | Import convolutional neural network layers from Caffe |
| `importCaffeNetwork` | Import pretrained convolutional neural network models from Caffe |
| `importKerasLayers` | Import series network or directed acyclic graph layers from Keras network |
| `importKerasNetwork` | Import pretrained Keras network and weights |
| `findPlaceholderLayers` | Find placeholder layers in a layer array or layer graph imported using `importKerasLayers` |
| `PlaceholderLayer` | Layer to replace an unsupported Keras layer |
| `imageInputLayer` | Image input layer |
| `sequenceInputLayer` | Sequence input layer |
| `convolution2dLayer` | 2-D convolutional layer |
| `transposedConv2dLayer` | Transposed 2-D convolution layer |
| `fullyConnectedLayer` | Fully connected layer |
| `lstmLayer` | Long short-term memory (LSTM) layer |
| `reluLayer` | Rectified linear unit (ReLU) layer |
| `leakyReluLayer` | Leaky rectified linear unit (ReLU) layer |
| `clippedReluLayer` | Clipped rectified linear unit (ReLU) layer |
| `batchNormalizationLayer` | Batch normalization layer |
| `crossChannelNormalizationLayer` | Channel-wise local response normalization layer |
| `dropoutLayer` | Dropout layer |
| `averagePooling2dLayer` | Average pooling layer |
| `maxPooling2dLayer` | Max pooling layer |
| `maxUnpooling2dLayer` | Max unpooling layer |
| `additionLayer` | Addition layer |
| `depthConcatenationLayer` | Depth concatenation layer |
| `softmaxLayer` | Softmax layer |
| `classificationLayer` | Classification output layer |
| `regressionLayer` | Regression output layer |
| `setLearnRateFactor` | Set learn rate factor of layer learnable parameter |
| `setL2Factor` | Set L2 regularization factor of layer learnable parameter |
| `getLearnRateFactor` | Get learn rate factor of layer learnable parameter |
| `getL2Factor` | Get L2 regularization factor of layer learnable parameter |
| `trainingOptions` | Options for training a neural network |
| `trainNetwork` | Train a neural network for deep learning |
| `SeriesNetwork` | Series network for deep learning |
| `DAGNetwork` | Directed acyclic graph (DAG) network for deep learning |
| `imageDataAugmenter` | Configure image data augmentation |
| `augmentedImageSource` | Generate batches of augmented image data |
| `layerGraph` | Graph of network layers for deep learning |
| `plot` | Plot neural network layer graph |
| `addLayers` | Add layers to a layer graph |
| `connectLayers` | Connect layers in a layer graph |
| `removeLayers` | Remove layers from a layer graph |
| `disconnectLayers` | Disconnect layers in a layer graph |
| `predict` | Predict responses using a trained deep learning neural network |
| `classify` | Classify data using a trained deep learning neural network |
| `predictAndUpdateState` | Predict responses using a trained recurrent neural network and update the network state |
| `classifyAndUpdateState` | Classify data using a trained recurrent neural network and update the network state |
| `resetState` | Reset the state of a recurrent neural network |

**Create Simple Deep Learning Network for Classification**

This example shows how to create and train a simple convolutional neural network for deep learning classification.

**Resume Training from a Checkpoint Network**

Learn how to save checkpoint networks while training a convolutional neural network and resume training from a previously saved network.
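
A minimal sketch of enabling checkpoints, assuming `'checkpoints'` is a writable folder you created:

```matlab
% Save a checkpoint network to the folder at the end of every epoch.
options = trainingOptions('sgdm', ...
    'MaxEpochs',20, ...
    'CheckpointPath','checkpoints');
```

To resume, load a saved checkpoint network from that folder and pass its `Layers` property to `trainNetwork` again.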

**Train a Convolutional Neural Network for Regression**

This example shows how to fit a regression model using convolutional neural networks to predict the angles of rotation of handwritten digits.

**Classify Sequence Data Using LSTM Networks**

This example shows how to classify sequence data using Long Short-Term Memory (LSTM) networks.
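
A sketch of a minimal LSTM classifier architecture; the input size, hidden-unit count, and class count are placeholders for your data:

```matlab
inputSize = 12;        % number of features per time step (placeholder)
numHiddenUnits = 100;  % placeholder
numClasses = 9;        % placeholder

% Map each input sequence to a single class label ('OutputMode','last').
layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits,'OutputMode','last')
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
```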

**Create and Train DAG Network for Deep Learning**

This example shows how to create and train a directed acyclic graph (DAG) network for deep learning.
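
A sketch of assembling a small DAG with a shortcut connection; the layer names are illustrative:

```matlab
% Main branch, defined as an ordinary layer array.
layers = [ ...
    imageInputLayer([28 28 1],'Name','input')
    convolution2dLayer(3,16,'Padding',1,'Name','conv1')
    reluLayer('Name','relu1')
    additionLayer(2,'Name','add')
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','output')];
lgraph = layerGraph(layers);

% Add a shortcut branch and wire it into the addition layer.
lgraph = addLayers(lgraph,convolution2dLayer(1,16,'Name','skip'));
lgraph = connectLayers(lgraph,'input','skip');
lgraph = connectLayers(lgraph,'skip','add/in2');
plot(lgraph)
```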

**Define New Deep Learning Layers**

Learn how to define new deep learning layers.

**Define a Layer with Learnable Parameters**

This example shows how to define a PReLU layer and use it in a convolutional neural network.
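
A sketch of the class skeleton such a layer uses; the class and property names are illustrative, and depending on your release a `backward` method may also be required:

```matlab
classdef examplePreluLayer < nnet.layer.Layer
    properties (Learnable)
        Alpha  % learnable scaling coefficient for negative inputs
    end
    methods
        function layer = examplePreluLayer(numChannels,name)
            layer.Name = name;
            layer.Description = 'PReLU with one slope per channel';
            layer.Alpha = rand([1 1 numChannels]);
        end
        function Z = predict(layer,X)
            % f(x) = x for x >= 0, alpha*x otherwise
            Z = max(X,0) + layer.Alpha .* min(X,0);
        end
    end
end
```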

**Define a Regression Output Layer**

This example shows how to define a regression output layer with mean absolute error (MAE) loss and use it in a convolutional neural network.

**Define a Classification Output Layer**

This example shows how to define a classification output layer with sum of squares error (SSE) loss and use it in a convolutional neural network.

Discover deep learning capabilities in MATLAB® using convolutional neural networks for classification and regression, including pretrained networks and transfer learning, and training on GPUs, CPUs, clusters, and clouds.

**Specify Layers of Convolutional Neural Network**

Learn about the layers of a convolutional neural network (ConvNet) and the order in which they appear in a ConvNet.

**Set Up Parameters and Train Convolutional Neural Network**

Learn how to set up training parameters for a convolutional neural network.

**Long Short-Term Memory Networks**

Learn about long short-term memory (LSTM) networks.

**Preprocess Images for Deep Learning**

Learn how to preprocess images using an image data augmenter and how to resize images for classification and prediction.
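
A sketch of an augmentation pipeline, assuming training images `XTrain` and labels `YTrain` are already in the workspace (the ranges are illustrative):

```matlab
% Random rotation (degrees) and translation (pixels) applied on the fly.
augmenter = imageDataAugmenter( ...
    'RandRotation',[-10 10], ...
    'RandXTranslation',[-3 3], ...
    'RandYTranslation',[-3 3]);

% Resize to the network input size and augment each mini-batch.
source = augmentedImageSource([28 28],XTrain,YTrain, ...
    'DataAugmentation',augmenter);
```

Pass `source` to `trainNetwork` in place of the raw image array.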
