
Pretrained Convolutional Neural Networks

You can take a pretrained image classification network that has already learned to extract powerful and informative features from natural images and use it as a starting point to learn a new task. The pretrained networks are trained on more than a million images and can classify images into 1000 object categories, such as keyboard, coffee mug, pencil, and many animals. The training images are a subset of the ImageNet database [1], which is used in ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [2]. Using a pretrained network with transfer learning is typically much faster and easier than training a network from scratch.

You can use previously trained networks for the following tasks:

Classification

Apply pretrained networks directly to classification problems. To classify a new image, use classify. For an example showing how to use a pretrained network for classification, see Classify Image Using GoogLeNet.

Feature Extraction

Use a pretrained network as a feature extractor by using the layer activations as features. You can use these activations as features to train another machine learning model, such as a support vector machine (SVM). For more information, see Feature Extraction. For an example, see Feature Extraction Using AlexNet.

Transfer Learning

Take layers from a network trained on a large data set and fine-tune on a new data set. For more information, see Transfer Learning. For a simple example, see Get Started with Transfer Learning. To try more pretrained networks, see Train Deep Learning Network to Classify New Images.
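For example, a minimal classification sketch (assuming the GoogLeNet support package is installed; peppers.png is a sample image that ships with MATLAB):

```matlab
% Load a pretrained network (downloads the support package on first use).
net = googlenet;

% Resize the image to the network's expected input size before classifying.
inputSize = net.Layers(1).InputSize;
I = imread('peppers.png');
I = imresize(I, inputSize(1:2));

% classify returns the predicted class label for the image.
label = classify(net, I);
```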

Load Pretrained Networks

Use functions such as googlenet to load pretrained networks. If the required support package is not installed, the function provides a link to download it from the Add-On Explorer. For a list of all currently available downloads, see MathWorks Deep Learning Toolbox Team. The following table lists the available pretrained networks and some of their properties. The network depth is defined as the largest number of sequential convolutional or fully connected layers on a path from the input layer to the output layer.
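For example, a short sketch of loading a network and checking its expected input size (assuming the SqueezeNet support package is installed):

```matlab
% Load a pretrained network; the function links to the Add-On Explorer
% download if the model is not already installed.
net = squeezenet;

% Inspect the layer array and the image input size the network expects.
net.Layers
inputSize = net.Layers(1).InputSize
```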

Network             Depth   Size      Parameters (Millions)   Image Input Size
alexnet             8       227 MB    61.0                    227-by-227
vgg16               16      515 MB    138                     224-by-224
vgg19               19      535 MB    144                     224-by-224
squeezenet          18      4.6 MB    1.24                    227-by-227
googlenet           22      27 MB     7.0                     224-by-224
inceptionv3         48      89 MB     23.9                    299-by-299
densenet201         201     77 MB     20.0                    224-by-224
resnet18            18      44 MB     11.7                    224-by-224
resnet50            50      96 MB     25.6                    224-by-224
resnet101           101     167 MB    44.6                    224-by-224
inceptionresnetv2   164     209 MB    55.9                    299-by-299

Compare Pretrained Networks

Pretrained networks have different characteristics that matter when choosing a network to apply to your problem. The most important characteristics are network accuracy, speed, and size. Choosing a network is generally a tradeoff between these characteristics.

Tip

To get started with transfer learning, try choosing one of the faster networks, such as SqueezeNet or GoogLeNet. You can then iterate quickly and try out different settings such as data preprocessing steps and training options. Once you have a feeling of which settings work well, try a more accurate network such as Inception-v3 or a ResNet and see if that improves your results.

Use the plot below to compare the ImageNet validation accuracy with the time required to make a prediction using the network. A good network has a high accuracy and is fast. The plot displays the classification accuracy versus the prediction time when using a modern GPU (an NVIDIA® TITAN Xp) and a mini-batch size of 64. The prediction time is measured relative to the fastest network. The area of each marker is proportional to the size of the network on disk.

A network is Pareto efficient if no other network is better on all the metrics being compared, in this case accuracy and prediction time. The set of all Pareto efficient networks is called the Pareto frontier. The plot connects the networks that are on the Pareto frontier in the plane of accuracy and prediction time. All networks except AlexNet, VGG-16, VGG-19, and DenseNet-201 are on the Pareto frontier.

Note

The plot below only shows an indication of the relative speeds of the different networks. The exact prediction and training iteration times depend on the hardware and mini-batch size that you use.

The classification accuracy on the ImageNet validation set is the most common way to measure the accuracy of networks trained on ImageNet. Networks that are accurate on ImageNet are also often accurate when you apply them to other natural image data sets using transfer learning or feature extraction. This generalization is possible because the networks have learned to extract powerful and informative features from natural images that generalize to other similar data sets. However, high accuracy on ImageNet does not always transfer directly to other tasks, so it is a good idea to try multiple networks.

If you want to perform prediction using constrained hardware or distribute networks over the Internet, then also consider the size of the network on disk and in memory.

Network Accuracy

There are multiple ways to calculate the classification accuracy on the ImageNet validation set and different sources use different methods. Sometimes an ensemble of multiple models is used and sometimes each image is evaluated multiple times using multiple crops. Sometimes the top-5 accuracy instead of the standard (top-1) accuracy is quoted. Because of these differences, it is often not possible to directly compare the accuracies from different sources. The accuracies of pretrained networks in Deep Learning Toolbox™ are standard (top-1) accuracies using a single model and single central image crop.

Feature Extraction

Feature extraction is an easy and fast way to use the power of deep learning without investing time and effort into training a full network. Because it only requires a single pass over the training images, it is especially useful if you do not have a GPU. You extract learned image features using a pretrained network, and then use those features to train a classifier, such as a support vector machine using fitcsvm.
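A minimal sketch of this workflow, assuming imdsTrain and imdsTest are imageDatastore objects whose images have already been resized to the network input size. fitcecoc is used here rather than fitcsvm because it builds a multiclass classifier from binary SVM learners:

```matlab
% Extract features from a deep layer of a pretrained network and
% train an SVM-based classifier on them.
net = alexnet;
layer = 'fc7';  % a deep fully connected layer of AlexNet

% 'OutputAs','rows' returns one feature row per image.
featuresTrain = activations(net, imdsTrain, layer, 'OutputAs', 'rows');
featuresTest  = activations(net, imdsTest,  layer, 'OutputAs', 'rows');

% Train a multiclass SVM on the extracted features and predict.
classifier = fitcecoc(featuresTrain, imdsTrain.Labels);
YPred = predict(classifier, featuresTest);
```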

Try feature extraction when your new data set is very small. Since you only train a simple classifier on the extracted features, training is fast. It is also unlikely that fine-tuning deeper layers of the network improves the accuracy since there is little data to learn from.

  • If your data is very similar to the original data, then the more specific features extracted deeper in the network are likely to be useful for the new task.

  • If your data is very different from the original data, then the features extracted deeper in the network might be less useful for your task. Try training the final classifier on more general features extracted from an earlier network layer. If the new data set is large, then you can also try training a network from scratch.

ResNets are often the best feature extractors [4], independently of their ImageNet accuracies. For an example showing how to use a pretrained network for feature extraction, see Feature Extraction Using AlexNet.

Transfer Learning

You can fine-tune deeper layers in the network by training the network on your new data set with the pretrained network as a starting point. Fine-tuning a network with transfer learning is often faster and easier than constructing and training a new network. The network has already learned a rich set of image features, but when you fine-tune the network it can learn features specific to your new data set. If you have a very large data set, then transfer learning might not be faster than training from scratch.

Tip

Fine-tuning a network often gives the highest accuracy. For very small data sets (fewer than about 20 images per class), try feature extraction.

Fine-tuning a network is slower and requires more effort than simple feature extraction, but since the network can learn to extract a different set of features, the final network is often more accurate. Fine-tuning usually works better than feature extraction as long as the new data set is not very small, because then the network has data to learn new features from. For examples showing how to perform transfer learning, see Transfer Learning with Deep Network Designer and Train Deep Learning Network to Classify New Images.
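A minimal fine-tuning sketch using AlexNet, assuming imdsTrain is an imageDatastore of labeled training images resized to 227-by-227. The layer indices and training options are illustrative, not prescriptive:

```matlab
% Start from a pretrained network and adapt it to new classes.
net = alexnet;
numClasses = numel(categories(imdsTrain.Labels));

% Replace the last fully connected layer and the classification
% layer so the network outputs the new classes.
layers = net.Layers;
layers(end-2) = fullyConnectedLayer(numClasses);
layers(end)   = classificationLayer;

% A small initial learning rate keeps fine-tuning close to the
% pretrained weights.
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...
    'MaxEpochs', 6, ...
    'MiniBatchSize', 32);

netTransfer = trainNetwork(imdsTrain, layers, options);
```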

Import and Export Networks

You can import networks and network architectures from TensorFlow®-Keras, Caffe, and the ONNX™ (Open Neural Network Exchange) model format. You can also export trained networks to the ONNX model format.

Import from Keras

Import pretrained networks from TensorFlow-Keras by using importKerasNetwork. You can import the network and weights either from the same HDF5 (.h5) file or separate HDF5 and JSON (.json) files. For more information, see importKerasNetwork.

Import network architectures from TensorFlow-Keras by using importKerasLayers. You can import the network architecture, either with or without weights. You can import the network architecture and weights either from the same HDF5 (.h5) file or separate HDF5 and JSON (.json) files. For more information, see importKerasLayers.
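For example (the file names are placeholders for your own Keras model files):

```matlab
% Import a full pretrained Keras model (architecture plus weights)
% from a single HDF5 file.
net = importKerasNetwork('digitsNet.h5');

% Import only the architecture from a JSON file, without weights.
layers = importKerasLayers('model.json', 'ImportWeights', false);
```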

Import from Caffe

Import pretrained networks from Caffe by using the importCaffeNetwork function. There are many pretrained networks available in Caffe Model Zoo [3]. Download the desired .prototxt and .caffemodel files and use importCaffeNetwork to import the pretrained network into MATLAB®. For more information, see importCaffeNetwork.

You can import network architectures of Caffe networks. Download the desired .prototxt file and use importCaffeLayers to import the network layers into MATLAB. For more information, see importCaffeLayers.
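For example (the file names are placeholders for files downloaded from the Caffe Model Zoo):

```matlab
% Import a pretrained Caffe network from its definition and weights.
protofile = 'deploy.prototxt';
datafile  = 'bvlc_reference_caffenet.caffemodel';
net = importCaffeNetwork(protofile, datafile);

% Import only the layer architecture from the .prototxt file.
layers = importCaffeLayers(protofile);
```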

Export to and Import from ONNX

Export trained networks to the ONNX model format by using the exportONNXNetwork function. You can then import the ONNX model to other deep learning frameworks, such as TensorFlow, that support ONNX model import. For more information, see exportONNXNetwork.

Import pretrained networks from ONNX using importONNXNetwork and import network architectures with or without weights using importONNXLayers.
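For example, assuming trainedNet is a trained network in the workspace and the file name is a placeholder:

```matlab
% Export a trained network to an ONNX model file.
exportONNXNetwork(trainedNet, 'myNetwork.onnx');

% Import an ONNX model; specify the output layer type to append
% for classification problems.
net = importONNXNetwork('myNetwork.onnx', ...
    'OutputLayerType', 'classification');
```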

References

[1] ImageNet. http://www.image-net.org

[2] Russakovsky, O., Deng, J., Su, H., et al. “ImageNet Large Scale Visual Recognition Challenge.” International Journal of Computer Vision (IJCV). Vol. 115, Issue 3, 2015, pp. 211–252.

[3] Caffe Model Zoo. http://caffe.berkeleyvision.org/model_zoo.html

[4] Kornblith, Simon, Jonathon Shlens, and Quoc V. Le. "Do Better ImageNet Models Transfer Better?." arXiv preprint arXiv:1805.08974 (2018).
