
Pretrained Convolutional Neural Networks

Fine-tuning a pretrained network with transfer learning is typically much faster and easier than training from scratch. You can use previously trained networks for the following purposes.

Classification: Apply pretrained networks directly to classification problems. For an example showing how to use a pretrained network for classification, see Classify an Image Using AlexNet.

Transfer Learning: Take layers from a network trained on a large data set and fine-tune them on a new data set. For an example showing how to use a pretrained network for transfer learning, see Transfer Learning and Fine-Tuning of Convolutional Neural Networks.

Feature Extraction: Use a pretrained network as a feature extractor by using the layer activations as features. You can use these activations to train another classifier, such as a support vector machine (SVM). For an example showing how to use a pretrained network for feature extraction, see Feature Extraction Using AlexNet.

Download Pretrained Networks

You can download and install pretrained networks to use for your problems. Use functions such as alexnet to get links to download pretrained networks from the Add-On Explorer. To see a list of the latest downloads, see MathWorks Neural Network Toolbox Team. To learn more about finding and installing add-ons, see Get Add-Ons (MATLAB).
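
For example, if the corresponding support package is not yet installed, calling the function returns an error message with a link to the download in the Add-On Explorer. A minimal sketch, assuming the AlexNet support package is installed:

net = alexnet;    % load the pretrained network (errors with a download link if the support package is missing)
net.Layers        % inspect the layer architecture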

AlexNet

Use the alexnet function to get a link to download a pretrained AlexNet model.

AlexNet has learned rich feature representations for a wide range of images. You can apply these learned features to a wide range of image classification problems using transfer learning and feature extraction. The AlexNet model is trained on more than a million images and can classify images into 1000 object categories (such as keyboard, coffee mug, pencil, and many animals). The training images are a subset of the ImageNet database [1], which is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [2]. AlexNet won ILSVRC 2012, achieving the highest classification performance. The network has 25 layers. There are 8 layers with learnable weights: 5 convolutional layers and 3 fully connected layers. For more information, see alexnet.
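
For example, the following sketch classifies a single image with pretrained AlexNet. It assumes the AlexNet support package and Image Processing Toolbox (for imresize) are installed; peppers.png is a sample image shipped with MATLAB.

net = alexnet;                        % load the pretrained AlexNet model
I = imread('peppers.png');            % read a sample image
I = imresize(I, [227 227]);           % AlexNet expects 227-by-227-by-3 input images
label = classify(net, I)              % predict the ImageNet class label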

VGG-16 and VGG-19

Use the vgg16 and vgg19 functions to get links to download pretrained VGG models.

You can use VGG-16 and VGG-19 for classification, transfer learning, and feature extraction. VGG-16 and VGG-19 are both trained on the same data set as AlexNet and achieved top results in ILSVRC 2014. VGG-16 has 41 layers. In VGG-16, there are 16 layers with learnable weights: 13 convolutional layers and 3 fully connected layers. VGG-19 has 47 layers. In VGG-19, there are 19 layers with learnable weights: 16 convolutional layers and 3 fully connected layers. In both networks, all convolutional layers have filters of size 3-by-3. For more information, see vgg16 and vgg19.
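
A minimal sketch that classifies an image with pretrained VGG-16, assuming the VGG-16 support package is installed. Note that the VGG networks expect 224-by-224-by-3 input images rather than the 227-by-227-by-3 inputs that AlexNet uses.

net = vgg16;                                      % load the pretrained VGG-16 model
I = imresize(imread('peppers.png'), [224 224]);   % resize to the 224-by-224-by-3 input size
label = classify(net, I)                          % predict the ImageNet class label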

importCaffeNetwork

You can import pretrained networks from Caffe [3] using the importCaffeNetwork function.

There are many pretrained networks available in Caffe Model Zoo [4]. Locate and download the desired .prototxt and .caffemodel files and use importCaffeNetwork to import the pretrained network into MATLAB®. For more information, see importCaffeNetwork.
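
A minimal sketch, where the file names are hypothetical placeholders for files downloaded from the Caffe Model Zoo:

protofile = 'deploy.prototxt';                   % hypothetical network definition file
datafile = 'weights.caffemodel';                 % hypothetical pretrained weights file
net = importCaffeNetwork(protofile, datafile);   % import the pretrained network into MATLAB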

importCaffeLayers

You can import network architectures from Caffe using importCaffeLayers.

You can import the network architectures of Caffe networks without importing the pretrained network weights. Locate and download the desired .prototxt file and use importCaffeLayers to import the network layers into MATLAB. For more information, see importCaffeLayers.
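
A minimal sketch, again with a hypothetical file name:

protofile = 'deploy.prototxt';          % hypothetical network definition file
layers = importCaffeLayers(protofile)   % import the layer architecture without pretrained weights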

Transfer Learning

Transfer learning is commonly used in deep learning applications. You can take a pretrained network and use it as a starting point to learn a new task. Fine-tuning a network with transfer learning is usually much faster and easier than constructing and training a new network from scratch. You can quickly transfer learned features to a new task using a smaller number of training images. The advantage of transfer learning is that the pretrained network has already learned a rich set of features that can be applied to a wide range of similar tasks. For example, you can take a network trained on millions of images and retrain it to classify new objects using only hundreds of images. If you have a very large data set, then transfer learning might not be faster than training from scratch. For an example showing how to do transfer learning, see Transfer Learning and Fine-Tuning of Convolutional Neural Networks.
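
The following sketch outlines transfer learning with AlexNet. The folder name, training options, and the choice to replace the last three layers are illustrative assumptions; the images in the datastore must match the 227-by-227-by-3 input size that AlexNet expects.

% Load training images labeled by their folder names (hypothetical folder)
imds = imageDatastore('myImages', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
numClasses = numel(categories(imds.Labels));

% Keep the pretrained layers and replace the final layers for the new task
net = alexnet;
layersTransfer = net.Layers(1:end-3);
layers = [layersTransfer
          fullyConnectedLayer(numClasses)   % new fully connected layer sized for the new classes
          softmaxLayer
          classificationLayer];

% Fine-tune the network on the new data set (illustrative options)
options = trainingOptions('sgdm', 'InitialLearnRate', 1e-4, 'MaxEpochs', 10);
netTransfer = trainNetwork(imds, layers, options);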

Feature Extraction

Feature extraction is an easy way to use the power of pretrained networks without investing time and effort into training. Feature extraction can be the fastest way to use deep learning. You extract learned features from a pretrained network and use those features to train a classifier, such as a support vector machine using fitcsvm (Statistics and Machine Learning Toolbox™). For example, if an SVM achieves more than 90% accuracy on your training and validation sets, then fine-tuning might not be worth the effort for the potential increase in accuracy. If you fine-tune on a small data set, you also risk overfitting to the training data. If the SVM cannot achieve good enough accuracy for your application, then fine-tuning is worth the effort to seek higher accuracy. For an example showing how to use a pretrained network for feature extraction, see Feature Extraction Using AlexNet.
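
The following sketch extracts features with AlexNet and trains an SVM on them. The folder name and the choice of the 'fc7' layer are illustrative assumptions; fitcsvm trains a binary classifier, so use fitcecoc instead if you have more than two classes. The images must match the 227-by-227-by-3 input size that AlexNet expects.

% Load labeled training images (hypothetical folder with two classes)
imdsTrain = imageDatastore('myTrainingImages', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');

% Extract activations from a deep fully connected layer as feature vectors
net = alexnet;
featuresTrain = activations(net, imdsTrain, 'fc7', 'OutputAs', 'rows');

% Train a support vector machine on the extracted features
svm = fitcsvm(featuresTrain, imdsTrain.Labels);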

References

[1] ImageNet. http://www.image-net.org

[2] Russakovsky, O., Deng, J., Su, H., et al. "ImageNet Large Scale Visual Recognition Challenge." International Journal of Computer Vision (IJCV). Vol. 115, Issue 3, 2015, pp. 211–252.

[3] Caffe. http://caffe.berkeleyvision.org/

[4] Caffe Model Zoo. http://caffe.berkeleyvision.org/model_zoo.html
