Fine-tuning a pretrained network with transfer learning is typically much faster and easier than training from scratch. You can use previously trained networks for the following purposes.
Apply pretrained networks directly to classification problems. For an example showing how to use a pretrained network for classification, see Classify an Image Using AlexNet; a minimal sketch also follows this list.
Take layers from a network trained on a large data set and fine-tune on a new data set. For an example showing how to use a pretrained network for transfer learning, see Transfer Learning and Fine-Tuning of Convolutional Neural Networks.
Use a pretrained network as a feature extractor by using the layer activations as features. You can use these activations as features to train another classifier, such as a support vector machine (SVM). For an example showing how to use a pretrained network for feature extraction, see Feature Extraction Using AlexNet.
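For instance, applying a pretrained network directly to classify a single image might look like the following sketch (assuming the AlexNet support package is installed; peppers.png is an image that ships with MATLAB):

    % Load the pretrained network and classify one image.
    net = alexnet;
    I = imread('peppers.png');
    I = imresize(I, [227 227]);   % AlexNet expects 227-by-227-by-3 RGB input
    label = classify(net, I)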
You can download and install pretrained networks to use for your problems. Use functions such as alexnet to get links to download pretrained networks from the Add-On Explorer. To see a list of the latest downloads, see the MathWorks Neural Network Toolbox Team on the Add-On Explorer. To learn more about finding and installing add-ons, see Get Add-Ons (MATLAB).
Use the alexnet function to get a link to download a pretrained AlexNet model.
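Once the support package is installed, loading the network is a one-line call (a minimal sketch):

    % Load the pretrained AlexNet model and inspect its architecture.
    net = alexnet;
    net.Layers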
AlexNet has learned rich feature representations for a wide range of images. You can apply this learning to a wide range of image classification problems using transfer learning and feature extraction. The AlexNet model is trained on more than a million images and can classify images into 1000 object categories (such as keyboard, coffee mug, pencil, and many animals). The training images are a subset of the ImageNet database, which is used in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). AlexNet won ILSVRC 2012, achieving the highest classification performance.
The network has 25 layers. There are 8 layers with learnable weights: 5 convolutional layers and 3 fully connected layers. For more information, see alexnet.
Use the vgg16 and vgg19 functions to get links to download pretrained VGG models.
You can use VGG-16 and VGG-19 for classification, transfer learning, and feature extraction. VGG-16 and VGG-19 are both trained on the same data set as AlexNet and were among the top-performing networks in ILSVRC 2014. VGG-16 has 41 layers. In VGG-16, there are 16 layers with learnable weights: 13 convolutional layers and 3 fully connected layers. VGG-19 has 47 layers. In VGG-19, there are 19 layers with learnable weights: 16 convolutional layers and 3 fully connected layers. In both networks, all convolutional layers have filters of size 3-by-3. For more information, see vgg16 and vgg19.
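Loading a VGG network follows the same pattern as AlexNet (a minimal sketch, assuming the VGG-16 support package is installed):

    % Load the pretrained VGG-16 model; vgg19 works the same way.
    net = vgg16;
    net.Layers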
You can import pretrained networks from Caffe using the importCaffeNetwork function. There are many pretrained networks available in the Caffe Model Zoo. Locate and download the desired .prototxt and .caffemodel files and use importCaffeNetwork to import the pretrained network into MATLAB®. For more information, see importCaffeNetwork.
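A minimal sketch, assuming you have downloaded a model's definition and weights from the Caffe Model Zoo (the file names below are hypothetical):

    % Import a pretrained Caffe network into MATLAB.
    protofile = 'deploy.prototxt';     % network definition (hypothetical name)
    datafile  = 'weights.caffemodel';  % learned weights (hypothetical name)
    net = importCaffeNetwork(protofile, datafile);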
You can import network architectures from Caffe using the importCaffeLayers function. You can import the network architectures of Caffe networks without importing the pretrained network weights. Locate and download the desired .prototxt file and use importCaffeLayers to import the network layers into MATLAB. For more information, see importCaffeLayers.
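A minimal sketch, again with a hypothetical file name:

    % Import only the layer architecture, without pretrained weights.
    layers = importCaffeLayers('deploy.prototxt');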
Transfer learning is commonly used in deep learning applications. You can take a pretrained network and use it as a starting point to learn a new task. Fine-tuning a network with transfer learning is much faster and easier than constructing and training a new network. You can quickly transfer learned features to a new task using a smaller number of training images. The advantage of transfer learning is that the pretrained network has already learned a rich set of features that can be applied to a wide range of similar tasks. For example, you can take a network trained on millions of images and retrain it to classify new objects using only hundreds of images. If you have a very large data set, however, transfer learning might not be faster than training from scratch. For an example showing how to do transfer learning, see Transfer Learning and Fine-Tuning of Convolutional Neural Networks.
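The typical workflow replaces the final layers of the pretrained network with new layers sized for your task, then retrains. A minimal sketch, assuming AlexNet is installed and imdsTrain is an imageDatastore of your labeled training images (a hypothetical variable), with images already resized to the network's 227-by-227-by-3 input size:

    % Replace the last three layers of AlexNet with layers for a new task.
    net = alexnet;
    layersTransfer = net.Layers(1:end-3);        % keep the learned features
    numClasses = numel(categories(imdsTrain.Labels));
    layers = [
        layersTransfer
        fullyConnectedLayer(numClasses)
        softmaxLayer
        classificationLayer];
    % Use a small learning rate so the transferred features change slowly.
    options = trainingOptions('sgdm', ...
        'InitialLearnRate',1e-4, ...
        'MaxEpochs',10);
    netTransfer = trainNetwork(imdsTrain, layers, options);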
Feature extraction is an easy way to use the power of pretrained networks without investing time and effort into training. Feature extraction can be the fastest way to use deep learning. You extract learned features from a pretrained network and use those features to train a classifier, such as a support vector machine using fitcsvm (Statistics and Machine Learning Toolbox™). For example, if an SVM achieves greater than 90% accuracy on your training and validation sets, then fine-tuning might not be worth the effort to increase accuracy. If you perform fine-tuning on a small data set, you also risk overfitting to the training data. If the SVM cannot achieve good enough accuracy for your application, then fine-tuning is worth the effort to seek higher accuracy. For an example showing how to use a pretrained network for feature extraction, see Feature Extraction Using AlexNet.
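A minimal sketch of this workflow, assuming AlexNet is installed, a two-class problem, and an imageDatastore imdsTrain of training images (a hypothetical variable) already resized to 227-by-227-by-3:

    % Extract activations from a deep layer and train an SVM on them.
    net = alexnet;
    featureLayer = 'fc7';                        % a deep fully connected layer
    featuresTrain = activations(net, imdsTrain, featureLayer, ...
        'OutputAs','rows');
    labelsTrain = imdsTrain.Labels;
    % Train a binary SVM classifier on the extracted features
    % (requires Statistics and Machine Learning Toolbox).
    svm = fitcsvm(featuresTrain, labelsTrain);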
ImageNet. http://www.image-net.org
Russakovsky, O., J. Deng, H. Su, et al. "ImageNet Large Scale Visual Recognition Challenge." International Journal of Computer Vision (IJCV). Vol. 115, Issue 3, 2015, pp. 211–252.
Caffe. http://caffe.berkeleyvision.org/
Caffe Model Zoo. http://caffe.berkeleyvision.org/model_zoo.html