Deep Learning Toolbox
Design, train, and analyze deep learning networks
Deep Learning Toolbox™ provides a framework for designing and implementing deep neural networks with algorithms, pretrained models, and apps. You can use convolutional neural networks (ConvNets, CNNs) and long short-term memory (LSTM) networks to perform classification and regression on image, time-series, and text data. You can build network architectures such as generative adversarial networks (GANs) and Siamese networks using automatic differentiation, custom training loops, and shared weights. With the Deep Network Designer app, you can design, analyze, and train networks graphically. The Experiment Manager app helps you manage multiple deep learning experiments, keep track of training parameters, analyze results, and compare code from different experiments. You can visualize layer activations and graphically monitor training progress.
You can exchange models with TensorFlow™ and PyTorch through the ONNX format and import models from TensorFlow-Keras and Caffe. The toolbox supports transfer learning with DarkNet-53, ResNet-50, NASNet, SqueezeNet, and many other pretrained models.
You can speed up training on a single- or multiple-GPU workstation (with Parallel Computing Toolbox™), or scale up to clusters and clouds, including NVIDIA® GPU Cloud and Amazon EC2® GPU instances (with MATLAB Parallel Server™).
Convolutional Neural Networks
Learn patterns in images to recognize objects, faces, and scenes. Construct and train convolutional neural networks (CNNs) to perform feature extraction and image recognition.
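A small image-classification CNN can be assembled from a layer array and trained with `trainNetwork`. This is a minimal sketch; `imds` is a placeholder for your own labeled image datastore, and the input size and class count are illustrative.

```matlab
% Minimal CNN sketch for 28-by-28 grayscale images with 10 classes.
% "imds" is a placeholder for your labeled imageDatastore.
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

options = trainingOptions('sgdm', ...
    'MaxEpochs',5, ...
    'Plots','training-progress');

net = trainNetwork(imds,layers,options);
```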
Long Short-Term Memory Networks
Learn long-term dependencies in sequence data including signal, audio, text, and other time-series data. Construct and train long short-term memory (LSTM) networks to perform classification and regression.
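A sequence classifier follows the same pattern, with `sequenceInputLayer` and `lstmLayer` in place of the convolutional layers. In this sketch, `XTrain` (a cell array of feature-by-time sequences) and `YTrain` (categorical labels) are placeholders, and the dimensions are illustrative.

```matlab
% LSTM sequence-classification sketch. XTrain and YTrain are
% placeholders for your sequence data and categorical labels.
numFeatures = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','last')  % last output for classification
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam','MaxEpochs',30);
net = trainNetwork(XTrain,YTrain,layers,options);
```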
Use various network structures including directed acyclic graph (DAG) and recurrent architectures to build your deep learning network. Build advanced network architectures such as generative adversarial networks (GANs) and Siamese networks using custom training loops, shared weights, and automatic differentiation.
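Custom training loops are built on `dlnetwork`, `dlarray`, and automatic differentiation with `dlgradient`. The sketch below shows one training iteration; `net`, `X`, `T`, `avgGrad`, `avgSqGrad`, and `iteration` are placeholders for state you maintain in your loop, and `modelLoss` is a hypothetical loss function.

```matlab
% One iteration of a custom training loop (sketch). "net" is a
% dlnetwork; X and T are dlarray mini-batches (placeholders).
[loss,gradients] = dlfeval(@modelLoss,net,X,T);
[net,avgGrad,avgSqGrad] = adamupdate(net,gradients, ...
    avgGrad,avgSqGrad,iteration);

% Loss function evaluated under dlfeval so dlgradient can trace it.
function [loss,gradients] = modelLoss(net,X,T)
    Y = forward(net,X);
    loss = crossentropy(Y,T);
    gradients = dlgradient(loss,net.Learnables);
end
```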
Design Deep Learning Networks
Create and train a deep network from scratch using the Deep Network Designer app. Import a pretrained model, visualize the network structure, edit layers, tune parameters, and train.
Analyze Deep Learning Networks
Analyze your network architecture to detect and debug errors, warnings, and layer compatibility issues before training. Visualize the network topology and view details such as learnable parameters and activations.
Manage Deep Learning Experiments
Manage multiple deep learning experiments with the Experiment Manager app. Keep track of training parameters, analyze results, and compare code from different experiments. Use visualization tools such as training plots and confusion matrices, sort and filter experiment results, and define custom metrics to evaluate trained models.
Access pretrained networks and use them as a starting point to learn a new task. Perform transfer learning to use the learned features in the network for a specific task.
Access pretrained networks from the latest research with a single line of code. Import pretrained models including DarkNet-53, ResNet-50, SqueezeNet, NASNet, and Inception-v3.
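A typical transfer-learning workflow loads a pretrained network and replaces its final layers for the new task. This sketch uses ResNet-50; `numClasses` is a placeholder, and the layer names shown are those of the pretrained model (verify them with `analyzeNetwork` for your release).

```matlab
% Transfer-learning sketch: adapt pretrained ResNet-50 to a new task.
% numClasses is a placeholder; layer names may vary by release.
net = resnet50;                       % requires the ResNet-50 support package
lgraph = layerGraph(net);

lgraph = replaceLayer(lgraph,'fc1000', ...
    fullyConnectedLayer(numClasses,'Name','new_fc'));
lgraph = replaceLayer(lgraph,'ClassificationLayer_fc1000', ...
    classificationLayer('Name','new_output'));

% Retrain lgraph on your own labeled data with trainNetwork.
```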
View training progress in every iteration with plots of various metrics. Plot validation metrics against training metrics to see if the network is overfitting.
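Validation monitoring and the training-progress plot are both configured through `trainingOptions`. In this sketch, `XVal` and `YVal` are placeholders for a held-out validation set.

```matlab
% Sketch: plot training and validation metrics during training.
% XVal and YVal are placeholders for your validation data.
options = trainingOptions('sgdm', ...
    'ValidationData',{XVal,YVal}, ...
    'ValidationFrequency',30, ...
    'Plots','training-progress', ...
    'Verbose',false);
```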
Extract activations corresponding to a layer, visualize learned features, and train a machine learning classifier using the activations. Use the Grad-CAM approach to understand why a deep learning network makes its classification decisions.
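The sketch below extracts activations as features for a conventional classifier and computes a Grad-CAM map for one prediction. Here `net`, `imdsTrain`, `img`, and the layer name `'pool5'` are placeholders; `fitcecoc` requires Statistics and Machine Learning Toolbox, and `gradCAM` requires R2021a or later.

```matlab
% Sketch: use layer activations as features, then explain a prediction
% with Grad-CAM. net, imdsTrain, img, and 'pool5' are placeholders.
features = activations(net,imdsTrain,'pool5','OutputAs','rows');
classifier = fitcecoc(features,imdsTrain.Labels);

label = classify(net,img);
map = gradCAM(net,img,label);        % R2021a or later
imshow(img); hold on
imagesc(map,'AlphaData',0.5); colormap jet
```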
Import and export ONNX models within MATLAB® for interoperability with other deep learning frameworks. ONNX enables models to be trained in one framework and transferred to another for inference. Use GPU Coder™ to generate optimized NVIDIA® CUDA® code and use MATLAB Coder™ to generate C++ code for the imported model.
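Exporting and importing through ONNX is a one-line operation in each direction once the ONNX converter support package is installed. This is a sketch; `net` is a placeholder for a trained network.

```matlab
% Sketch: round-trip a model through ONNX (requires the ONNX
% Model Converter support package). "net" is a placeholder.
exportONNXNetwork(net,'model.onnx');
importedNet = importONNXNetwork('model.onnx', ...
    'OutputLayerType','classification');
```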
Import models from Caffe Model Zoo into MATLAB for inference and transfer learning.
Speed up deep learning training and inference with high-performance NVIDIA GPUs. Perform training on a single workstation GPU or scale to multiple GPUs with DGX systems in data centers or on the cloud. You can use MATLAB with Parallel Computing Toolbox and most CUDA-enabled NVIDIA GPUs that have compute capability 3.0 or higher.
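Moving training from CPU to one or more GPUs is controlled by a single training option. This sketch assumes Parallel Computing Toolbox is installed for the `'multi-gpu'` and `'parallel'` environments.

```matlab
% Sketch: choose where training runs. 'multi-gpu' and 'parallel'
% require Parallel Computing Toolbox.
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment','multi-gpu');   % or 'gpu', 'parallel', 'cpu'
```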
Reduce deep learning training times with cloud instances. Use high-performance GPU instances for the best results.
Run deep learning training across multiple processors on multiple servers on a network using MATLAB Parallel Server.
Use GPU Coder to generate optimized CUDA code from MATLAB code for deep learning, embedded vision, and autonomous systems. Use MATLAB Coder to generate C++ code to deploy deep learning networks to Intel® Xeon® and ARM® Cortex®-A processors. Automate cross-compilation and deployment of generated code onto NVIDIA Jetson™ and DRIVE™ platforms, and Raspberry Pi™ boards.
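Code generation for a network starts from an entry-point function that loads the network and calls `predict`. In this sketch, `myPredict` is a hypothetical entry-point function, and the cuDNN target and input size are illustrative.

```matlab
% Sketch: generate CUDA MEX for a deep learning entry-point function
% with GPU Coder. "myPredict" is a hypothetical function that loads
% the network and calls predict on its input.
cfg = coder.gpuConfig('mex');
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
codegen -config cfg myPredict -args {ones(224,224,3,'single')}
```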
Deep Learning Quantization
Quantize your deep learning network to INT8 and analyze the tradeoff between performance and accuracy when quantizing the weights and biases of selected layers, using the Model Quantization Library support package.
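The quantization workflow calibrates dynamic ranges on representative data, then validates accuracy after quantization. This sketch uses `dlquantizer` from the Model Quantization Library support package; `net`, `calDS`, and `valDS` are placeholders for your network and datastores.

```matlab
% Sketch: INT8 quantization with dlquantizer (Model Quantization
% Library). net, calDS, and valDS are placeholders.
quantObj = dlquantizer(net);
calResults = calibrate(quantObj,calDS);   % collect dynamic ranges
valResults = validate(quantObj,valDS);    % check accuracy after quantization
```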
Train supervised shallow neural networks to model and control dynamic systems, classify noisy data, and predict future events.
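A shallow fitting network takes only a few lines. In this sketch, `x` (inputs) and `t` (targets) are placeholder matrices with one column per sample.

```matlab
% Shallow network sketch; x and t are placeholder data matrices
% (one column per sample).
net = feedforwardnet(10);   % one hidden layer with 10 neurons
net = train(net,x,t);
y = net(x);                 % evaluate the trained network
```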
Find relationships within data and automatically define classification schemes by letting the shallow network continually adjust itself to new inputs. Use self-organizing, unsupervised networks as well as competitive layers and self-organizing maps.
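Unsupervised clustering with a self-organizing map follows the same train-and-evaluate pattern. Here `x` is a placeholder matrix of input vectors, and the 8-by-8 map size is illustrative.

```matlab
% Self-organizing map sketch; x is a placeholder input matrix
% (one column per sample).
net = selforgmap([8 8]);    % 8-by-8 grid of neurons
net = train(net,x);
classes = net(x);           % cluster assignment for each sample
```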
Perform unsupervised feature transformation by extracting low-dimensional features from your data set using autoencoders. You can also use stacked autoencoders for supervised learning by training and stacking multiple encoders.
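Stacked autoencoders chain `trainAutoencoder` and `encode` calls, then add a softmax layer for supervised fine-tuning. In this sketch, `X` (features by samples) and `T` (target classes) are placeholders, and the hidden sizes are illustrative.

```matlab
% Stacked-autoencoder sketch; X and T are placeholders for features
% and targets, hidden sizes are illustrative.
autoenc1 = trainAutoencoder(X,100);
feat1 = encode(autoenc1,X);        % low-dimensional features

autoenc2 = trainAutoencoder(feat1,50);
feat2 = encode(autoenc2,feat1);

softnet = trainSoftmaxLayer(feat2,T);
stackedNet = stack(autoenc1,autoenc2,softnet);  % deep net for fine-tuning
```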
Experiment Manager App
Manage multiple deep learning experiments, keep track of training parameters, and analyze and compare results and code
Deep Network Designer App
Interactively train a network for image classification, generate MATLAB code for training, and access pretrained models
Custom Training Loops
Train networks with multiple inputs, multiple outputs, or 3-D CNN layers
Deep Learning Examples
Train image captioning networks using attention and train conditional GANs using data labels and attributes
Perform transfer learning with DarkNet-19 and DarkNet-53
Import networks with multiple inputs or multiple outputs using the ONNX Model Converter
Specify custom layer backward functions for custom training loops
With just a few lines of MATLAB code, you can apply deep learning techniques to your work whether you’re designing algorithms, preparing and labeling data, or generating code and deploying to embedded systems.