Deep Learning Toolbox
Create, analyze, and train deep learning networks
Deep Learning Toolbox™ (formerly Neural Network Toolbox™) provides a framework for designing and implementing deep neural networks with algorithms, pretrained models, and apps. You can use convolutional neural networks (ConvNets, CNNs) and long short-term memory (LSTM) networks to perform classification and regression on image, time-series, and text data. You can build advanced network architectures such as generative adversarial networks (GANs) and Siamese networks using custom training loops, shared weights, and automatic differentiation. Apps and plots help you visualize activations, edit and analyze network architectures, and monitor training progress.
You can exchange models with TensorFlow™ and PyTorch through the ONNX format and import models from TensorFlow-Keras and Caffe. The toolbox supports transfer learning with a library of pretrained models (including NASNet, SqueezeNet, Inception-v3, and ResNet-101).
You can speed up training on a single- or multiple-GPU workstation (with Parallel Computing Toolbox™), or scale up to clusters and clouds, including NVIDIA® GPU Cloud and Amazon EC2® GPU instances (with MATLAB Parallel Server™).
Convolutional Neural Networks
Learn patterns in images to recognize objects, faces, and scenes. Construct and train convolutional neural networks (CNNs) to perform feature extraction and image recognition.
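As a minimal sketch, a small CNN for classifying 28-by-28 grayscale images can be defined and trained as follows (layer sizes, class count, and the datastore `imds` are illustrative assumptions):

```matlab
% Define a small CNN for 28x28 grayscale images (10 classes; sizes are illustrative)
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

options = trainingOptions('sgdm','MaxEpochs',4,'Plots','training-progress');
% imds is assumed to be an imageDatastore of labeled training images
net = trainNetwork(imds,layers,options);
```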
Long Short-Term Memory Networks
Learn long-term dependencies in sequence data including signal, audio, text, and other time-series data. Construct and train long short-term memory (LSTM) networks to perform classification and regression.
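A sequence-to-label LSTM classifier can be sketched in a few lines; the feature dimension, class count, and training data `XTrain`/`YTrain` below are illustrative assumptions:

```matlab
% Sequence-to-label classification with an LSTM
% XTrain: cell array of sequences, YTrain: categorical labels (assumed to exist)
layers = [
    sequenceInputLayer(12)                 % 12 features per time step (illustrative)
    lstmLayer(100,'OutputMode','last')     % return only the final hidden state
    fullyConnectedLayer(9)                 % 9 classes (illustrative)
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam','MaxEpochs',30);
net = trainNetwork(XTrain,YTrain,layers,options);
```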
Use various network structures including directed acyclic graph (DAG) and recurrent architectures to build your deep learning network. Build advanced network architectures such as generative adversarial networks (GANs) and Siamese networks using custom training loops, shared weights, and automatic differentiation.
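For example, a DAG network with a skip (residual-style) connection can be assembled from a layer graph; the layer names below are illustrative:

```matlab
% Build a simple DAG network with a skip connection around conv1/relu1
lgraph = layerGraph([
    imageInputLayer([28 28 1],'Name','input')
    convolution2dLayer(3,16,'Padding','same','Name','conv1')
    reluLayer('Name','relu1')
    additionLayer(2,'Name','add')          % relu1 feeds 'add/in1' automatically
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','output')]);

% Route the input around the main branch into the addition layer
lgraph = addLayers(lgraph,convolution2dLayer(1,16,'Name','skipConv'));
lgraph = connectLayers(lgraph,'input','skipConv');
lgraph = connectLayers(lgraph,'skipConv','add/in2');
```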
Design Deep Learning Networks
Create a deep network from scratch using the Deep Network Designer app. Import a pretrained model, visualize the network structure, edit the layers, and tune parameters.
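The app can be opened from the command line; in recent releases it also accepts a network to preload (shown here with SqueezeNet as an example):

```matlab
% Open Deep Network Designer, preloaded with a pretrained network
deepNetworkDesigner(squeezenet)
```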
Analyze Deep Learning Networks
Analyze your network architecture to detect and debug errors, warnings, and layer compatibility issues before training. Visualize the network topology and view details such as learnable parameters and activations.
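For example, analyzing a pretrained network opens an interactive report of the topology, learnable parameters, and activation sizes, and flags any errors or warnings:

```matlab
% Check a network for errors and view layer details before training
net = squeezenet;
analyzeNetwork(net)
```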
Access pretrained networks and use them as a starting point for transfer learning, quickly adapting learned features to a new task with fewer training images.
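A typical transfer learning workflow replaces the final layers of a pretrained network. The sketch below uses SqueezeNet, whose classification head is a 1-by-1 convolution; the layer names are those reported by analyzeNetwork for the shipped model, and the class count is illustrative (for networks such as ResNet-101 you would replace a fully connected layer instead):

```matlab
% Transfer learning: swap the classification head of a pretrained network
net = squeezenet;
lgraph = layerGraph(net);
numClasses = 5;   % number of classes in the new task (illustrative)

% Boost learning rates on the new layer so it trains faster than the rest
newConv = convolution2dLayer(1,numClasses,'Name','newConv', ...
    'WeightLearnRateFactor',10,'BiasLearnRateFactor',10);
lgraph = replaceLayer(lgraph,'conv10',newConv);
lgraph = replaceLayer(lgraph,'ClassificationLayer_predictions', ...
    classificationLayer('Name','newOutput'));
```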
View training progress at every iteration with plots of metrics such as accuracy and loss. Plot validation metrics alongside training metrics to check visually whether the network is overfitting.
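Both behaviors are configured through trainingOptions; the validation arrays `XVal` and `YVal` below are assumed to exist:

```matlab
% Monitor training and validation metrics during training
options = trainingOptions('sgdm', ...
    'ValidationData',{XVal,YVal}, ...   % held-out validation set (assumed)
    'ValidationFrequency',30, ...       % validate every 30 iterations
    'Plots','training-progress');       % live plot of loss and accuracy
```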
Extract activations corresponding to a layer, visualize the learned features, and train a machine learning classifier using the activations. Use the Grad-CAM approach to understand why a deep learning network makes its classification decisions.
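Feature extraction with activations can feed a classical classifier; the layer name and the training data below are illustrative, and fitcecoc requires Statistics and Machine Learning Toolbox:

```matlab
% Extract deep features and train a classical classifier on them
net = squeezenet;
featureLayer = 'pool10';    % a late layer; verify names with analyzeNetwork
features = activations(net,augimdsTrain,featureLayer,'OutputAs','rows');
mdl = fitcecoc(features,YTrain);   % multiclass SVM on the extracted features
```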
Import and export ONNX models within MATLAB® for interoperability with other deep learning frameworks. ONNX enables models to be trained in one framework and transferred to another for inference. Use GPU Coder™ to generate optimized CUDA code and use MATLAB Coder™ to generate C++ code for the imported model.
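Export and import are one call each; both require the ONNX converter support package, and the file name here is illustrative:

```matlab
% Export a trained network to ONNX, then import an ONNX model for inference
exportONNXNetwork(net,'myNet.onnx')
importedNet = importONNXNetwork('myNet.onnx', ...
    'OutputLayerType','classification');
```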
Import models from Caffe Model Zoo into MATLAB for inference and transfer learning.
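A Caffe import takes the network definition and weights files (names below are illustrative; the Caffe importer support package is required):

```matlab
% Import a Caffe model for inference or transfer learning
net = importCaffeNetwork('deploy.prototxt','weights.caffemodel');
```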
Speed up deep learning training and inference with high-performance NVIDIA GPUs. Perform training on a single workstation GPU or scale to multiple GPUs with DGX systems in data centers or on the cloud. You can use MATLAB with Parallel Computing Toolbox and most CUDA®-enabled NVIDIA GPUs that have compute capability 3.0 or higher.
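Switching from a single GPU to all local GPUs is a one-option change in trainingOptions:

```matlab
% Train on all available GPUs on the local machine
% (use 'gpu' for a single GPU, 'multi-gpu' for all local GPUs)
options = trainingOptions('sgdm','ExecutionEnvironment','multi-gpu');
```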
Reduce deep learning training times with cloud instances. Use high-performance GPU instances for the best results.
Run deep learning training across multiple processors on multiple servers on a network using MATLAB Parallel Server.
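With a cluster profile configured through MATLAB Parallel Server, the same training call scales out by pointing the execution environment at the current parallel pool:

```matlab
% Scale training to a cluster or cloud pool
% (the pool can run on remote workers via MATLAB Parallel Server)
options = trainingOptions('sgdm','ExecutionEnvironment','parallel');
```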
Train supervised shallow neural networks to model and control dynamic systems, classify noisy data, and predict future events.
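A shallow function-fitting network takes only a few lines; the data here is a synthetic sine curve for illustration:

```matlab
% Fit a shallow network to a function (Levenberg-Marquardt training by default)
x = -1:0.05:1;
t = sin(2*pi*x);          % synthetic target data (illustrative)
net = fitnet(10);         % one hidden layer with 10 neurons
net = train(net,x,t);
y = net(x);               % evaluate the trained network
```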
Find relationships within data and automatically define classification schemes by letting the shallow network continually adjust itself to new inputs. Use self-organizing, unsupervised networks as well as competitive layers and self-organizing maps.
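For example, a self-organizing map clusters unlabeled data without predefined classes; the random input here is illustrative:

```matlab
% Cluster unlabeled data with an 8-by-8 self-organizing map
x = rand(3,500);            % 500 three-dimensional samples (illustrative)
net = selforgmap([8 8]);    % 64 neurons arranged on an 8x8 grid
net = train(net,x);
classes = vec2ind(net(x));  % winning neuron index for each sample
```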
Perform unsupervised feature transformation by extracting low-dimensional features from your data set using autoencoders. You can also use stacked autoencoders for supervised learning by training and stacking multiple encoders.
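The stacked-autoencoder workflow trains encoders layer by layer, then stacks them with a softmax layer for supervised fine-tuning; `X` and the target labels `T` below are assumed to exist:

```matlab
% Stacked autoencoders: unsupervised feature learning, then supervised stacking
autoenc1 = trainAutoencoder(X,50);            % first encoder, 50 hidden units
feat1    = encode(autoenc1,X);
autoenc2 = trainAutoencoder(feat1,20);        % second encoder on learned features
feat2    = encode(autoenc2,feat1);
softnet  = trainSoftmaxLayer(feat2,T);        % T: target labels (assumed)
deepnet  = stack(autoenc1,autoenc2,softnet);  % combine into one deep network
```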
Train advanced network architectures using custom training loops, automatic differentiation, shared weights, and custom loss functions
Deep Learning Networks
Build generative adversarial networks (GANs), Siamese networks, variational autoencoders, and attention networks
Improve training performance using multiple data normalization options
Map strongly activating features of input data using occlusion sensitivity
Multiple-Input, Multiple-Output Networks
Train networks with multiple inputs and multiple outputs
Long Short-Term Memory (LSTM) Networks
Compute intermediate layer activations
Export networks that combine CNN and LSTM layers and networks that include 3D CNN layers to ONNX format
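The custom-training-loop features listed above center on dlnetwork, dlarray, and automatic differentiation. As a sketch, one iteration of such a loop looks like this (net is assumed to be a dlnetwork; X and T are dlarray mini-batches; avgG and avgSqG start as empty arrays):

```matlab
% One iteration of a custom training loop with automatic differentiation
[loss,gradients] = dlfeval(@modelLoss,net,X,T);
[net,avgG,avgSqG] = adamupdate(net,gradients,avgG,avgSqG,iteration);

% Model loss function: evaluated inside dlfeval so that dlgradient
% can trace the computation for automatic differentiation
function [loss,gradients] = modelLoss(net,X,T)
    Y = forward(net,X);
    loss = crossentropy(Y,T);
    gradients = dlgradient(loss,net.Learnables);
end
```

The same pattern, with custom loss functions and shared weights between branches, is what enables GANs and Siamese networks.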