Networks and Layers Supported for C++ Code Generation

MATLAB® Coder™ supports code generation for series and directed acyclic graph (DAG) convolutional neural networks (CNNs or ConvNets). You can generate code for any trained convolutional neural network whose layers are supported for code generation. See Supported Layers.
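
For example, a typical workflow defines an entry-point function that loads the trained network and calls predict, and then generates code for that function. The following is a minimal sketch; 'mynet.mat' is a hypothetical MAT-file containing a trained network, and the 224-by-224-by-3 input size is an assumption that must match the input layer of your network.

function out = mynet_predict(in) %#codegen
% Load the network once into a persistent variable so the
% generated code does not reload it on every call.
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('mynet.mat'); % hypothetical file name
end
out = predict(net,in);
end

% Generate C++ library code that uses the Intel MKL-DNN library:
cfg = coder.config('lib');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('mkldnn');
codegen -config cfg mynet_predict -args {ones(224,224,3,'single')}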

Supported Pretrained Networks

The following pretrained networks, available in Deep Learning Toolbox™, are supported for code generation.

| Network Name | Description | ARM® Compute Library | Intel® MKL-DNN |
| --- | --- | --- | --- |
| AlexNet | AlexNet convolutional neural network. For the pretrained AlexNet model, see alexnet. | Yes | Yes |
| GoogLeNet | GoogLeNet convolutional neural network. For the pretrained GoogLeNet model, see googlenet. | Yes | Yes |
| Inception-v3 | Inception-v3 convolutional neural network. For the pretrained Inception-v3 model, see inceptionv3. | Yes | Yes |
| MobileNet-v2 | MobileNet-v2 convolutional neural network. For the pretrained MobileNet-v2 model, see mobilenetv2. | Yes | Yes |
| ResNet | ResNet-50 and ResNet-101 convolutional neural networks. For the pretrained ResNet models, see resnet50 and resnet101. | Yes | Yes |
| SegNet | Multi-class pixelwise segmentation network. For more information, see segnetLayers. | No | Yes |
| SqueezeNet | Small, deep neural network. For the pretrained SqueezeNet model, see squeezenet. | Yes | Yes |
| VGG-16 | VGG-16 convolutional neural network. For the pretrained VGG-16 model, see vgg16. | Yes | Yes |
| VGG-19 | VGG-19 convolutional neural network. For the pretrained VGG-19 model, see vgg19. | Yes | Yes |
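
For the networks in this table that have a Deep Learning Toolbox model function, you can load the pretrained model by name inside an entry-point function intended for code generation. A minimal sketch, using GoogLeNet as the example:

net = coder.loadDeepLearningNetwork('googlenet');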

Supported Layers

The following layers are supported for code generation by MATLAB Coder for the target deep learning libraries specified in the table.

Once you install the support package MATLAB Coder Interface for Deep Learning Libraries, you can use coder.getDeepLearningLayers to see a list of the layers supported for a specific deep learning library. For example:

coder.getDeepLearningLayers('mkldnn')
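
Similarly, to list the layers supported for the ARM Compute Library, pass 'arm-compute' as the target library name:

coder.getDeepLearningLayers('arm-compute')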

| Layer Name | Description | ARM Compute Library | Intel MKL-DNN |
| --- | --- | --- | --- |
| additionLayer | Addition layer | Yes | Yes |
| averagePooling2dLayer | Average pooling layer | Yes | Yes |
| batchNormalizationLayer | Batch normalization layer | Yes | Yes |
| classificationLayer | Classification output layer | Yes | Yes |
| clippedReluLayer | Clipped Rectified Linear Unit (ReLU) layer | Yes | Yes |
| convolution2dLayer | 2-D convolution layer | Yes | Yes |
| crop2dLayer | Layer that applies 2-D cropping to the input | No | Yes |
| CrossChannelNormalizationLayer | Channel-wise local response normalization layer | Yes | Yes |
| Custom output layers | All output layers, including custom classification or regression output layers created by using nnet.layer.ClassificationLayer or nnet.layer.RegressionLayer. For an example that shows how to define a custom classification output layer and specify a loss function, see Define Custom Classification Output Layer (Deep Learning Toolbox). For an example that shows how to define a custom regression output layer and specify a loss function, see Define Custom Regression Output Layer (Deep Learning Toolbox). | Yes | Yes |
| depthConcatenationLayer | Depth concatenation layer | Yes | Yes. To generate code that uses MKL-DNN v0.14, either all of the channel dimensions of the layer inputs must be multiples of 8, or all of them must be non-multiples of 8. |
| dropoutLayer | Dropout layer | Yes | Yes |
| fullyConnectedLayer | Fully connected layer | Yes | Yes |
| globalAveragePooling2dLayer | Global average pooling layer for spatial data | Yes | Yes |
| groupedConvolution2dLayer | 2-D grouped convolutional layer | Yes. If you specify an integer for numGroups, the value must be less than or equal to 2. | Yes |
| imageInputLayer | Image input layer. Code generation does not support 'Normalization' specified using a function handle. | Yes | Yes |
| leakyReluLayer | Leaky Rectified Linear Unit (ReLU) layer | Yes | Yes |
| maxPooling2dLayer | Max pooling layer | Yes | Yes |
| maxUnpooling2dLayer | Max unpooling layer | No | Yes |
| pixelClassificationLayer | Pixel classification layer for semantic segmentation | Yes | Yes |
| regressionLayer | Regression output layer | Yes | Yes |
| reluLayer | Rectified Linear Unit (ReLU) layer | Yes | Yes |
| softmaxLayer | Softmax layer | Yes | Yes |
| nnet.keras.layer.FlattenCStyleLayer | Flattens activations into 1-D, assuming C-style (row-major) order | Yes | Yes |
| nnet.keras.layer.GlobalAveragePooling2dLayer | Global average pooling layer for spatial data | Yes | Yes |
| nnet.keras.layer.SigmoidLayer | Sigmoid activation layer | Yes | Yes |
| nnet.keras.layer.TanhLayer | Hyperbolic tangent activation layer | Yes | Yes |
| nnet.keras.layer.ZeroPadding2dLayer | Zero padding layer for 2-D input | Yes | Yes |
| nnet.onnx.layer.ElementwiseAffineLayer | Layer that performs element-wise scaling of the input followed by an addition | Yes | Yes |
| nnet.onnx.layer.FlattenLayer | Flatten layer for ONNX™ network | Yes | Yes |
| tanhLayer | Hyperbolic tangent (tanh) layer | Yes | Yes |
| transposedConv2dLayer | Transposed 2-D convolution layer. Code generation does not support asymmetric cropping of the input. For example, specifying a vector [t b l r] for the 'Cropping' parameter to crop the top, bottom, left, and right of the input is not supported. | Yes | Yes |
| YOLOv2OutputLayer | Output layer for YOLO v2 object detection network | Yes | Yes |
| YOLOv2ReorgLayer | Reorganization layer for YOLO v2 object detection network | Yes | Yes |
| YOLOv2TransformLayer | Transform layer for YOLO v2 object detection network | Yes | Yes |

Limitation with MKL-DNN and Large Negative Input Values to Softmax Layer

The generated code for a softmax layer might be incorrect (contain NaN values) when all of the following conditions are true:

  • You generate code using MKL-DNN.

  • The network is a fully convolutional network (FCN) or a custom network (a network that you create and train).

  • The input to the softmax layer has a large, negative value. If the softmax layer is not preceded by a ReLU layer, its input can be a large, negative value.

Limitation for convolution2dLayer for MKL-DNN and CPUs That Use AVX-512

For MKL-DNN v0.14 and CPUs that use AVX-512, the generated code for convolution operations can produce incorrect results if the network has convolution layers with these sizes and padding:

  • The input tensors to a convolution layer have height and width that are multiples of 224.

  • The padding right parameter of the convolution layer has a non-zero value.

  • The number of output filters of the convolution layer is a multiple of 16.

To verify that the output of the generated code does not contain such incorrect results, compare it with the output obtained from running predict on the network in MATLAB.
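
A minimal sketch of such a comparison, reusing the hypothetical mynet_predict entry-point function from earlier and a MEX function mynet_predict_mex generated from it:

in = single(rand(224,224,3));       % representative test input (size is an assumption)
outMATLAB = mynet_predict(in);      % reference output from predict in MATLAB
outCodegen = mynet_predict_mex(in); % output from the generated code
max(abs(outMATLAB(:) - outCodegen(:))) % a large difference indicates the issue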

Supported Classes

| Class | Description | ARM Compute Library | Intel MKL-DNN |
| --- | --- | --- | --- |
| yolov2ObjectDetector | Only the detect method of yolov2ObjectDetector is supported for code generation. The roi argument to detect must be a code generation constant (coder.const()) and a 1-by-4 vector. Only the Threshold, SelectStrongest, MinSize, and MaxSize name-value pairs of detect are supported. The labels output of detect is returned as a cell array of character vectors, for example, {'car','bus'}. | Yes | Yes |
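
A minimal entry-point sketch that follows these restrictions; 'myYOLOv2.mat' is a hypothetical MAT-file containing a trained yolov2ObjectDetector:

function [bboxes,scores,labels] = yolov2_detect(img) %#codegen
persistent detector;
if isempty(detector)
    detector = coder.loadDeepLearningNetwork('myYOLOv2.mat'); % hypothetical file name
end
% The roi argument must be a compile-time constant 1-by-4 vector.
roi = coder.const([1 1 224 224]);
[bboxes,scores,labels] = detect(detector,img,roi,'Threshold',0.5);
end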
