How can I generate code from a Deep Learning network with Simulink and deploy it to hardware?

How can I generate code in Simulink from a pretrained Deep Learning neural network and deploy it to hardware, such as Speedgoat, C2000, STM32, Raspberry Pi, Arduino, dSPACE prototyping platforms?

Accepted Answer

MathWorks Support Team on 15 Sep 2025
Edited: MathWorks Support Team on 15 Sep 2025
Starting in R2021a, Deep Learning Toolbox can generate generic C/C++ code for deep learning networks without using third-party libraries. This enables cross-platform deployment to embedded targets and prototyping platforms such as Speedgoat (Simulink Real-Time), TI C2000, STM32, Raspberry Pi, and Arduino.
The following sections describe the three steps for deploying a deep learning network from Simulink to hardware (for example, an STM32 target with Embedded Coder):

Step 1: Create or import a pretrained Deep Learning network

To import a pretrained model from a third-party framework into a MATLAB dlnetwork object, use one of the following functions:
importNetworkFromTensorFlow
importNetworkFromPyTorch
importNetworkFromONNX
📖 Refer to the “More About” section of each function's documentation page for details on supported operators. If you encounter the warning "Returning an uninitialized dlnetwork" on import, each page provides guidance on how to initialize the network.
To import, modify, create, or train your own network, we also recommend using Deep Network Designer, which offers an interactive interface with feedback and guidance. Example: Import PyTorch Model Using Deep Network Designer.
For a list of networks and layers supported for code generation, see Networks and Layers Supported for Code Generation.
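As a minimal sketch of this import step (assuming an ONNX file named model.onnx and a 224x224x3 image input; adjust both for your network):

% Import the ONNX model into a dlnetwork object.
net = importNetworkFromONNX('model.onnx');

% If the import returns an uninitialized network, initialize it with a
% sample input so the layer sizes can be inferred.
if ~net.Initialized
    X = dlarray(zeros(224,224,3,1,'single'),'SSCB'); % spatial, spatial, channel, batch
    net = initialize(net,X);
end

% Optional: inspect the layers and check for potential issues.
analyzeNetwork(net)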

Considerations for custom layers:

When you import a pretrained deep learning network from a third-party framework such as TensorFlow, PyTorch, or ONNX, MATLAB automatically generates custom layers for any layers it cannot map to built-in layers. Code generation support for these automatically generated custom layers is still limited.
🚀 New in R2025a: You can now generate code for custom layers in ONNX networks imported using importNetworkFromONNX. This significantly expands deployment possibilities for imported networks. For other frameworks, code generation for custom layers is not yet supported. As a workaround, export TensorFlow models to ONNX (for example, with tf2onnx) or PyTorch models with torch.onnx.export, then import the resulting ONNX file to enable code generation for custom layers.
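A hedged sketch of that workaround for a TensorFlow model (placeholder paths; requires the tf2onnx Python package in the Python environment on the system path):

% Convert a TensorFlow SavedModel to ONNX from the system shell.
system('python -m tf2onnx.convert --saved-model my_tf_model --output my_model.onnx');

% Then import the converted model into MATLAB.
net = importNetworkFromONNX('my_model.onnx');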

Step 2: Incorporate the Deep Learning network into your main model

Save the initialized dlnetwork object to a MAT file. Then, add a Predict block (or another block from the "Deep Neural Networks" library) to your Simulink model and point it to the MAT file. See Deep Learning in Simulink by Using Deep Neural Networks Library.
Note that blocks such as the ONNX Model Predict block from the "Python Neural Networks" library are designed for simulation only and do not support code generation.
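A minimal sketch of this step, with placeholder names:

% Save the initialized dlnetwork so the Predict block can load it.
save('trainedNet.mat','net');

In the Predict block dialog, choose the option to load the network from a MAT file and browse to trainedNet.mat; the block then runs inference on its input signal at each simulation step.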

Step 3: Generate code for the model to deploy to hardware

Once the Predict block (or another "Deep Neural Networks" block) is integrated into your Simulink model, use Simulink Coder or Embedded Coder to generate deployable code for the entire model. See Generate C Code from Simulink Models, or the documentation for your target-specific toolbox or hardware support package.
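As a hedged sketch of the code generation step (myModel is a placeholder model name, and the parameter names reflect the Code Generation settings for library-free deep learning code; verify them for your release and target):

mdl = 'myModel';                               % placeholder model name
set_param(mdl,'SystemTargetFile','ert.tlc');   % Embedded Coder target
set_param(mdl,'DLTargetLibrary','None');       % generic C/C++, no third-party DL library
slbuild(mdl);                                  % generate and build code for the model

The generated code contains no vendor-specific deep learning library calls, so it can be cross-compiled by the toolchain of your hardware support package.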
