How to deploy deep learning networks for Hardware-in-the-Loop (HIL) simulation with, for example, Speedgoat or dSPACE hardware systems from MATLAB R2018b onward?

I want to generate plain code from my Keras deep learning neural network to avoid dependencies on third-party libraries. I do this in MATLAB R2021a.
After that, I would like to build an S-function for use inside a model for HIL deployment, which is set up in MATLAB R2017b. What is the recommended workflow?

Accepted Answer

MathWorks Support Team on 31 May 2021
Edited: MathWorks Support Team on 9 Jun 2021
The workflow for deploying deep learning networks in HIL simulation consists of three steps:
  1. First, depending on your target hardware, you generate CUDA code that uses third-party libraries (such as cuDNN) for your deep learning network. Alternatively, from R2021a onward, you can generate generic C/C++ code that does not use third-party libraries. Generic code generation is useful if you normally rely on third-party deep learning libraries but want to avoid that dependency in the current workflow.
  2. After you generate code for your network, the second step is to incorporate the generated code into your main model, for example, by using an S-function.
  3. Finally, you generate code for the entire Simulink model and deploy it in HIL.
The following sections describe these steps in greater detail.
(1) Generate Code for Deep Learning Network:
Generate C++ CUDA code that uses third-party libraries:
From MATLAB R2018b onward, you can generate C++ CUDA code for your pretrained deep learning network that uses third-party libraries. To do this, select the target library in your code generation configuration by using the coder.DeepLearningConfig function.
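For example, a configuration for cuDNN-based code generation could look like this (a minimal sketch, not part of the original answer; the entry-point function myNet_predict, its input size, and the network MAT-file it loads are hypothetical):
% Sketch: generate C++ CUDA code that calls cuDNN (requires GPU Coder)
cfg = coder.gpuConfig('dll');                               % build a dynamic library
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn'); % target the cuDNN library
% myNet_predict is a hypothetical entry-point function that loads the network
% with coder.loadDeepLearningNetwork and calls predict on its input
codegen -config cfg myNet_predict -args {ones(224,224,3,'single')} -report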
Generate C++ CUDA code directly in Simulink:
From MATLAB R2020b onward, you can generate code for your deep learning model directly in Simulink by using MATLAB Function blocks or the Deep Neural Networks block library. GPU Coder generates optimized code for deep learning networks implemented by using these blocks in Simulink models.
If you use MATLAB Function blocks, you write MATLAB code that loads pre-trained deep learning networks by using the coder.loadDeepLearningNetwork function.
Alternatively, use the Predict and Image Classifier blocks in the Deep Neural Networks library.
For both kinds of blocks, you can generate code by using either Simulink Coder or Embedded Coder.
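As an illustration, the body of such a MATLAB Function block could look like this (a sketch; the file myNet.mat holding the trained network is a placeholder):
function out = predictNet(in)
% Load the pretrained network once and reuse it across simulation steps
persistent net
if isempty(net)
    net = coder.loadDeepLearningNetwork('myNet.mat'); % hypothetical MAT-file containing the trained network
end
out = predict(net, in);
end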
Generate generic C/C++ code that does not use third-party libraries:
From R2021a onward, you can generate plain C/C++ code that does not use third-party libraries for your deep learning network. To do this, set the DeepLearningConfig code configuration property as:
cfg.DeepLearningConfig = coder.DeepLearningConfig('TargetLibrary', 'none');
However, if you want to insert the generated code of your deep learning network, whether CUDA or plain C/C++, into a Simulink model in an older MATLAB release, for example MATLAB R2017b, it is recommended to generate a DLL from the entire deep learning network instead of a MEX file:
% Generate a standalone C dynamic library as embedded code using Embedded Coder
cfg = coder.config('dll');
cfg.TargetLang = 'C';
cfg.DeepLearningConfig = coder.DeepLearningConfig('none'); % 'none' selects plain code generation without third-party libraries
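The corresponding codegen call could then look as follows (a sketch; it assumes the MATLAB entry-point function is named Predict_NN_MAT and takes seven double scalar inputs, matching the header signature shown further below):
% Generate the C dynamic library for the (assumed) entry-point function
codegen -config cfg Predict_NN_MAT -args {0, 0, 0, 0, 0, 0, 0} -report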
(2) Incorporate Generated Code into Simulink Model:
Incorporate C++ CUDA code that uses third-party libraries: As mentioned above, you can use deep learning networks directly in Simulink starting in R2020b. For algorithms other than deep learning networks, Simulink blocks do not currently support GPU code. However, you can bring CUDA code into Simulink by generating a dynamically linked library that still benefits from GPU performance and integrating that library into the Simulink model as an S-function by using the Legacy Code Tool. See Integrating Deep Learning with GPU Coder into Simulink.
Incorporate generic C/C++ code that does not use third-party libraries: If you generate plain C/C++ code, incorporate it into your main Simulink model by using any one of these utilities (a setup sketch for the C Caller route follows the list):
  • C Caller block — Integrates C code into Simulink by importing your C functions.
  • C Function block — Integrates and calls external C code from a Simulink model.
  • S-Function — Uses a special calling syntax, known as the S-function API, to communicate with the Simulink engine. S-functions let you create continuous, discrete, and hybrid systems.
  • S-Function Builder — Integrates C/C++ code by building an S-function from your code with the specifications you supply. The S-function builder also serves as a wrapper for the S-functions generated in your models.
  • Legacy Code Tool — Integrates existing C/C++ functions, such as lookup tables and general functions and interfaces, into Simulink models.
For more information on these approaches, see Implement Algorithms Using C/C++ Code.
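As a sketch of the C Caller route mentioned in the list above, the generated sources can be made visible to the model through the Simulation Target custom code settings (the model name and the source location are hypothetical):
model = 'myModel';                                          % hypothetical model name
srcDir = fullfile(pwd, 'codegen', 'dll', 'Predict_NN_MAT'); % assumed location of the generated code
set_param(model, 'SimCustomHeaderCode', '#include "Predict_NN_MAT.h"');
set_param(model, 'SimUserIncludeDirs', srcDir);
set_param(model, 'SimUserSources', 'Predict_NN_MAT.c');     % list the remaining generated .c files as needed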
You can also incorporate the code generated for your deep neural network, whether CUDA or plain C/C++, into a Simulink model in an older MATLAB release, for example MATLAB R2017b.
For that, it is recommended to use the Legacy Code Tool to integrate the generated C function from the DLL (from the example above). The following steps describe how to create the corresponding S-function; the workflow is documented in Integrate C Functions Using Legacy Code Tool.
Here, it is important to define “OutputFcnSpec” correctly in order to obtain a working S-function. This property requires the signature of the function declared in the header file that is included with the generated DLL.
The signature inside the header file is as follows:
float Predict_NN_MAT(double dki, double lki, double tki, double nab, double lab, double tab, double done);
In this case, the signature specifies "float" as the output type; as noted in the documentation linked above, you need to map the output data type from the C type “float” to the Simulink type "single". Furthermore, the argument names must be unique, such as u1, u2, and so on. Here is an example of how to set up the generation of the S-function:
def = legacy_code('initialize');
def.SFunctionName = 'Predict_NN_MAT_SFcn';
def.OutputFcnSpec = 'single y1 = Predict_NN_MAT(double u1, double u2, double u3, double u4, double u5, double u6, double u7)';
def.HeaderFiles = {'FullyConnectedActivation.h', 'Predict_NN_MAT.h', 'Predict_NN_MAT_data.h', 'Predict_NN_MAT_initialize.h', 'Predict_NN_MAT_internal_types.h', 'Predict_NN_MAT_terminate.h', 'Predict_NN_MAT_types.h', 'predict.h', 'rtwtypes.h'};
def.SourceFiles = {'FullyConnectedActivation.c', 'Predict_NN_MAT.c', 'Predict_NN_MAT_data.c', 'Predict_NN_MAT_initialize.c', 'Predict_NN_MAT_terminate.c', 'predict.c'};
def.IncPaths = {'...\dll\Predict_NN_MAT'};
def.SrcPaths = {'...\dll\Predict_NN_MAT'};
def.LibPaths = {'...\dll\Predict_NN_MAT'};
def.Options.useTlcWithAccel = false;
legacy_code('sfcn_cmex_generate', def);
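After generating the C MEX source, the S-function still needs to be compiled; depending on your workflow, follow-up calls such as these complete the integration (a sketch based on the standard Legacy Code Tool commands):
legacy_code('compile', def);             % compile the generated S-function source into a MEX file
legacy_code('slblock_generate', def);    % optionally create a masked block that calls the S-function
legacy_code('sfcn_tlc_generate', def);   % generate a TLC file so the S-function can be inlined during code generation
legacy_code('rtwmakecfg_generate', def); % generate rtwmakecfg.m so the build can locate sources and include paths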
(3) Generate Code for the Entire Model to Deploy in HIL:
After you integrate the generic C/C++ or CUDA code within your Simulink model, use Embedded Coder to generate code for the entire model for HIL deployment. See Code Generation by Using Embedded Coder.
For an example that shows how to generate C Code from a Simulink model, see Generate C Code from Simulink Models.
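A minimal sketch of this final build step (the model name is a placeholder, and a real HIL platform such as Speedgoat or dSPACE supplies its own system target file):
model = 'myHILModel';                            % hypothetical model name
set_param(model, 'SystemTargetFile', 'ert.tlc'); % replace with the system target file of your HIL platform
slbuild(model);                                  % generate code for the entire model, including the integrated network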
