Learn how you can generate CUDA® code from a trained deep neural network in MATLAB® and leverage the NVIDIA® TensorRT™ library for inference on NVIDIA GPUs. The video demonstrates this workflow with a pedestrian detection application.
The NVIDIA TensorRT library is a high-performance deep learning inference optimizer and runtime library. The generated code leverages the network-level and layer-level TensorRT APIs for best performance, and you see the pedestrian detection network running at around 700 fps on an NVIDIA TITAN Xp GPU.
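The workflow described above can be sketched with GPU Coder™. The snippet below is a minimal illustration, not the exact code from the video: the network file `pedNet.mat`, the entry-point name `pedDetect`, and the input size are all assumptions for the sake of the example.

```matlab
function out = pedDetect(in) %#codegen
% Hypothetical entry-point function for code generation.
% 'pedNet.mat' is an assumed file containing the trained network.
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('pedNet.mat');
end
out = predict(net, in);
end
```

Code generation targeting the TensorRT library might then look like this, assuming a 480-by-640 RGB input:

```matlab
% Configure GPU Coder to generate a library that calls TensorRT
cfg = coder.gpuConfig('lib');
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');

% Generate CUDA code for the entry-point function
codegen -config cfg pedDetect -args {ones(480,640,3,'single')} -report
```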
You can export the generated code along with the rest of the application and deploy the algorithm on embedded GPU targets such as the NVIDIA Jetson™ (Tegra®-based) or DRIVE™ PX platforms.
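For deployment to an embedded board, one possible sketch uses the GPU Coder Support Package for NVIDIA GPUs; the board hostname and credentials below are placeholders, and this assumes the support package and a `pedDetect` entry-point function are available.

```matlab
% Connect to a Jetson board (hostname, username, password are placeholders)
hwobj = jetson('jetson-board-name', 'ubuntu', 'ubuntu');

% Configure code generation for a standalone executable on the target
cfg = coder.gpuConfig('exe');
cfg.Hardware = coder.hardware('NVIDIA Jetson');
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');

% Generate, build, and deploy to the board
codegen -config cfg pedDetect -args {ones(480,640,3,'single')}
```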