Embedded Vision Using MATLAB and Simulink

Embedded vision applications such as autonomous vehicles, smartphone cameras, augmented reality, and medical devices benefit from an end-to-end design workflow that spans algorithm design through embedded deployment. MATLAB and Simulink provide that workflow.

By using MATLAB and Simulink in your development workflow, you can:

  • Design vision algorithms with a comprehensive set of reference-standard functions for image processing, computer vision, deep learning, automated driving, and more.
  • Automatically generate C/C++, CUDA, Verilog, or VHDL code that is ready for embedded deployment.
  • Test and verify the generated code using rapid prototyping, processor-in-the-loop, and hardware-in-the-loop simulations.
  • Collaborate with existing development projects using integration APIs in MATLAB and Simulink, or through code generation.
  • Generate executables that run on popular embedded hardware such as the NVIDIA Jetson or the Raspberry Pi. Hardware support packages make it easy to get started and to access hardware-specific features.
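As a small illustration of those hardware support packages, the sketch below captures a frame from a Raspberry Pi camera directly from MATLAB. It assumes the MATLAB Support Package for Raspberry Pi Hardware is installed and a Camera Board is attached; `raspi`, `cameraboard`, and `snapshot` come from that package, and the resolution choice is an illustrative assumption.

```matlab
% Connect to a Raspberry Pi over the network (uses saved board settings)
r = raspi();

% Attach to the Camera Board and capture one frame
cam = cameraboard(r, 'Resolution', '640x480');
img = snapshot(cam);

% Run a MATLAB vision algorithm on the captured frame
edges = edge(rgb2gray(img), 'sobel');
imshow(edges);
```

The same algorithm code can later be passed to the code generation tools described below, so the prototype on the desk and the deployed executable share one source.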

MATLAB Coder lets you generate C and C++ code from vision algorithms for both desktop systems and embedded hardware. With Embedded Coder, you can expand on MATLAB Coder’s capabilities to achieve hardware-specific optimizations, code traceability between your algorithm and generated code, and software-in-the-loop (SIL) and processor-in-the-loop (PIL) verification. MATLAB Coder also lets you integrate with optimized libraries such as the ARM Compute Library for ARM architectures and the Intel MKL-DNN library for Intel CPUs.
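As a minimal sketch of that workflow, the hypothetical function below wraps a Sobel edge detector (the `edge` function supports code generation), and the `codegen` command turns it into a C static library. The function name, input size, and configuration are illustrative assumptions, not a prescribed setup.

```matlab
% In detectEdges.m -- a hypothetical algorithm function marked for codegen
function bw = detectEdges(img) %#codegen
    % Sobel edge detection from Image Processing Toolbox
    bw = edge(img, 'sobel');
end
```

```matlab
% At the MATLAB command line: generate a C static library for a
% 480x640 uint8 grayscale input
cfg = coder.config('lib');
codegen detectEdges -config cfg -args {zeros(480, 640, 'uint8')}
```

With Embedded Coder installed, the same `codegen` call can instead use an Embedded Coder configuration to enable the hardware-specific optimizations and traceability mentioned above.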

HDL Coder enables you to design and generate readable, synthesizable code in VHDL and Verilog for FPGAs and ASICs. Vision HDL Toolbox provides a library of vision algorithms designed for the pixel-streaming architecture required by FPGA and ASIC implementations. You can quickly set up and start prototyping with hardware support packages for FPGA-based vision platforms such as the Xilinx Zynq and Zynq UltraScale+ platforms.
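To illustrate the pixel-streaming model, here is a rough sketch using Vision HDL Toolbox objects: `visionhdl.FrameToPixels` serializes a frame into a stream of pixels with control signals, the algorithm processes one pixel at a time (as the generated HDL would each clock cycle), and `visionhdl.PixelsToFrame` reassembles the result. The video format, test image, and loop structure are assumptions for illustration.

```matlab
% Serialize a full frame into a pixel stream plus control signals
frm2pix = visionhdl.FrameToPixels('VideoFormat', '240p');
edgeDet = visionhdl.EdgeDetector();   % pixel-streaming edge detector
pix2frm = visionhdl.PixelsToFrame('VideoFormat', '240p');

frame = imread('coins.png');          % any grayscale test image
frame = imresize(frame, [240 320]);   % match the 240p video format
[pixels, ctrl] = frm2pix(frame);

% Process one pixel per "clock cycle", as the generated HDL would
edgePix = false(numel(pixels), 1);    % preallocate pixel outputs
edgeCtrl = ctrl;                      % preallocate control outputs
for p = 1:numel(pixels)
    [edgePix(p), edgeCtrl(p)] = edgeDet(pixels(p), ctrl(p));
end

[edgeFrame, frameValid] = pix2frm(edgePix, edgeCtrl);
```

Because each object processes a single pixel plus its control signals, the same MATLAB model maps naturally onto the streaming hardware architecture that HDL Coder generates.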

GPU Coder lets you generate optimized CUDA code from MATLAB for embedded vision applications, including deep learning. The generated code calls optimized NVIDIA CUDA libraries, including cuFFT, cuBLAS, cuDNN, and TensorRT, and can be used for prototyping on GPUs such as the NVIDIA Jetson and NVIDIA DRIVE platforms.
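A rough sketch of GPU code generation is shown below, assuming a code-generation-ready MATLAB vision function named `myVisionFcn.m` (the function name, input size, and library choice are illustrative assumptions; `coder.gpuConfig` and `coder.DeepLearningConfig` are GPU Coder's configuration APIs).

```matlab
% Generate CUDA source plus a static library (requires GPU Coder and
% a CUDA-capable toolchain)
cfg = coder.gpuConfig('lib');

% Optional: target cuDNN for any deep learning layers in the algorithm
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

% myVisionFcn is a hypothetical %#codegen-tagged function
codegen myVisionFcn -config cfg -args {zeros(480, 640, 'uint8')}
```

Swapping `'cudnn'` for `'tensorrt'` targets TensorRT instead, which is one way the generated code taps the optimized NVIDIA libraries mentioned above.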

For more information on these features and capabilities, please follow the link in the description.