Deploying Deep Learning on Embedded Devices - When FPGAs Make Sense
Designing deep learning, computer vision, and signal processing applications and deploying them to FPGA, GPU, and CPU platforms such as Xilinx Zynq™, NVIDIA® Jetson, or Arm® processors is challenging because of the resource constraints inherent in embedded devices. This talk walks you through a MATLAB®-based deployment workflow that generates C/C++, CUDA®, or VHDL code.
For system designers looking to integrate deep learning into their FPGA-based applications, the talk covers the challenges and considerations of deploying to FPGA hardware and details the workflow in MATLAB. See how to explore and prototype trained networks on FPGAs using prebuilt bitstreams from MATLAB. You can then customize your network to meet your performance requirements and hardware resource budget, generate HDL, and integrate it into an FPGA-based edge inference system.
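As a rough illustration of the prototyping step described above, the sketch below shows how a trained network might be deployed to an FPGA board from MATLAB using Deep Learning HDL Toolbox™. The specific target board, interface, and bitstream name (`zcu102_single`) are assumptions for illustration; check the toolbox documentation for the options that match your hardware.

```matlab
% Sketch: prototype a pretrained network on an FPGA using a prebuilt
% bitstream (assumes Deep Learning HDL Toolbox and a supported board).

net = resnet18;  % any supported pretrained network

% Assumed target: a Xilinx board reachable over Ethernet
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');

% Assumed prebuilt bitstream for a ZCU102 board, single-precision data type
hW = dlhdl.Workflow('Network', net, ...
                    'Bitstream', 'zcu102_single', ...
                    'Target', hTarget);

hW.compile;            % compile the network for the bitstream's IP core
hW.deploy;             % program the board and load the network weights

% Run inference on the board and report per-layer performance
img = imresize(imread('peppers.png'), [224 224]);
[prediction, speed] = hW.predict(single(img), 'Profile', 'on');
```

After profiling with a prebuilt bitstream, the same workflow supports customizing the processor configuration and generating HDL for integration into a larger FPGA design.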