Deep Learning HDL Toolbox



Prototype and deploy deep learning networks on FPGAs and SoCs


Deep Learning Inference on FPGAs

Prototype and implement deep learning networks on FPGAs for edge deployment.

Programmable Deep Learning Processor

The toolbox includes a deep learning processor that features generic convolution and fully-connected layers controlled by scheduling logic. This deep learning processor performs FPGA-based inferencing of networks developed using Deep Learning Toolbox™. High-bandwidth memory interfaces speed memory transfers of layer and weight data.

Deep learning processor architecture.

Compilation and Deployment

Compile your deep learning network into a set of instructions to be run by the deep learning processor. Deploy to the FPGA and run prediction while capturing actual on-device performance metrics.

Compiling and deploying a YOLO v2 network.
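As a minimal sketch of this flow, assuming a trained network `net` is already in the workspace and a Xilinx ZCU102 board is reachable over Ethernet:

```matlab
% Connect to the target board over Ethernet (JTAG is also supported).
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');

% Associate the trained network with a prebuilt single-precision bitstream.
hW = dlhdl.Workflow('Network', net, ...
                    'Bitstream', 'zcu102_single', ...
                    'Target', hTarget);

% Translate the network layers into deep learning processor instructions.
dn = hW.compile;

% Program the FPGA and download the instructions and weights.
hW.deploy;
```

Here `zcu102_single` is one of the prebuilt bitstreams that ship with the toolbox; the board and interface choices are illustrative.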

Get Started with Prebuilt Bitstreams

Prototype your network without FPGA programming using available bitstreams for popular FPGA development kits.

FPGA-Based Inferencing in MATLAB

Run deep learning inferencing on FPGAs from MATLAB.

Creating a Network for Deployment

Begin by using Deep Learning Toolbox to design, train, and analyze your deep learning network for tasks such as object detection or classification. You can also start by importing a trained network or layers from other frameworks.
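For example, a network trained in another framework can be imported before deployment; the file name here is hypothetical, and on earlier releases `importONNXNetwork` serves the same purpose:

```matlab
% Import a network trained in another framework (file name is illustrative).
net = importNetworkFromONNX('myModel.onnx');

% Inspect the layers and check for issues before targeting the FPGA.
analyzeNetwork(net)
```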

Deploying Your Network to the FPGA

Once you have a trained network, use the deploy command to program the FPGA with the deep learning processor and its Ethernet or JTAG interface. Then use the compile command to generate a set of instructions for your trained network; you can compile a different network without reprogramming the FPGA.

Use MATLAB to configure the board and interface, compile the network, and deploy to the FPGA.

Running FPGA-Based Inferencing as Part of Your MATLAB Application

Run your entire application in MATLAB®, including your test bench, preprocessing and postprocessing algorithms, and the FPGA-based deep learning inferencing. A single MATLAB command, predict, performs the inferencing on the FPGA and returns results to the MATLAB workspace.

Run MATLAB applications that perform deep learning inferencing on the FPGA.
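A sketch of such an application, assuming a deployed workflow object `hW`, a 224-by-224 network input size, and a `classNames` array (all illustrative):

```matlab
% Preprocess in MATLAB: load a test image and resize it to the network input size.
img = imresize(imread('peppers.png'), [224 224]);

% Run inference on the FPGA; scores return to the MATLAB workspace.
scores = hW.predict(single(img));

% Postprocess in MATLAB: pick the top-scoring class.
[~, idx] = max(scores);
label = classNames(idx)
```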

Network Customization

Tune your deep learning network to meet application-specific requirements on your target FPGA or SoC device.

Profile FPGA Inferencing

Measure layer-level latency as you run predictions on the FPGA to find performance bottlenecks.

Profile deep learning network inference on an FPGA from MATLAB.
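Assuming the same deployed workflow object `hW` and preprocessed input `img` as above, enabling profiling is one name-value argument on predict:

```matlab
% 'Profile','on' prints per-layer latency and returns timing metrics
% alongside the prediction, so bottleneck layers stand out.
[scores, speed] = hW.predict(single(img), 'Profile', 'on');
```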

Tune the Network Design

Using the profile metrics, tune your network configuration with Deep Learning Toolbox. For example, use Deep Network Designer to add layers, remove layers, or create new connections.

Deep Learning Quantization

Reduce resource utilization by quantizing your deep learning network to a fixed-point representation. Analyze the tradeoff between accuracy and resource utilization using the Deep Learning Toolbox Model Quantization Library support package.
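A sketch of the quantization flow, assuming a trained network `net`, a calibration datastore `calDS`, and a target object `hTarget` (all illustrative):

```matlab
% Create a quantization object targeting FPGA execution.
dlq = dlquantizer(net, 'ExecutionEnvironment', 'FPGA');

% Exercise the network with calibration data to collect dynamic ranges.
calibrate(dlq, calDS);

% Deploy the quantized network with an int8 bitstream.
hW = dlhdl.Workflow('Network', dlq, ...
                    'Bitstream', 'zcu102_int8', ...
                    'Target', hTarget);
```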

Deploying Custom RTL Implementations

Deploy custom RTL implementations of the deep learning processor to any FPGA, ASIC, or SoC device with HDL Coder.

Custom Deep Learning Processor Configuration

Specify hardware architecture options for implementing the deep learning processor, such as the number of parallel threads or maximum layer size.
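For instance, starting from the default processor configuration, you can adjust these options; the specific thread count and clock frequency below are illustrative:

```matlab
% Start from the default deep learning processor configuration.
hPC = dlhdl.ProcessorConfig;

% Trade FPGA resources for throughput: parallel threads in the conv module.
hPC.setModuleProperty('conv', 'ConvThreadNumber', 16);

% Target clock frequency, in MHz, for the generated core.
hPC.TargetFrequency = 220;

% Review the resulting architecture options.
hPC
```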

Generate Synthesizable RTL

Use HDL Coder to generate synthesizable RTL from the deep learning processor for use in a variety of implementation workflows and devices. Reuse the same deep learning processor for prototype and production deployment.

Generate synthesizable RTL from the deep learning processor.
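Given a processor configuration `hPC` like the one above, RTL generation is a single call (HDL Coder is required):

```matlab
% Generate synthesizable RTL and an IP core for the configured
% deep learning processor.
dlhdl.buildProcessor(hPC);
```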

Generate IP Cores for Integration

When HDL Coder generates RTL from the deep learning processor, it also generates an IP core with standard AXI interfaces for integration into your SoC reference design.

Target platform interface table showing the mapping between I/O and AXI interfaces.