Ram Cherukuri, MathWorks
GPU Coder™ generates optimized CUDA® code from MATLAB® code for deep learning, embedded vision, and autonomous systems. The generated code calls optimized NVIDIA CUDA libraries and can be integrated into your project as source code, static libraries, or dynamic libraries. It can also be used for prototyping on GPUs such as the NVIDIA Tesla® and NVIDIA Tegra®.
This video walks you through the steps of the CUDA code generation process using a ray tracing example. It highlights how GPU Coder extracts data parallelism to create kernels on the GPU, and the coding patterns that allow you to maximize this parallelism.
GPU Coder also handles the allocation of threads within each kernel and minimizes data transfer between the CPU and the GPU to offer further speedup. The example shows how this can deliver significant speedup in application areas such as image processing and computer vision, signal processing, and deep learning.
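The workflow described above can be sketched in a few lines of MATLAB. This is a minimal illustration, not taken from the video itself: the entry-point function name (`raytrace_scene`) and its input size are hypothetical placeholders, while `coder.gpuConfig` and `codegen` are the GPU Coder configuration object and code generation command.

```matlab
% Create a GPU Coder configuration object targeting a CUDA MEX
% function, which is convenient for prototyping on the desktop GPU.
cfg = coder.gpuConfig('mex');

% Generate CUDA code for a hypothetical entry-point function
% raytrace_scene.m, specifying the type and size of its input
% (here, a 480x640 single-precision image) via -args.
codegen -config cfg raytrace_scene -args {zeros(480, 640, 'single')}
```

For deployment rather than prototyping, the same configuration object can instead be created with `coder.gpuConfig('lib')` or `coder.gpuConfig('dll')` to produce the static or dynamic libraries mentioned earlier.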
Last but not least, GPU Coder enables you to deploy your application onto an embedded platform such as the NVIDIA® Jetson™ TX1 board.