This webinar is jointly presented by MathWorks and NVIDIA.
Researchers and engineers around the globe use MATLAB to analyze data, create algorithms, and train models.
In this webinar, learn how MATLAB combined with NVIDIA solutions can accelerate and scale your work on GPUs.
First, Axel (NVIDIA) presents the latest GPU options and features, including Multi-Instance GPU (MIG), third-generation Tensor Cores, mixed precision, and NVLink™. The focus is on the NVIDIA A100 Tensor Core GPU and how to leverage it in the cloud.
Building on that, Christoph (MathWorks) describes how you can take advantage of NVIDIA GPUs from within MATLAB without rewriting code for computationally intensive applications, for example in signal and image processing as well as deep learning. A case study demonstrates how GPUs enable computations to scale, using the example of a parameter search for deep learning training that leverages both local and public cloud resources and compares their performance.
Turning to embedded applications, Christoph shows how the power of NVIDIA embedded GPUs can be harnessed from MATLAB and Simulink via automated C++ and CUDA code generation, illustrated with several example applications.
Robin (NVIDIA) then discusses NVIDIA's embedded GPU solutions in more depth, concluding the webinar with an update on the NVIDIA® Jetson™ platform and a look at its hardware and software specifics, including the NVIDIA JetPack SDK.