Join MathWorks and NVIDIA to learn about training deep learning models on GPUs and GPU clouds for speech and audio applications.
Using deep learning for speech and audio poses significant computational challenges when moving from research models to real-world designs. Operating across a wide range of conditions requires large training datasets, and deploying designs on low-power embedded devices requires exploring network parameters to find the right trade-offs between prediction performance and computational complexity.
In this webinar, we’ll show how to use MATLAB and NVIDIA GPUs to build deep networks, accelerate data-intensive problems, and train multiple network configurations in parallel.
During the session you will learn about:
- Designing and importing deep networks for speech and audio applications
- Using data augmentation to synthesize additional application-specific training data
- Extracting the most commonly used features from speech and audio signals
- Training deep learning models on NVIDIA GPUs and NVIDIA GPU Cloud (NGC)
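As a small illustration of the data-augmentation idea listed above, here is a hedged, standalone NumPy sketch (the webinar itself uses MATLAB; the function name and parameter choices below are illustrative, not from the session): a clean waveform is varied with a random time shift and additive noise at a target SNR to synthesize extra training examples.

```python
import numpy as np

def augment_waveform(x, rng, noise_snr_db=20.0, max_shift=1600):
    """Illustrative augmentation sketch (not the webinar's MATLAB code):
    random circular time shift plus white noise at a target SNR in dB."""
    # Random circular shift simulates varying event onset times.
    shift = int(rng.integers(-max_shift, max_shift + 1))
    y = np.roll(x, shift)
    # Scale white noise so the result has approximately the requested SNR.
    signal_power = np.mean(y ** 2)
    noise_power = signal_power / (10 ** (noise_snr_db / 10))
    return y + rng.normal(0.0, np.sqrt(noise_power), size=y.shape)

rng = np.random.default_rng(0)
# One second of a 440 Hz tone at 16 kHz stands in for a recorded signal.
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noisy = augment_waveform(clean, rng)
```

Applying such transforms with different random seeds multiplies the effective size of an application-specific dataset without new recordings.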
Please allow approximately 45 minutes for the presentation and Q&A session. We will be recording this webinar, so if you can't make it to the live broadcast, register and we will send you a link to watch it on demand.