How can I accelerate deep learning training using a GPU?
I've made a simple neural network. It classifies MNIST handwritten digits using fully connected layers:
lgraph_2 = [ ...
imageInputLayer([28 28 1])
And the training options for the network are:
miniBatchSize = 10;
valFrequency = 5;
options = trainingOptions('sgdm', ...
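
For context, a complete, runnable version of a setup like this might look like the sketch below; the hidden layers, the validation data, and every option value beyond those shown above are assumptions.

% Minimal end-to-end sketch (assumed layer sizes and option values)
[XTrain,YTrain] = digitTrain4DArrayData;   % 28x28x1 digit images shipped with Deep Learning Toolbox
[XVal,YVal]     = digitTest4DArrayData;

lgraph_2 = [ ...
    imageInputLayer([28 28 1])
    fullyConnectedLayer(100)
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

miniBatchSize = 10;
valFrequency  = 5;
options = trainingOptions('sgdm', ...
    'MiniBatchSize', miniBatchSize, ...
    'ValidationData', {XVal,YVal}, ...
    'ValidationFrequency', valFrequency, ...
    'ExecutionEnvironment', 'parallel', ...   % assumed from the first answer below, which mentions 'parallel'
    'Plots', 'training-progress');

net = trainNetwork(XTrain, YTrain, lgraph_2, options);
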
I expected that training would be much faster when using a GPU.
But when I train this network on my MacBook (single CPU), it takes about 1 hour for around 2500 iterations,
and when I train it on my desktop with an RTX 2080 Ti, it takes much longer.
MATLAB detects my GPU properly (I checked the GPU information using gpuDevice),
but I don't know how to accelerate the training process.
Thank you in advance.
Joss Knight on 2 Jun 2019
Your mini-batch size is far too small. You're not going to get any benefit of GPU over CPU with that little GPU utilisation. Increase it to 512 or 1024, or higher (MNIST is a toy network - you could probably train the whole thing in a single mini-batch).
Also, the ExecutionEnvironment option you're looking for is gpu or auto, not parallel. parallel may be slowing things down in your case, if you have a second supported graphics card.
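
In code, those two changes might look roughly like this (the exact batch size and the remaining options are illustrative):

miniBatchSize = 512;   % or 1024+; tiny batches leave the GPU mostly idle
options = trainingOptions('sgdm', ...
    'MiniBatchSize', miniBatchSize, ...
    'ExecutionEnvironment', 'gpu', ...   % or 'auto'; 'parallel' is meant for multi-GPU or cluster training
    'Plots', 'training-progress');
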
Shivam Sardana on 29 May 2019
Edited: KSSV on 27 May 2021
Assuming a CUDA® enabled NVIDIA® GPU with compute capability 3.0 or higher and Parallel Computing Toolbox™ are installed, consider changing 'ExecutionEnvironment' to 'gpu'. You can refer to the trainingOptions documentation to see if this helps.
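
As a rough sketch, one way to verify the GPU is supported and then request it explicitly (the assert threshold just mirrors the compute-capability requirement above):

% Check that MATLAB sees a supported GPU
gpu = gpuDevice;
fprintf('Using %s (compute capability %s)\n', gpu.Name, gpu.ComputeCapability);
assert(str2double(gpu.ComputeCapability) >= 3.0, ...
    'This GPU does not meet the minimum compute capability.');

% Request the GPU explicitly when training
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'gpu');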