I am currently trying to train a neural network using multiple GPUs on my machine. I have 4 Titan V GPUs and have been using all of them to train. Training on a small data set (~300 images of size 216x384x3) works fine when I use all 4 GPUs. However, the number of training images has recently increased to 1000, and I am looking to use all 1000 to train the network. To train the network, I am currently using the parameters below.
When I use all 1000 images to train, it seems as if all 4 of my GPUs are overloaded. When I run the training algorithm, my computer stays on, but my monitor keeps flashing. I suspect it's because the GPU cannot render the graphics on the screen while also being used for computation. I am currently rendering the graphics display on 1 of the 4 Titan V GPUs.
My solution was to restrict the computation to the other 3 GPUs, so that the GPU currently rendering the screen would not crash. Is this a fair assessment? To do so, I ran:
parobj = parpool('local', 3);
spmd, gpuDevice(2); end
However, when I do this, I get the warning below.
Warning: Unable to assign workers with indices 2 in the parallel pool their
own GPUs, so these workers will be unused. When opening the pool, specify a
number of workers equal to the number of available GPUs.
I had not received this warning when I used all 4 GPUs. What would be a solution that lets me use 3 GPUs instead of 4?
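For reference, what I am ultimately trying to achieve is to give each of the 3 pool workers its own GPU, leaving GPU 1 for the display. An untested sketch of that intent (it assumes GPU 1 drives the monitor and GPUs 2-4 are free, and uses labindex to map workers to devices):

parobj = parpool('local', 3);
spmd
    % labindex is 1, 2, or 3 on the three workers,
    % so this selects GPU 2, 3, or 4 respectively,
    % one distinct device per worker.
    gpuDevice(labindex + 1);
end

I am not sure whether this per-worker mapping is the right way to avoid the warning above, which is part of my question.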