I am trying to train a neural network using multiple GPUs on my machine. I have 4 Titan V GPUs and have been using all of them to train. Training on a small data set (~300 images of size 216x384x3) works fine with all 4 GPUs. However, the number of training images has recently increased to 1000, and I would like to use all 1000 to train the network. To train the network, I am currently using the parameter below.
When I use all 1000 images to train, it seems as if all 4 of my GPUs are overloaded. When I run the training algorithm, my computer stays on, but my monitor keeps flashing. I think it's because the GPU can't render the graphics on screen while also being used for computation. I am currently rendering the graphics display on 1 of the 4 Titan V GPUs.
My solution was to specify that only the other 3 GPUs be used for computation, so that the GPU driving the display would not crash. Is this a fair assessment?
To specify using only the GPUs indexed 2, 3, and 4, I followed the suggestion at this link: Selecting specific GPUs for parpool
parobj = parpool('local', 3); spmd, gpuDevice(2); end
However, when I do this, I get the warning below.
Warning: Unable to assign workers with indices 2 in the parallel pool their own GPUs, so these workers will be unused. When opening the pool, specify a number of workers equal to the number of available GPUs.
I had not received this warning when I used all 4 GPUs. What would be a solution for using 3 GPUs instead of 4?
That code selects device 2 on every worker. You want something like
parpool('local', 3); spmd, gpuDevice(labindex+1); end
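Spelled out, the idea is: open a pool of 3 workers, and on each worker use labindex (which runs 1..3) to pick a distinct device, skipping GPU 1 so it stays free for the display. A sketch of that pattern, with a verification print that is my addition rather than part of the original answer:

```matlab
% Open a pool of 3 workers -- one per compute GPU.
parobj = parpool('local', 3);

spmd
    % labindex is 1..3 on the workers, so labindex+1 selects
    % GPUs 2, 3 and 4, leaving GPU 1 to drive the display.
    gd = gpuDevice(labindex + 1);
    fprintf('Worker %d is using GPU %d (%s)\n', ...
        labindex, gd.Index, gd.Name);
end
```

This assumes the device indices reported by gpuDevice line up with the GPU that actually drives your monitor; gpuDeviceCount and the gpuDevice output let you check which index is which before committing.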
Another option is to reduce the load on worker 1, e.g.
trainingOptions(..., 'WorkerLoad', [0.2, 1, 1, 1]);
Or whatever number stops the graphics misbehaving.
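For context, here is how that option might sit in a full trainingOptions call. With 'multi-gpu' execution there is one worker per GPU, and WorkerLoad weights how much of each mini-batch each worker processes; giving worker 1 a fraction lightens the GPU that also drives the display. The solver and the other name-value pairs below are illustrative placeholders, not from the original post:

```matlab
% Weight worker 1 (the display GPU) at 0.2 of the load of the others.
opts = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'multi-gpu', ...
    'WorkerLoad', [0.2, 1, 1, 1], ...
    'MiniBatchSize', 64, ...   % illustrative value
    'MaxEpochs', 30);          % illustrative value
```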
Generally, any GPU driving graphics on Windows must run in WDDM mode, which means the OS imposes all sorts of limitations on it. You should look at getting a separate card for graphics.