I have a sparse 1,000,000-by-1,000,000 matrix system that I need to solve repeatedly in a loop (I use MLDIVIDE, i.e. the '\' operator). My CPU takes about 250 s to solve it once, and RAM usage climbs to 40 GB during the solve. This makes me wonder: is it possible to solve this system on a 4 GB GPU using the GPU MLDIVIDE? Would solving it on a GPU be faster, or does that not make sense? I have read that GPUs are good for highly parallel operations. My GPU is a GeForce GTX 1050 Ti with 4 GB of memory; my CPU is an i7-9700 at 3 GHz with 8 cores and 64 GB RAM.
The general advice is that sparse MLDIVIDE may be convenient, but it is usually slower than an iterative solver with an appropriate preconditioner: gmres, cgs, pcg, bicg, bicgstab, qmr, tfqmr, or lsqr.
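For example, here is a minimal sketch of the iterative approach, assuming your matrix A is symmetric positive definite (which pcg and ichol require); for a general nonsymmetric A you would pair ilu with gmres or bicgstab instead:

    % Incomplete Cholesky preconditioner; droptol trades setup cost
    % against preconditioner quality (1e-3 is just a starting point).
    L = ichol(A, struct('type','ict','droptol',1e-3));
    tol   = 1e-8;   % target relative residual
    maxit = 500;    % cap on iterations
    [x, flag, relres, iter] = pcg(A, b, tol, maxit, L, L');
    % flag == 0 means pcg converged to tol within maxit iterations.

The memory footprint here is the matrix plus a handful of length-1e6 vectors, far below what a direct factorization with its fill-in needs.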
On the CPU, if you're repeatedly solving systems with the same matrix (only the right-hand side changing), you might benefit from the recently-introduced decomposition object, which computes the factorization once and reuses it for every subsequent solve.
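A minimal sketch, assuming A stays fixed across iterations; nRuns and getRhs are placeholders for your own loop and data:

    dA = decomposition(A);   % matrix factorized once, up front
    for k = 1:nRuns
        b = getRhs(k);       % hypothetical: produce the k-th right-hand side
        x = dA \ b;          % each solve reuses the stored factors
    end

With plain A\b inside the loop, the expensive factorization would be repeated on every iteration; this way each pass pays only for the cheap triangular solves.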
On the GPU, it's hard to say without knowing the exact details whether it will be of benefit in this case. One caution: a direct factorization that drives CPU RAM to 40 GB will not fit in 4 GB of GPU memory, so if anything works on that card it is more likely to be an iterative solver, whose footprint is roughly the matrix plus a few vectors. The best way to find out is to get a Parallel Computing Toolbox trial licence so you can experiment.
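Here is a minimal sketch of such an experiment, assuming a Parallel Computing Toolbox licence and that A (as a sparse matrix) fits in the 4 GB of device memory; note that, as far as I know, factorization-based preconditioners such as ichol are not supported for gpuArray inputs, so this runs pcg unpreconditioned:

    Ag = gpuArray(A);            % copy the sparse matrix to the GPU
    bg = gpuArray(b);
    tic
    [xg, flag] = pcg(Ag, bg, 1e-8, 500);
    wait(gpuDevice); toc         % time the solve, including GPU sync
    x = gather(xg);              % bring the solution back to host memory

As a quick feasibility check before committing to anything, whos('A') on the CPU tells you how many bytes the matrix itself occupies; if that alone approaches 4 GB, the GPU route is a non-starter on this card.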