# Application of Particle Swarm Optimization to solve linear system of equations

21 views (last 30 days)
M.Shaarawy on 14 Feb 2018
Commented: Walter Roberson on 2 Mar 2018
Hi, I have a linear system of equations in matrix notation, Ax=b, where A is 27×27, b is a 27×1 vector, and I want to solve for x, a vector of 27 unknowns (i.e., 27×1). Can anyone help me use PSO to solve such a system? The condition number of A is high, so solving with x=inv(A)*b gives a bad solution.
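For reference, here is a minimal sketch (not the asker's code) of how a problem like this could be posed to PSO in MATLAB, assuming A and b already exist in the workspace and that the Global Optimization Toolbox's particleswarm is available. The objective is simply the residual norm of the linear system.

```matlab
% Hedged sketch: minimize the residual norm of A*x = b with particleswarm.
% Assumes A (27x27) and b (27x1) are already defined in the workspace.
fun = @(x) norm(A*x(:) - b);     % particleswarm passes x as a row vector
nvars = size(A, 2);              % 27 unknowns in this question
x = particleswarm(fun, nvars);   % stochastic: a different answer every run
```

As the answers below argue, this is rarely a good idea for linear systems, but it shows the mechanics.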

Matt J on 14 Feb 2018
If the condition number is high, then any solution will be a bad solution, but what about
x=pinv(A)*b
John D'Errico on 14 Feb 2018
PSO is just a silly thing to try on this. Flat out insane. As Matt says, use pinv.

John D'Errico on 14 Feb 2018
Edited: John D'Errico on 14 Feb 2018
Ok. I have some free time to answer this now.
Using PSO to solve a singular problem is a bit silly. Sorry, but it is. Yes, inv fails. But you should NEVER be using inv to solve that problem anyway. Inv is a poor choice in general; backslash is the preferred solution, thus
x = A\b;
When A is numerically singular however, backslash will also have problems. In that case, there really is no good solution method, although some are clearly better or worse than others. Any choice will have problems. Pinv is a good choice, SOME of the time. I say some of the time, because pinv is not perfect. While pinv will not actively fail in the way that backslash or inv will do, the solution is not unique. An example is a good way to see some of what happens.
A = magic(4)
A =
16 2 3 13
5 11 10 8
9 7 6 12
4 14 15 1
rank(A)
ans =
3
So A is singular. As you can see, we get near infinite garbage when we use inv on a random right hand side.
inv(A)*rand(4,1)
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 1.306145e-17.
ans =
1.3173e+14
3.952e+14
-3.952e+14
-1.3173e+14
We know that the sums of the rows of this 4x4 magic square are all 34. So x = [1;1;1;1] should solve A*x = b when b is a vector of 34s. Yet even when b is constructed from a known solution, there are issues.
b = repmat(34,4,1) + randn(4,1)*1e-16
b =
34
34
34
34
inv(A)*b
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 1.306145e-17.
ans =
1
6
-2
0
b = repmat(34,4,1) + randn(4,1)*1e-14
b =
34
34
34
34
inv(A)*b
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 1.306145e-17.
ans =
5
18
-14
-4
So, changing the right hand side by a tiny amount of random fuzz is sufficient to generate a completely different solution.
See that pinv is more stable here.
pinv(A)*b
ans =
1
1
1
1
But there was never a good reason to use PSO. If you insist on using an iterative algorithm, though, lsqr is not a bad choice.
lsqr(A,b)
lsqr converged at iteration 1 to a solution with relative residual 3e-16.
ans =
1
1
1
1
LSQR will generally arrive at the same solution that pinv would yield, when faced with a singular system.
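A small check of that minimum-norm claim, using the same magic(4) example: adding any null-space component to the pinv solution leaves the residual unchanged but increases the norm of the solution. The scaling factor 5 below is arbitrary.

```matlab
% Verify that pinv returns the minimum-norm solution of a singular system.
A  = magic(4);
b  = repmat(34, 4, 1);
x0 = pinv(A)*b;            % minimum-norm solution, [1;1;1;1]
n  = null(A);              % one-dimensional null space of A here
x1 = x0 + 5*n;             % 5 is arbitrary; x1 is another exact solution
norm(A*x0 - b)             % essentially zero
norm(A*x1 - b)             % also essentially zero: the same residual
[norm(x0), norm(x1)]       % but x0 has the strictly smaller norm
```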
So why is PSO a bad idea? PSO is an iterative tool that will not yield fast results, nor will it yield results with any high number of correct digits. As an iterative scheme based on stochastic methods, it will produce a different result every time you try it. On a problem with no unique solution, as in the case of a singular matrix, it will produce an effectively random result. As such, PSO is just a bad idea here, especially when you have two good alternatives that will be robust and consistent on singular problems.
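You can see that run-to-run variability for yourself with a hedged sketch like the following, again assuming the Global Optimization Toolbox is installed:

```matlab
% Two PSO runs on the same singular system generally give different answers.
A = magic(4);
b = repmat(34, 4, 1);
fun = @(x) norm(A*x(:) - b);
x1 = particleswarm(fun, 4);
x2 = particleswarm(fun, 4);
% x1 and x2 will generally differ from each other, and neither will match
% pinv(A)*b to any useful number of digits.
```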
Finally, I said before that PINV is not a perfect solution. Since I know somebody will ask me to clarify that statement, I might as well forestall the obvious request.
1. PINV will generally be slower than other methods. LSQR will usually produce the same solution on singular problems, but it is an iterative method, so it too may be slow, and there you may have convergence issues.
2. PINV produces a solution that has minimum norm. Very often, I would argue, on a singular problem you are best served by being told there is a serious issue, not by being given an answer that merely looks reasonable. If you do use backslash and are told the problem is singular, then you can always come back and use pinv, WHEN THAT IS A GOOD CHOICE. Understanding why there is a problem is more important, because most of the time when you try to solve such a singular system, there is a significant reason why you should fix the problem at its source before you ever try to solve the resulting system.
So just always throwing pinv at all problems is arguably a bad idea. Pinv issues no warning messages, but those warnings are important in this case!
3. Again, PINV produces a solution that has minimum norm. But minimum norm may not always be the best choice out of the infinitely many equally good solutions. In some cases, a solution with a maximum number of zeros is the one a user needs. So one argument is that knowing why you chose pinv is important, which comes down to understanding pinv and the linear algebra behind it.
4. PINV does not apply to sparse matrices, although LSQR will survive there.
A = sprand(5,5,.2)
A =
(4,2) 0.57466
(1,3) 0.90085
(5,3) 0.84518
(5,5) 0.73864
b = rand(5,1);
pinv(A)*b
Error using svd
SVD does not support sparse matrices. Use SVDS to compute a subset of the singular values and vectors of a sparse matrix.
Error in pinv (line 18)
[U,S,V] = svd(A,'econ');
x = lsqr(A,b)
lsqr converged at iteration 3 to a solution with relative residual 0.64.
x =
0
0.14527
0.65048
0
0.10315
So, as you see, on a clearly singular sparse matrix A, pinv fails outright. inv and backslash would also have been somewhat unhappy. But lsqr did produce a solution. Since A is singular, the solution is not a great one.
[A*x,b]
ans =
0.58599 0.58599
0 0.24673
0 0.66642
0.083483 0.083483
0.62596 0.62596
As I said before, no solution is perfect when you have a singular problem.
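As a hedged illustration of point 3 above, here is a small rank-deficient rectangular system of my own (not from the thread), where backslash returns a "basic" solution containing zero entries while pinv returns the minimum-norm one; both satisfy the equations equally well.

```matlab
% Basic solution (backslash) vs minimum-norm solution (pinv) on a
% rank-deficient rectangular system.
A = [1 2 3; 4 5 6; 7 8 9; 10 11 12];   % rank 2, so infinitely many solutions
b = sum(A, 2);                          % built so that x = [1;1;1] is exact
xb = A\b;                               % warns "rank deficient"; a basic
                                        % solution, typically with a zero entry
xp = pinv(A)*b;                         % minimum-norm solution
[norm(A*xb - b), norm(A*xp - b)]        % both residuals are essentially zero
```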

Walter Roberson on 1 Mar 2018
In the expression
norm(A*x-b,2)
you do not need to index x. However, it appears that A is likely a 2D array, so you would have to be sure that the number of columns of A is the same as the number of rows of x. The size of the result of A*x will be the number of rows of A by the number of columns of x. For the subtraction to work properly after that, b would either have to be that same size, or any dimension in which A*x does not agree in size with b would have to be 1.
It looks to me as if you are counting on x being a particular shape (row vector, column vector, 2D array). I would not make that assumption. I would probably code
norm(A*x(:) - b(:), 2)
which does assume that A is a 2D array but does not assume shape for x or for b.
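A quick check that the (:) form really is shape-agnostic, assuming A is a 2D array and numel(x) equals size(A,2); the sizes used here are illustrative only.

```matlab
% The (:) reshape makes the objective robust to x arriving as a row vector.
A = rand(9, 27);
b = rand(9, 1);
x = rand(1, 27);            % a row vector, as particleswarm would supply
norm(A*x(:) - b(:), 2)      % works; norm(A*x - b, 2) would error for this x
```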
M.Shaarawy on 2 Mar 2018
The size of A is 9×27, b is a 9×1 vector, and I want to solve for x, which must be 27×1. What can I do?
Walter Roberson on 2 Mar 2018
Then try with
norm(A*x(:) - b(:), 2)
as I indicated.