# Thread Subject: How to improve the bad condition number of Xb=A(X = A/B)

Subject: How to improve the bad condition number of Xb=A(X = A/B)
From: Ala
Date: 16 Dec, 2009 02:15:22
Message: 1 of 14

A lot of techniques have been developed to improve the bad condition number of the linear equations AX=B, i.e., X = A\B, or X = B*inv(A). Suppose X and B are 8x1 and A is 8x8.

One useful technique is multiplying both sides of the equations by a diagonal matrix D whose elements are the maximum of each row of A: DAX=DB, and thus X = DB*inv(DA). After doing that for a badly conditioned A in computing AX=B, the condition number got much lower and no warning of singularity was displayed in the MATLAB command window.

While for XA=B, i.e., X = B/A or X = B*inv(A), here X and B are 1x8, A is the same A, and B is the transpose of the B in the above case. When the similar technique is applied, i.e., XAD=BD, X = BD*inv(AD), with D the same D as in the above case, it failed: this time no improvement is obtained and the warning is still displayed in the command window with a condition number (cond = 1e-17).

I wonder why, and how could this situation be improved? It's a little urgent and thanks a lot for your advice!!
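For reference, the row-scaling trick described above might look like the following sketch in MATLAB (`A` and `B` stand for the poster's 8x8 matrix and 8x1 right-hand side; note that, as later replies point out, one divides by the row maxima, and the sketch assumes no row of A is entirely zero):

```matlab
% Row equilibration sketch for A*X = B.
% Each row of A is divided by its largest absolute entry, so every
% row of D*A has maximum magnitude 1.
D = diag(1 ./ max(abs(A), [], 2));  % diagonal row-scaling matrix
cond(A)                             % large for a badly scaled A
cond(D*A)                           % typically much smaller after scaling
X = (D*A) \ (D*B);                  % solve the equilibrated system
```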
Subject: How to improve the bad condition number of Xb=A(X = A/B)
From: Bruno Luong
Date: 16 Dec, 2009 08:03:19
Message: 2 of 14

"Ala " wrote in message ...
> A lot of techniques have been developed to improve the bad condition number of the linear equations AX=B, i.e., X = A\B, or X = B*inv(A). Suppose X and B are 8x1 and A is 8x8.
-----SNIP
> While for XA=B, i.e., X = B/A or X = B*inv(A), here X and B are 1x8, A is the same A and B is the transpose of the B in the above case.
> When the similar technique is applied, i.e., XAD=BD, X = BD*inv(AD). D is the same as the D in the above case.

You can still do left preconditioning:

XA = B  <=>  X*inv(D)*(D*A) = B

So solve in 2 steps:

Y = B/(D*A)
X = Y*D

Instead of using max-row, for D you can try using the row 2-norm.

Preconditioning is an art, and often it is needed because the parameters are not properly scaled.

Bruno
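Bruno's two-step recipe can be sketched as follows (D is assumed to be the same diagonal row scaling as in the first post; substituting Y = X*inv(D) turns X*A = B into Y*(D*A) = B):

```matlab
% Left preconditioning of the row system X*A = B in two steps.
D = diag(1 ./ max(abs(A), [], 2));  % row scaling (assumed, as above)
Y = B / (D*A);   % solve the better-conditioned system Y*(D*A) = B
X = Y * D;       % undo the substitution: X = Y*D

% Row 2-norm variant that Bruno suggests for D:
D2 = diag(1 ./ sqrt(sum(abs(A).^2, 2)));
```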
Subject: How to improve the bad condition number of Xb=A(X = A/B)
From: Ala
Date: 16 Dec, 2009 08:37:03
Message: 3 of 14

"Bruno Luong" wrote in message ...
> You can still do left preconditioning:
>
> XA = B  <=>  X*inv(D)*(D*A) = B
>
> So solve in 2 steps:
>
> Y = B/(D*A)
> X = Y*D
>
> Instead of using max-row, for D you can try using the row 2-norm.
>
> Preconditioning is an art, and often it is needed because the parameters are not properly scaled.

Got it at last, using the same method as you suggested. That is just what I arrived at. Thank you very much!
Subject: How to improve the bad condition number of Xb=A(X = A/B)
From: Greg Heath
Date: 16 Dec, 2009 11:35:25
Message: 4 of 14

On Dec 15, 9:15 pm, "Ala " wrote:
> A lot of techniques have been developed to improve the bad condition number of the linear equations AX=B, i.e., X = A\B, or X = B*inv(A).

No. X = inv(A)*B.

> Suppose X and B are 8x1 and A is 8x8.
> One useful technique is multiplying both sides of the equations by a diagonal matrix D whose elements are the maximum of each row of A.

That doesn't make sense. What if the maximum is huge? What if the minimum is negative with a huge absolute value but the max is positive with a small absolute value?

It is better to decompose A into the product of 3 matrices

A = C*D*E

where C and E are full-rank diagonal with elements that are exact powers of two (to avoid floating point errors) and D has elements in the interval [-0.5, +0.5] or [-1, +1]. Then

A*X = C*D*E*X = B

and

X = E\D\C\B

or

X = E\pinv(D)*C\B

> DAX=DB, and thus X = DB*inv(DA),

No. X = inv(DA)*DB if A is nonsingular. What do you do if A is rectangular or singular?

> After doing that for a badly conditioned A in computing AX=B, the condition number got much lower and no warning of singularity was displayed in the MATLAB command window.

However, what is the size of your error??

> While for XA=B, i.e., X = B/A or X = B*inv(A), here X and B are 1x8, A is the same A and B is the transpose of the B in the above case.
> When the similar technique is applied, i.e., XAD=BD, X = BD*inv(AD), it failed: no improvement is obtained and the warning is still displayed in the command window with a condition number (cond = 1e-17).

In MATLAB cond >= 1, and 1e+17 is bad!

> I wonder why, and how could this situation be improved? It's a little urgent and thanks a lot for your advice!!

Why don't you post A and B?

Hope this helps.

Greg
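Greg does not spell out the decomposition algorithm in this post; a minimal two-sided scaling in that spirit might look like the sketch below (the variable names and the ordering of the row/column passes are assumptions, not the book algorithm he refers to, and the sketch assumes no row or column of A is entirely zero):

```matlab
% Two-sided scaling A = C*D*E with exact power-of-two diagonal factors,
% so that forming D introduces no extra rounding error.
r = 2.^ceil(log2(max(abs(A), [], 2)));   % row scales, powers of two
C = diag(r);
A1 = diag(1./r) * A;                     % rows now have magnitude <= 1
c = 2.^ceil(log2(max(abs(A1), [], 1)));  % column scales, powers of two
E = diag(c);
D = A1 * diag(1./c);                     % entries of D now lie in [-1, +1]
X = E \ (D \ (C \ B));                   % solve C*D*E*X = B in three steps
```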
Subject: How to improve the bad condition number of Xb=A(X = A/B)
From: Bruno Luong
Date: 16 Dec, 2009 12:03:05
Message: 5 of 14

Greg Heath wrote in message <1f267612-e204-4e1d-a296-586127bcaf37@t19g2000vbc.googlegroups.com>...
> A = C*D*E
>
> where C and E are full-rank diagonal with elements that are exact powers of two (to avoid floating point errors)

I disagree. IMO there is no point worrying about floating point errors of the left-hand preconditioners with cumbersome power-of-two things; the effect is negligible. The main focus should be on reducing the condition number of D as much as possible; the floating point errors are right there.

Bruno
Subject: How to improve the bad condition number of Xb=A(X = A/B)
From: Ala
Date: 16 Dec, 2009 14:50:17
Message: 6 of 14

Greg Heath wrote in message <1f267612-e204-4e1d-a296-586127bcaf37@t19g2000vbc.googlegroups.com>...
-----SNIP
> In MATLAB cond >= 1, and 1e+17 is bad!
>
> Why don't you post A and B?

Hi Greg,

Yes, for AX=B, X = inv(A)*B, not X = B*inv(A); sorry, a silly mistake.

You said "In MATLAB cond >= 1 and 1e+17 is bad!" I wonder what you mean by bad; does that mean the result is not correct? I thought that if no warning appeared in the command window, that was good.

For the XA = B case, I actually just want to obtain the first element of X, i.e., X(1), so I did the following. Still let D be the diagonal matrix whose elements are the maximum absolute values of each row of A. Then X(1) = B*inv(A)*O = B*inv(DA)*DO, that is, let A = DA and O = DO, where O is a column vector whose elements are all zeros except that the first element is 1. I got cond(A) = 1e6; as you mentioned, it is still badly conditioned.

So how could I decompose A into the product of 3? What is the principle behind this? Thank you very much!!!!!
Subject: How to improve the bad condition number of Xb=A(X = A/B)
From: Ala
Date: 16 Dec, 2009 15:08:07
Message: 7 of 14

Greg Heath wrote in message <1f267612-e204-4e1d-a296-586127bcaf37@t19g2000vbc.googlegroups.com>...
-----SNIP

Hi Greg,

One important question is why, in MATLAB, cond >= 1 and 1e+17 is bad. Condition should be a relative quantity, but relative to what? I thought the conditioning was good if there was no warning in the MATLAB command window. It seems that, according to your logic, even if MATLAB does not warn you of a singular case, the result could still be incorrect; is that right?

I actually want to get X(1) at last, so I right-multiply by a column vector O whose elements are all zeros except the first one, i.e., O = [1,0,0,0,0,0,0,0]':

XA = B
X(1) = X*O = B*inv(A)*O

Thus let A = DA and O = DO, where D is the diagonal matrix whose elements are the max absolute value of each row. In this way, the condition number is reduced from order 1e16 to 1e6, and no warning appears in the command window this time. But in terms of what you said, is the new matrix A still badly conditioned until cond(A) < 1?

Also, how to decompose A = C*D*E is still a trouble for me; could you offer some details? Thanks very much!!!

Yanni
Subject: How to improve the bad condition number of Xb=A(X = A/B)
From: Ala
Date: 16 Dec, 2009 15:19:21
Message: 8 of 14

Attached: A (8x8, one row per line; you can paste it into Excel) and B (8x1), before left-multiplying by D.

A =

```
1.81422292065333e+17 + 181430699009315i  181308951835456 - 1.81300550532866e+17i  -181318555910554 + 1.81276533685491e+17i  -1.81398291336085e+17 - 181440296636121i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i
-181306551447397 + 1.81306551447397e+17i  1.81428297009420e+17 + 181428297009420i  -1.81398291530844e+17 - 181440296441389i  181318555715952 - 1.81276533880120e+17i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i
2.49974244694115e+16 + 24998582827322.3i  24099099900979.4 - 2.40979832223579e+16i  -24099761668242.1 + 2.40963285623883e+16i  -2.49958293483261e+16 - 24999220778231.4i  -288933643436.265 - 109407977789.008i  105216112541.010 - 277565497728.401i  100054088162.304 - 173654910751.698i  190073304693.864 + 104076512896.475i
-24098780847832.7 + 2.40987808478327e+16i  2.49982518657284e+16 + 24998251865728.3i  -2.49958293751630e+16 - 24999220751395.7i  24099761642372.2 - 2.40963285882595e+16i  103983345417.190 - 338126007526.117i  351965559109.397 + 108102490897.231i  195829057974.483 + 102658501007.529i  -98858681322.6034 + 179005367172.056i
-725.231055584781 + 0.00000000000000i  0.00000000000000 + 752.299823913576i  6.72059419675370e-05 - 752.261778825269i  725.190860243949 + 6.97146975006281e-05i  677.687115826019 + 4.67095494105880i  -4.61091525110492 + 705.349591109953i  -64.4147250082597 + 512.095906766742i  -472.204283188339 - 69.8564544766706i
-0.00000000000000 - 752.309783903985i  -725.240657201222 + 0.00000000000000i  725.157256371334 + 0.000103316118279369i  -0.000102061919485487 + 752.226920541230i  -3.07707059761060 + 721.190921529997i  -692.907738399614 - 3.20267079861044i  -373.727194984317 - 88.0000237384249i  86.3428886826285 - 406.068484273400i
0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  80250791447.8213 + 29997000075.0000i  -87682157.1740737 + 38620570690.3478i  0.00000000000000 + 39487561222.1866i  -78106568141.3300 - 30820493773.6253i
0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  0.00000000000000 + 0.00000000000000i  -97728007298.1423 - 29560412562.4455i  -80250791447.8213 - 29997000075.0000i  0.00000000000000 + 0.00000000000000i
```

B =

```
0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i
0.00000000000000 + 0.00000000000000i
0.444077934177166 - 195.599011265932i
-0.00000000000000 - 199.990000000000i
0.00000000000000 + 0.00000000000000i
```

Thanks
Yanni
Subject: How to improve the bad condition number of Xb
From: Greg Heath
Date: 16 Dec, 2009 22:29:30
Message: 9 of 14

On Dec 16, 7:03 am, "Bruno Luong" wrote:
> Greg Heath wrote in message <1f267612-e204-4e1d-a296-586127bca...@t19g2000vbc.googlegroups.com>...
> > A = C*D*E
> >
> > where C and E are full-rank diagonal with elements that are exact powers of two (to avoid floating point errors)
>
> I disagree. IMO there is no point worrying about floating point errors of the left-hand preconditioners with cumbersome power-of-two things; the effect is negligible. The main focus should be on reducing the condition number of D as much as possible; the floating point errors are right there.

The algorithm was from a numerical analysis paperback I used with my Apple II in the early 1980s. The point was that power-of-two arithmetic is error free... perfect for fiddling with ill-conditioned matrices on low-precision machines.

Hope this helps.

Greg
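Greg's error-free claim is easy to check at the command line: multiplying a double by a power of two changes only the exponent field, not the mantissa, so (barring overflow or underflow) no rounding occurs. A quick illustration:

```matlab
x = 0.1;                       % not exactly representable in binary
isequal((x * 2^37) / 2^37, x)  % true: power-of-two round trip is exact
isequal((x * 3) / 3, x)        % a general factor need not round-trip exactly
```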
Subject: How to improve the bad condition number of Xb
From: Greg Heath
Date: 16 Dec, 2009 23:03:07
Message: 10 of 14

On Dec 16, 9:50 am, "Ala " wrote:
> Greg Heath wrote in message <1f267612-e204-4e1d-a296-586127bca...@t19g2000vbc.googlegroups.com>...
-----SNIP
> You said "In MATLAB cond >= 1 and 1e+17 is bad!"
>
> I wonder what you mean by bad; does that mean the result is not correct?

I was responding to your implication that cond = 1e-17 was bad!

> I thought that if no warning appeared in the command window, that was good.

The only thing that is good is a low error. The relative error inequalities, which are something like

||dX||/||X|| <= cond(A) * ||dB||/||B||

and

||dX||/||X+dX|| <= cond(A) * ||dA||/||A||

are just upper bounds and do not guarantee disaster.

The log of the condition number yields an upper bound on the number of relative precision digits lost in the answer. Therefore, when the normal equation A'*A*X = A'*B is used, the condition number is squared and you can get into real trouble.

> For the XA = B case, I actually just want to obtain the first element of X, i.e., X(1), so I did the following. Still let D be the diagonal matrix whose elements are the maximum absolute values of each row of A. Then X(1) = B*inv(A)*O = B*inv(DA)*DO, that is, let A = DA and O = DO, where O is a column vector whose elements are all zeros except that the first element is 1.
> I got cond(A) = 1e6; as you mentioned, it is still badly conditioned.
>
> So how could I decompose A into the product of 3?

I don't remember the algorithm, but it is similar to complete pivoting. The maximum elements of each row and column tend to end up on the diagonal, divided by the next highest power of 2.

> What is the principle behind this?

1. Pre- and post-multiplying by powers of 2 are error free.
2. Diagonally dominant matrices tend to have lower condition numbers.

Hope this helps.

Greg
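The normal-equations remark can be seen on a small example (the matrix below is an illustrative choice, not the matrix from this thread):

```matlab
% cond(A'*A) is the square of cond(A), so solving via the normal
% equations doubles the number of digits at risk.
A = [1 1; 1 1 + 1e-6];
c1 = cond(A)       % roughly 4e6
c2 = cond(A'*A)    % roughly 1.6e13, i.e., c1^2
% log10 of the condition number bounds the decimal digits lost.
```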
Subject: How to improve the bad condition number of Xb
From: Greg Heath
Date: 17 Dec, 2009 12:25:56
Message: 11 of 14

On Dec 16, 10:19 am, "Ala " wrote:
> Attached:
> A =
-----SNIP
> B =
-----SNIP

```matlab
format long g, B = B', clc

condA1 = cond(A)           % 3.35522792668804e+016
svdA = svd(A)
maxsvdA = max(svdA)        % 5.17615678896237e+017
minsvdA = min(svdA)        % 15.427139085814
condA2 = maxsvdA/minsvdA   % 3.35522792668804e+016 CHECK!
normA = norm(A)            % 5.17615678896237e+017

invA = inv(A);
% MATLAB Warning: Matrix is close to singular or badly scaled.
% Results may be inaccurate. RCOND = 1.450640e-017.

% GEH Notes:
% 1. Rcond is an order of magnitude estimate for 1/condA
% 2. When condA ~ 1 we expect that pinvA ~ invA and using
%    invA does not result in a significant loss of precision
% 3. Whereas when 1 << condA < inf, we rely on pinvA

norminvA = norm(invA)      % 0.0648208258788514
condA3 = normA*norminvA    % 3.35522757938965e+016 CHECK!

pinvA = pinv(A);
normpinvA = norm(pinvA)    % 3.3839787735228e-011
condA4 = normA*normpinvA   % 17516004.7022746 NOTE!
norm(pinvA-invA)           % 0.0648208258788514

% GEH Notes:
% 4. invA is obviously very different from pinvA
% 5. norm(pinvA) << norm(invA) ==> condA4 << condA
% 6. condA4 is not a real condition number. I have never
%    seen it defined or used before. However, it does
%    interest me and seems to indicate the order of
%    magnitude that could occur with good scaling

X1 = invA*B
X2 = A\B                   % MATLAB Warning RCOND = 1.450640e-017.
X3 = pinvA*B
normX1 = norm(X1)          % 12.332491280308
normX2 = norm(X2)          % 12.332491280308
normX3 = norm(X3)          % 5.89130900316705e-009

% GEH Notes:
% 7. X3 is obviously the Minimum Norm Solution

E1 = A*X1-B; E2 = A*X2-B; E3 = A*X3-B;
abserr = abs([E1 E2 E3])
% abserr =
% 5214.11599088534      84.276516598635       1.77356372413934e-7
% 5516.39864970798      23.5                  1.08791529682407e-7
% 730.980136886004      6.70664818857902      2.78991792266640e-7
% 667.927777569865      68.5199920502501      2.40332224267053e-7
% 2.29918103111507e-11  3.06111178766429e-13  3.77796546936298e-7
% 2.13680319214306e-11  1.30757059109645e-12  195.599515428662
% 1.04842427979625e-05  1.26941753719146e-05  3.45838678226443e-7
% 3.71932983398438e-05  1.01376970412585e-05  2.38169417852474e-7

% 2-norm
P1 = norm(E1)              % 7654.94082965228
P2 = norm(E2)              % 111.331710175008
P3 = norm(E3)              % 195.599515428662
% 1-norm
P1 = norm(E1,1)            % 12129.4226027268
P2 = norm(E2,1)            % 183.003179669338
P3 = norm(E3,1)            % 195.599515428662
% inf-norm
P1 = norm(E1,inf)          % 5516.39864970798
P2 = norm(E2,inf)          % 84.27651659863
P3 = norm(E3,inf)          % 195.599515428662

% GEH Notes:
% 8. P3 is obviously the same for all norm conventions
% 9. P3 > P2 for all norm conventions

% Scaling A = C*D
absA = abs(A)
maxabsA2 = max(absA,[],2)
ceillog2maxabsA2 = ceil(log2(maxabsA2))
invC = 2^diag(-ceillog2maxabsA2)
D = invC*A
condD = cond(D)            % 81880987.2531699

% GEH Notes:
% 10. condD/condA4 = 4.67463834618273
```

Hope this helps.

Greg
Subject: How to improve the bad condition number of Xb
From: Bruno Luong
Date: 17 Dec, 2009 20:23:06
Message: 12 of 14

Greg Heath wrote in message <236f01ea-e168-4ab5-92af-b627d5cd7879@j14g2000yqm.googlegroups.com>...
> absA = abs(A)
> maxabsA2 = max(absA,[],2)
> ceillog2maxabsA2 = ceil(log2(maxabsA2))
> invC = 2^diag(-ceillog2maxabsA2)
> D = invC*A
> condD = cond(D)            % 81880987.2531699

If I decide to bypass the power-of-two thing:

```matlab
absA = abs(A)
maxabsA2 = max(absA,[],2)
invE = diag(1./maxabsA2)
F = invE*A
cond(F)            % 74509655.3114937
cond(F) / cond(D)  % 0.909975045689342
```

So I reduce the condition number further by a factor of 0.9, and thus by about the same amount the relative error of the final solution. True, 0.9 is not much, but it is still much bigger than the EPS(1) you could save by using power-of-two rounding (this can be shown simply by looking at the first-order expansion of the matrix-vector product with errors on both operands).

I'm still not convinced this power-of-two thing brings any advantage at all; on the contrary, it just degrades the preconditioner. Especially since the raw matrix A is certainly already affected by round-off errors.

You could run some Monte Carlo simulation to confirm numerically, if you like, whether it is better to use power-of-two scaling or not.

PS: Normalizing by the row 2-norm rather than the max reduces the condition number even further, by about 5%:

```matlab
G = sqrt(sum(abs(A).^2,2));
H = diag(1./G)*A;
cond(H)/cond(D)    % 0.843685660745875
```

Bruno
Subject: How to improve the bad condition number of Xb
From: Greg Heath
Date: 17 Dec, 2009 23:13:00
Message: 13 of 14

On Dec 17, 3:23 pm, "Bruno Luong" wrote:
> Greg Heath wrote in message <236f01ea-e168-4ab5-92af-b627d5cd7...@j14g2000yqm.googlegroups.com>...
-----SNIP
> I'm still not convinced this power-of-two thing brings any advantage at all; on the contrary, it just degrades the preconditioner. Especially since the raw matrix A is certainly already affected by round-off errors.
>
> You could run some Monte Carlo simulation to confirm numerically, if you like, whether it is better to use power-of-two scaling or not.

It will probably be problem dependent and useful only for limited-precision calculations (e.g., Apple II single precision).

> PS: Normalizing by the row 2-norm rather than the max reduces the condition number even further, by about 5%:
>
> G = sqrt(sum(abs(A).^2,2));
> H = diag(1./G)*A;
> cond(H)/cond(D)    % 0.843685660745875

Thanks very much for bringing this up. I was concerned by the original statement "One useful technique is multiplying both sides of the equations by a diagonal matrix D whose elements are the maximum of each row of A," which, of course, should have replaced "multiplying" with "dividing".

The alternate algorithm that I offered up was from a very elementary paperback I used to teach myself numerical analysis on my Apple II 30 years ago (I wish I could remember the reference). The presentation started with how to invert via Gaussian elimination triangularization, followed by "improvements" like extension of GE to diagonalization, partial pivoting, complete pivoting, scaling with powers of two, double scaling with powers of two, and improving precision by recursively using residuals. QR and SVD were beyond the scope of the book.

Perhaps when inverting using GE on a limited-precision machine the power-of-two scaling was helpful, perhaps not. Later, it became evident that if GE with partial pivoting had precision problems, forget about "improving" it and use the pseudoinverse. Now, thanks to MATLAB, I use QR and, for high condition numbers, PINV.

Now to some interesting results. While comparing the INV, QR and PINV approaches on the current problem, I found:

1. The ranking of the residual norms (1, 2 and inf) for INV, QR and PINV was

   Nqr < Npinv < Ninv

   i.e., the lowest residual resulted from QR and not PINV.

2. When I scaled with invC,
   a. the norm rankings did not change;
   b. Nscaled > Nunscaled, i.e., although COND(D) << COND(A), scaling INCREASED the residuals.

Does this make sense to you? Some sort of bias/variance tradeoff? Does this happen with your preconditioners?

```matlab
% Scaling A = C*D
absA = abs(A)
maxabsA2 = max(absA,[],2)
ceillog2maxabsA2 = ceil(log2(maxabsA2))
invC = 2^diag(-ceillog2maxabsA2)
D = invC*A
B1 = invC*B
condD = cond(D)            % 81880987.2531699

% GEH Notes:
% 10. condD/condA4 = 4.67463834618273
% 11. condD/condA1 = 2.44040014694307e-9

maxabsD = max(abs(D),[],2)
minabsD = min(abs(D),[],2)
maxabsB1 = max(abs(B1))
minabsB1 = min(abs(B1))

X4 = inv(D)*B1
X5 = D\B1
X6 = pinv(D)*B1
normX4 = norm(X4)          % 12.3324912750196
normX5 = norm(X5)          % 12.3324912745073
normX6 = norm(X6)          % 12.3324912775002

% GEH Notes:
% 12. X6 is supposed to be the minimum norm solution.
%     Therefore the last 6 digits should be ignored

E4 = A*X4-B; E5 = A*X5-B; E6 = A*X6-B;
abserr = abs([E4 E5 E6])
% abserr =
% 170494273.422333     169.906454041628      423.101329886308
% 170539647.622068     268.134411256743      256.000122070283
% 22939030.0476929     29.2034228398615      75.397654710675
% 23220501.3943801     19.5473076905319      9.46993809664674
% 6.9879227572942e-7   1.01684598917008e-12  1.73465588428225e-12
% 6.90373672523711e-7  4.05486294907172e-13  1.20373901309418e-12
% 3.97101622981777e-4  1.98487463590725e-05  6.87316566550448e-05
% 2.12872412541255e-4  3.32694311952828e-05  2.12446874713378e-04

% 2-norm
P4 = norm(E4)              % 243346377.547322
P5 = norm(E5)              % 319.373140336681
P6 = norm(E6)              % 500.325178174192
% 1-norm
P4 = norm(E4,1)            % 387193452.487085
P5 = norm(E5,1)            % 486.791648946943
P6 = norm(E6,1)            % 763.969325942448
% inf-norm
P4 = norm(E4,inf)          % 170539647.622068
P5 = norm(E5,inf)          % 268.134411256743
P6 = norm(E6,inf)          % 423.101329886308

% GEH Notes:
% 13. Similar to the unscaled case, P6 > P5 for
%     all norm conventions
% 14. [P4 P5 P6] > [P1 P2 P3] NOTE!
% 15. I am puzzled by both of these results
```

Hope this helps.

Greg
Subject: How to improve the bad condition number of Xb
From: Bruno Luong
Date: 18 Dec, 2009 12:43:02
Message: 14 of 14

Greg Heath wrote in message <6451912b-b151-4103-b2cd-66ac88fb1c30@l2g2000vbg.googlegroups.com>...
> Now to some interesting results. While comparing the INV, QR and PINV approaches on the current problem, I found:
>
> 1. The ranking of the residual norms (1, 2 and inf) for INV, QR and PINV was
>
>    Nqr < Npinv < Ninv
>
>    i.e., the lowest residual resulted from QR and not PINV.
>
> 2. When I scaled with invC,
>    a. the norm rankings did not change;
>    b. Nscaled > Nunscaled, i.e., although COND(D) << COND(A), scaling INCREASED the residuals.
>
> Does this make sense to you? Some sort of bias/variance tradeoff? Does this happen with your preconditioners?

When we apply a *LEFT* preconditioner, we change the metric of the RHS. PINV and QR minimize the least-squares norm in the modified metric, which is no longer comparable to the original one. This shouldn't happen with a *RIGHT* preconditioner.

Bruno
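Bruno's distinction can be sketched for the column system A*X = B (the diagonal M below is an illustrative choice, not taken from the thread, and it assumes no column of A is entirely zero):

```matlab
% Right preconditioning keeps the residual in the original metric:
% solve (A*M)*Y = B, then recover X = M*Y. The least-squares problem
% minimized by QR/PINV is still measured against the unscaled B.
M = diag(1 ./ sqrt(sum(abs(A).^2, 1)));  % column 2-norm scaling
Y = (A*M) \ B;
X = M * Y;
norm(A*X - B)    % residual in the unchanged metric
```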
