Thread Subject:
Does change in precision help circumvent ill-conditioned matrices?

Subject: Does change in precision help circumvent ill-conditioned matrices?

From: tejas Gotkhindi

Date: 3 Jan, 2011 14:52:07

Message: 1 of 8

Sir,
I have a system of linear equations whose coefficient matrix is ill-conditioned (RCOND = 1.732e-35, and MATLAB issues a warning). Can changing from the default precision (double) to a higher precision help in getting the solution accurately?
If so, kindly guide me.
Thank you for your attention and for your help in advance.
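
A minimal sketch of the kind of situation being described (the poster's actual matrix is not shown, so a Hilbert matrix stands in here for an ill-conditioned coefficient matrix):

% Sketch only: hilb(15) stands in for the poster's ill-conditioned matrix.
A = hilb(15);       % Hilbert matrices are notoriously ill-conditioned
b = ones(15,1);
rcond(A)            % far smaller than eps
x = A \ b;          % typically triggers the "close to singular or badly
                    % scaled" warning, which reports the RCOND estimate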

Subject: Does change in precision help circumvent ill-conditioned matrices?

From: Greg Heath

Date: 3 Jan, 2011 15:22:31

Message: 2 of 8

On Jan 3, 9:52 am, "tejas Gotkhindi" <tejasprakas...@gmail.com> wrote:
> Sir,
> I have a system of linear equations whose coefficient matrix is ill-conditioned (RCOND = 1.732e-35, and MATLAB issues a warning). Can changing from the default precision (double) to a higher precision help in getting the solution accurately?
> If so, kindly guide me.
> Thank you for your attention and for your help in advance.

No.

Example:

% Matrix with random imprecisions:

impreciseA = [1+eps*rand 1-eps*rand;...
              1-eps*rand 1+eps*rand]

condimpA = cond(impreciseA)   % huge, on the order of 1/eps: the two rows are nearly identical

Now, repeat with the precise A.
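
(A sketch of that second step, for anyone following along: the "precise" matrix is exactly singular, so its condition number is essentially infinite, while the imprecise one above is merely astronomically large.)

% The precise matrix Greg is alluding to: rank 1, exactly singular.
preciseA = [1 1; 1 1]

condpreA = cond(preciseA)   % Inf, or on the order of 1/eps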

Hope this helps.

Greg

Subject: Does change in precision help circumvent ill-conditioned matrices?

From: John D'Errico

Date: 3 Jan, 2011 15:24:04

Message: 3 of 8

"tejas Gotkhindi" wrote in message <ifsnqn$2f2$1@fred.mathworks.com>...
> Sir,
> I have a system of linear equations whose coefficient matrix is ill-conditioned (RCOND = 1.732e-35, and MATLAB issues a warning). Can changing from the default precision (double) to a higher precision help in getting the solution accurately?
> If so, kindly guide me.

No, you do not have a higher-precision alternative available
in MATLAB.

Secondly, this is the fool's way to solve your problem, the
lazy way out. Sorry, but it is. Rather than understanding why
a problem becomes ill-conditioned, and perhaps scaling or
centering their data, too often people just think, "Why not
just use higher precision?" Just throw a bigger computer at
it.

What happens is that the ill-conditioned matrix will magnify any
noise in your data unimaginably. You will succeed in getting
a set of numbers that are complete and total crapola.

Do you actually have 40 or 50 significant, correct digits in
the data that you used to generate this problem? If not, then
you will get complete crap out the other end, regardless of how
many digits of precision you use to solve the problem, and
regardless of whether you get a singularity warning or
not.

John
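
A hedged sketch of the magnification John describes, using a Hilbert matrix as a stand-in for an ill-conditioned coefficient matrix: noise at roughly the tenth significant digit of b is enough to destroy the computed solution entirely.

% Sketch: an ill-conditioned matrix magnifies tiny noise in the data.
A  = hilb(12);                 % cond(A) is roughly 1e16 (MATLAB may warn it is close to singular)
b  = A*ones(12,1);             % exact solution is a vector of ones
bn = b + 1e-10*randn(12,1);    % "measurement noise" at about the 10th digit
xn = A \ bn;
relerr = norm(xn - ones(12,1))/norm(ones(12,1))   % typically enormous, nowhere near 1e-10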

Subject: Does change in precision help circumvent ill-conditioned matrices?

From: Mark Shore

Date: 3 Jan, 2011 16:00:19

Message: 4 of 8

"tejas Gotkhindi" wrote in message <ifsnqn$2f2$1@fred.mathworks.com>...
> Sir,
> I have a system of linear equations whose coefficient matrix is ill-conditioned (RCOND = 1.732e-35, and MATLAB issues a warning). Can changing from the default precision (double) to a higher precision help in getting the solution accurately?
> If so, kindly guide me.
> Thank you for your attention and for your help in advance.

To follow up on Greg's and John's comments, there are a number of applied texts treating numerical stability, including Numerical Methods that Work (Acton), Numerical Methods for Scientists and Engineers (Hamming), and the more recent Accuracy and Stability of Numerical Algorithms (Higham).

For some reason this field seems to be neglected by far too many people.

Subject: Does change in precision help circumvent ill-conditioned matrices?

From: Derek O'Connor

Date: 3 Jan, 2011 17:22:04

Message: 5 of 8

"Mark Shore" wrote in message <ifsrqj$rkk$1@fred.mathworks.com>...
> "tejas Gotkhindi" wrote in message <ifsnqn$2f2$1@fred.mathworks.com>...
> > Sir,
> > I have a system of linear equations whose coefficient matrix is ill-conditioned (RCOND = 1.732e-35, and MATLAB issues a warning). Can changing from the default precision (double) to a higher precision help in getting the solution accurately?
> > If so, kindly guide me.
> > Thank you for your attention and for your help in advance.
>
> To follow up on Greg's and John's comments, there are a number of applied texts treating numerical stability, including Numerical Methods that Work (Acton), Numerical Methods for Scientists and Engineers (Hamming), and the more recent Accuracy and Stability of Numerical Algorithms (Higham).
>
> For some reason this field seems to be neglected by far too many people.

----------

@John, Mark

Hear! Hear!

Too many people think that higher precision solves the problem of ill-conditioning. This is a typical example:

Yun He and Chris H.Q. Ding, "Using Accurate Arithmetics to Improve Numerical
Reproducibility and Stability in Parallel Applications", The Journal of Supercomputing, Volume 18, Number 3, 259-277, DOI: 10.1023/A:1008153532043

He and Ding were summing 7680 double-precision floating-point numbers and got different results when they changed the order of summation. I showed in

http://www.scribd.com/doc/26135665/Two-Simple-Statistical-Calculations-and-ClimateGate

(page 10) that their problem had a 1-norm condition number of 10^21, yet He and Ding did not mention ill-conditioning and gave the impression that an accurate (or exact) calculation of the sum solved their problem. It does not, as this
`maxim' of Nick Trefethen implies:

"If the answer is highly sensitive to perturbations, you
have probably asked the wrong question."

See: "Maxims about Numerical Mathematics, Computers, Science and Life",
 L. N. Trefethen, SIAM News, Vol. 31, No. 1 (1998), p. 4.

Download here:
http://www.comlab.ox.ac.uk/people/nick.trefethen/publication/PDF/1998_76.pdf
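
As a hedged illustration of the kind of conditioning being described here (made-up numbers, not He and Ding's data): when the terms are huge and of mixed sign but the true sum is of order 1, the condition number of the sum, sum(abs(x))/abs(sum(x)), is enormous, and merely reordering the terms changes the double-precision answer.

% Sketch: an ill-conditioned sum (huge mixed-sign terms, small true sum).
x = [1e15*randn(1000,1); 555.55];
x = [x; -x(1:1000)];                  % in exact arithmetic the sum is exactly 555.55
condsum = sum(abs(x))/abs(sum(x))     % enormous, roughly 1e15
s1 = sum(x)                           % one ordering
s2 = sum(x(randperm(numel(x))))       % another ordering gives a visibly different answer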

A further maxim from Trefethen is worth remembering:

"No physical constants are known to more than around eleven digits,
and no truly scientific problem requires computation with much more
precision than this."

I think we should let Newton have the last word [on floating point arithmetic?]:

"Yet the errors do not come from the art
but from those who practice the art."

Or, in the original,

"Attamen errores non sunt Artis sed Artificum",

from the `Author's Preface to the Reader',
Philosophiae Naturalis Principia Mathematica, First Edition, July 5, 1686.
 --- Isaac Newton (1642 -- 1727)



Derek O'Connor

Subject: Does change in precision help circumvent ill-conditioned matrices?

From: Mark Shore

Date: 3 Jan, 2011 18:06:20

Message: 6 of 8

"Derek O'Connor" wrote in message <ift0js$ct0$1@fred.mathworks.com>...

>
> Too many people think that higher precision solves the problem of ill-conditioning. This is a typical example:
>
> Yun He and Chris H.Q. Ding, "Using Accurate Arithmetics to Improve Numerical
> Reproducibility and Stability in Parallel Applications", The Journal of Supercomputing, Volume 18, Number 3, 259-277, DOI: 10.1023/A:1008153532043
>
> He and Ding were summing 7680 double-precision floating-point numbers and got different results when they changed the order of summation....
>

Aggh. Extract from that paper:

"The SSH variable is a two-dimensional sea surface volume (integrated sea surface
area times sea surface height) distributed among multiple processors. At each time
step, the global summation of the sea surface volume of each model grid is needed
in order to calculate the average sea surface height. The absolute value of the data
itself is very large (in the order of 10^10 to 10^15), with different signs, while the result of the global summation is only of order of 1."

All I can figure is that the authors were calculating the area of a given degree-sized ocean cell in square meters (or something similar) and then computing a running sum. Their considered 'fixes' were fixed-point summation via intermediate conversion to integers, extended-precision floating point, and an error-compensation method.

Instead of something as trivial as using an appropriately scaled measurement such as square degrees.

Subject: Does change in precision help circumvent ill-conditioned matrices?

From: dpb

Date: 3 Jan, 2011 18:34:45

Message: 7 of 8

Mark Shore wrote:
...

> Aggh. Extract from that paper:
>
...
> ... The absolute value of the data itself is very large
> (in the order of 10^10 to 10^15), with different signs, while the result
> of the global summation is only of order of 1."
>
> All I can figure is that the authors were calculating the area of a
> given degree-sized ocean cell in square meters (or something) and then
> calculating a running sum. Their considered 'fixes' were fixed point
> summation via intermediate conversion to integers, extended precision
> floating point, and an error compensation method.
>
> Instead of something as trivial as using an appropriately scaled
> measurement such as square degrees.

I don't see how that would change the range of scales between the result and
the accumulators, though. Wouldn't they still have the classic
problem of small differences between large values, just over a different
range?

--

Subject: Does change in precision help circumvent ill-conditioned matrices?

From: Mark Shore

Date: 3 Jan, 2011 19:10:22

Message: 8 of 8

dpb <none@non.net> wrote in message <ift4s6$6rp$1@news.eternal-september.org>...
> Mark Shore wrote:
> ...
>
> > Aggh. Extract from that paper:
> >
> ...
> > ... The absolute value of the data itself is very large
> > (in the order of 10^10 to 10^15), with different signs, while the result
> > of the global summation is only of order of 1."
> >
> > All I can figure is that the authors were calculating the area of a
> > given degree-sized ocean cell in square meters (or something) and then
> > calculating a running sum. Their considered 'fixes' were fixed point
> > summation via intermediate conversion to integers, extended precision
> > floating point, and an error compensation method.
> >
> > Instead of something as trivial as using an appropriately scaled
> > measurement such as square degrees.
>
> I don't see how that would change the range of scales between the result and
> the accumulators, though. Wouldn't they still have the classic
> problem of small differences between large values, just over a different
> range?
>
> --

Possibly, but I don't have access to the original modelling paper the authors discuss, so I haven't a clue how they wound up in this badly formulated situation:

"The SSH variable is a two-dimensional sea surface volume (integrated sea surface
area times sea surface height) distributed among multiple processors. At each time
step, the global summation of the sea surface volume of each model grid is needed
in order to calculate the average sea surface height. The absolute value of the data
itself is very large (in the order of 10^10 to 10^15), with different signs, while the result of the global summation is only of order of 1."

They are really only calculating a weighted average of 7680 values (both positive and negative) and in practice would only need a result accurate to two or three significant figures.

An example the authors gave (in the Journal of Supercomputing, of all things) was the following:

"A simple example explains the idea best. In double precision, the following Fortran statement S = 1.25 x 10^20 + 555.55 - 1.25 x 10^20 will get S = 0.0, instead of S = 555.55. The reason is that when compared to 1.25 x 10^20, 555.55 is negligibly small, or non-representable. In hardware, 555.55 is simply right-shifted out of CPU registers when the first addition is performed."

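The same absorption is a one-liner to reproduce in MATLAB (a sketch of the authors' Fortran example, not their code):

% 555.55 is below half the spacing of doubles near 1.25e20, so it is absorbed.
S = 1.25e20 + 555.55 - 1.25e20   % returns 0, not 555.55
eps(1.25e20)                     % spacing of doubles near 1.25e20: about 1.6e4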