Thread Subject:
Reducing fill-ins when factoring sparse symmetric positive definite matrix

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: evan um

Date: 19 Sep, 2008 07:26:02

Message: 1 of 15

Hi,

I am trying to solve Ax=b using a direct method.
A is a sparse symmetric positive definite matrix.
The problem is an 'out-of-memory' error when I try to factor matrix A with a Cholesky or LU decomposition.

I believe that MATLAB internally uses its own fill-reducing algorithm. However, the resulting lower/upper triangular matrix had 10 times more non-zero elements than matrix A. Consequently, I end up with an 'out-of-memory' error due to these fill-ins.

I am just wondering if there is any better way to reduce fill-ins in the lower and upper triangular matrices.

What would be an acceptable or anticipated increase in the number of non-zero elements when we use MATLAB?

Could you suggest how to minimize fill-ins when we factor an SPD matrix in MATLAB?

In advance, I appreciate all your help and suggestions.

Best,
Evan

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Tim Davis

Date: 19 Sep, 2008 09:50:04

Message: 2 of 15

"evan um" <evanum@gmail.com> wrote in message <gavk6a$h95$1@fred.mathworks.com>...
> Hi,
>
> I am trying to solve Ax=b using a direct method.
> A is a sparse symmetric positive definite matrix.
> The problem is an 'out-of-memory' error when I try to factor matrix A with a Cholesky or LU decomposition.
>
> I believe that MATLAB internally uses its own fill-reducing algorithm. However, the resulting lower/upper triangular matrix had 10 times more non-zero elements than matrix A. Consequently, I end up with an 'out-of-memory' error due to these fill-ins.
>
> I am just wondering if there is any better way to reduce fill-ins in the lower and upper triangular matrices.
>
> What would be an acceptable or anticipated increase in the number of non-zero elements when we use MATLAB?
>
> Could you suggest how to minimize fill-ins when we factor an SPD matrix in MATLAB?
>
> In advance, I appreciate all your help and suggestions.
>
> Best,
> Evan

Send me your matrix and I'll take a look (and add it to my collection). You can upload it to http://www.cise.ufl.edu/~web-gfs for user "davis". Please include the right-hand-side, b. Also include a description of the problem (just a few sentences would be fine).

A factor of 10 in nnz(L)/nnz(A) is not too bad, actually.

I'm assuming you're using the latest MATLAB?

You can see what the fill-in is with:

p=amd(A);
c=symbfact(A(p,p));
nz_in_L = sum(c)
flops = sum(c.^2)

You could also try the METIS ordering ... when it doesn't segfault (segfaults are rare), it can give a very good ordering.

p=metis(A)

or x=cholmod2(A,b)

after you install SuiteSparse on the File Exchange.
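
To make the comparison concrete, here is a sketch of checking both orderings before committing to a full numeric factorization; it assumes the SuiteSparse package (which provides metis and cholmod2) is installed and on the MATLAB path:

% Predicted fill-in for the AMD and METIS orderings, then a CHOLMOD solve.
p_amd = amd(A);                      % approximate minimum degree ordering
p_met = metis(A);                    % METIS nested-dissection ordering
c_amd = symbfact(A(p_amd, p_amd));   % column counts of the Cholesky factor
c_met = symbfact(A(p_met, p_met));
fprintf('AMD:   nnz(L) = %d, flops = %g\n', sum(c_amd), sum(c_amd.^2));
fprintf('METIS: nnz(L) = %d, flops = %g\n', sum(c_met), sum(c_met.^2));
x = cholmod2(A, b);                  % supernodal Cholesky solve via CHOLMOD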

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Tim Davis

Date: 19 Sep, 2008 10:29:03

Message: 3 of 15


> What would be acceptible or anticipated increase in the number of non-zero elements when we use MATLAB?

That's a theoretically impossible question to answer, whether or not MATLAB is used.

> Could you suggest how to minimize fill-ins when we factor a spd matrix in MATLAB?

Reducing fill-in is an NP-hard problem, so heuristics are used. A non-heuristic method that would always find the true minimum fill-in on a large matrix would take an amount of time that would exceed the lifetime of the known universe ... :-)
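
As a small illustration of how much a heuristic ordering helps in practice (this example uses a built-in MATLAB test matrix, not the matrix from this thread):

% Fill-in of a 2-D Laplacian: natural ordering versus the AMD heuristic.
Atest = delsq(numgrid('S', 100));   % sparse SPD 2-D Laplacian test matrix
nnz(chol(Atest))                    % nonzeros in the factor, natural ordering
p = amd(Atest);
nnz(chol(Atest(p, p)))              % AMD ordering: substantially fewer nonzeros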

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Matt

Date: 19 Sep, 2008 16:17:02

Message: 4 of 15

"evan um" <evanum@gmail.com> wrote in message <gavk6a$h95$1@fred.mathworks.com>...
> Hi,
>
> I am trying to solve Ax=b using a direct method.
> A is a sparse symmetric positive definite matrix.
> The problem is an 'out-of-memory' error when I try to factor matrix A with a Cholesky or LU decomposition.


Any reason you wouldn't use Gaussian elimination, A\b?

If A is banded, or block-banded, MATLAB's sparse Gaussian elimination routine will solve it pretty fast, or so I've observed.
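
For instance, a purely banded SPD system (a made-up tridiagonal example, not the matrix from this thread) stays cheap because the factor cannot fill in outside the band:

% A tridiagonal SPD system: backslash factors it with essentially no fill.
n = 1e5;
e = ones(n, 1);
Aband = spdiags([-e 2*e -e], -1:1, n, n);   % 1-D Laplacian, bandwidth 1
bband = rand(n, 1);
xband = Aband \ bband;                      % fast sparse solve; the factor stays banded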

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Tim Davis

Date: 19 Sep, 2008 23:43:01

Message: 5 of 15

"Matt " <mjacobson.removethis@xorantech.com> wrote in message <gb0j9u$daj$1@fred.mathworks.com>...
> "evan um" <evanum@gmail.com> wrote in message <gavk6a$h95$1@fred.mathworks.com>...
> > Hi,
> >
> > I am trying to solve Ax=b using a direct method.
> > A is a sparse symmetric positive definite matrix.
> > The problem is an 'out-of-memory' error when I try to factor matrix A with a Cholesky or LU decomposition.
>
>
> Any reason you wouldn't use Gaussian elimination, A\b?
>
> If A is banded, or block-banded, MATLAB's sparse Gaussian elimination routine will solve it pretty fast, or so I've observed.

When A is sparse and symmetric positive definite, x=A\b doesn't use Gaussian elimination. It uses a sparse supernodal Cholesky factorization, along with the AMD preordering. However, for some problems, METIS can give a better ordering than AMD, particularly large matrices arising from 3D discretizations.
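
As a side note (not part of the original post), MATLAB's sparse-monitor flag can confirm which path backslash actually takes for a given matrix:

% Turn on verbose reporting from the sparse solvers, then solve.
spparms('spumoni', 2);   % level-2 diagnostics on ordering and factorization choices
x = A \ b;               % for a sparse SPD matrix this reports a Cholesky-based solve
spparms('spumoni', 0);   % restore quiet operation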

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: evan um

Date: 20 Sep, 2008 00:58:32

Message: 6 of 15

Thank you for your suggestions.
I will upload my matrix, vector and their descriptions.
We use MATLAB 7.5 along with your SuiteSparse and a few others. Thanks for the great library.

Evan

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: evan um

Date: 20 Sep, 2008 00:58:33

Message: 7 of 15

Hi Matt,

My primary reason for not using \ is that the operator does not let me store by-products of the operation. For example, my matrix A does not change unless I change my step size. If I use \, I have to factorize the same unchanged matrix at every time step, which is inefficient. Sometimes MATLAB hides too much :)

Evan
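
A minimal sketch of the factor-once pattern Evan describes (assuming A is fixed across time steps; nsteps and rhs_at_step are placeholder names, not from the thread):

p = amd(A);                    % fill-reducing ordering, computed once
R = chol(A(p, p));             % upper-triangular factor: R'*R = A(p,p)
x = zeros(size(A, 1), 1);
for k = 1:nsteps               % nsteps, rhs_at_step: hypothetical placeholders
    b = rhs_at_step(k);        % only the right-hand side changes each step
    x(p) = R \ (R' \ b(p));    % two triangular solves, then undo the permutation
end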

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: evan um

Date: 21 Sep, 2008 03:13:04

Message: 8 of 15

Hi Tim,
Thank you for useful comments and suggestions.
I will upload my matrix and vector soon.
We use MATLAB 7.5 along with your SuiteSparse library and others.

Evan

"Tim Davis" <davis@cise.ufl.edu> wrote in message <gavskc$23g$1@fred.mathworks.com>...
> "evan um" <evanum@gmail.com> wrote in message <gavk6a$h95$1@fred.mathworks.com>...
> > Hi,
> >
> > I am trying to solve Ax=b using a direct method.
> > A is a sparse symmetric positive definite matrix.
> > The problem is an 'out-of-memory' error when I try to factor matrix A with a Cholesky or LU decomposition.
> >
> > I believe that MATLAB internally uses its own fill-reducing algorithm. However, the resulting lower/upper triangular matrix had 10 times more non-zero elements than matrix A. Consequently, I end up with an 'out-of-memory' error due to these fill-ins.
> >
> > I am just wondering if there is any better way to reduce fill-ins in the lower and upper triangular matrices.
> >
> > What would be an acceptable or anticipated increase in the number of non-zero elements when we use MATLAB?
> >
> > Could you suggest how to minimize fill-ins when we factor an SPD matrix in MATLAB?
> >
> > In advance, I appreciate all your help and suggestions.
> >
> > Best,
> > Evan
>
> Send me your matrix and I'll take a look (and add it to my collection). You can upload it to http://www.cise.ufl.edu/~web-gfs for user "davis". Please include the right-hand-side, b. Also include a description of the problem (just a few sentences would be fine).
>
> A factor of 10 in nnz(L)/nnz(A) is not too bad, actually.
>
> I'm assuming you're using the latest MATLAB?
>
> You can see what the fill-in is with:
>
> p=amd(A);
> c=symbfact(A(p,p));
> nz_in_L = sum(c)
> flops = sum(c.^2)
>
> You could also try the METIS ordering ... when it doesn't segfault (segfaults are rare), it can give a very good ordering.
>
> p=metis(A)
>
> or x=cholmod2(A,b)
>
> after you install SuiteSparse on the File Exchange.

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Tim Davis

Date: 21 Sep, 2008 03:13:06

Message: 9 of 15


METIS is much better, as I expected.

I'm not sure if I have enough memory on this
machine to handle the matrix, so I haven't tried x=A\b or
the same with METIS instead of AMD; I'll do that later.

>> load twocubes_sphere_matrices
>> whos
 Name           Size                  Bytes  Class     Attributes

 A         101492x101492          39940044  double    sparse
 b         101492x1                 811936  double
>> cspy(A)
>>
>>
>> p=amd(A);
>> c=symbfact(A(p,p));
>> sum(c),sum(c.^2)

ans =

   88679332


ans =

 2.9843e+011

>>
>> sum(c)/nnz(A)

ans =

  53.8343

>>
>> p=metis(A);
>> c=symbfact(A(p,p));
>> sum(c),sum(c.^2), sum(c)/nnz(A)

ans =

   47906996


ans =

 8.6772e+010


ans =

  29.0828

So with AMD, nnz(L) is about 89 million, and x=A\b requires 300 billion flops. With METIS, nnz(L) is about 48 million, and requires 87 billion flops.

Oh, and the cspy(A) looks really cool for this matrix.

Tim Davis wrote:
> Sure thing. I'll take a look, either today or Monday.
>
> Electromagnetic problems tend to be difficult,
> especially 3D ones. With large 3D problems, METIS will
> likely give a much better ordering than AMD.
>
> What is the shape of the domain? There wouldn't be any
> chance you could specify the xyz coordinates of each unknown
> in the matrix, would there? (Some researchers, not I, use that in their
> ordering algorithms. Also, it makes for an interesting comparison
> with codes that compute the coordinates automatically.) For example:
>
> http://www.cise.ufl.edu/research/sparse/matrices/Pothen/commanche_dual.html
>
> Thanks,
> Tim Davis
>
> webmaster@cise.ufl.edu wrote:
>> Someone has uploaded you a file. You can view it at this URL:...
>>
>> Sender: evanum@gmail.com
>>
>> Dear Prof. Davis,
>> I am studying finite-element time domain solvers for electromagnetic diffusion equations. My 3-D computational domain consists of 88,213 tetrahedral elements. The mesh info is used to assemble a global finite element matrix as shown here. Thank you very much for your comments and suggestions through Newsgroup.
>> Best,
>> Evan
>> Geophysics, Stanford
>>
>>
>

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Matt

Date: 21 Sep, 2008 03:13:08

Message: 10 of 15


> > If A is banded, or block-banded, MATLAB's sparse Gaussian elimination routine will solve it pretty fast, or so I've observed.
>
> When A is sparse and symmetric positive definite, x=A\b doesn't use Gaussian elimination. It uses a sparse supernodal Cholesky factorization, along with the AMD preordering. However, for some problems, METIS can give a better ordering than AMD, particularly large matrices arising from 3D discretizations.

Ok. Well, they really should update the text in "help slash" and "help mldivide". It still only talks about Gaussian elimination.

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Tim Davis

Date: 21 Sep, 2008 17:04:02

Message: 11 of 15

"Matt " <mjacobson.removethis@xorantech.com> wrote in message <gb4e43$dc4$1@fred.mathworks.com>...
>
> > > If A is banded, or block-banded, MATLAB's sparse Gaussian elimination routine will solve it pretty fast, or so I've observed.
> >
> > When A is sparse and symmetric positive definite, x=A\b doesn't use Gaussian elimination. It uses a sparse supernodal Cholesky factorization, along with the AMD preordering. However, for some problems, METIS can give a better ordering than AMD, particularly large matrices arising from 3D discretizations.
>
> Ok. Well, they really should update the text in "help slash" and "help mldivide". It still only talks about Gaussian elimination.
>

Wow, you're right. I didn't realize it said that. It's an over-simplification. It uses Gaussian elimination for some systems (LU factorization), but Cholesky for others, and an LDL' factorization for still others, and ... on and on.

The full-blown algorithm used in mldivide wouldn't fit in a "help mldivide", though. See "doc mldivide".

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Steven Lord

Date: 22 Sep, 2008 14:50:45

Message: 12 of 15


"Tim Davis" <davis@cise.ufl.edu> wrote in message
news:gb5uq2$67d$1@fred.mathworks.com...
> "Matt " <mjacobson.removethis@xorantech.com> wrote in message
> <gb4e43$dc4$1@fred.mathworks.com>...
>>
>> > > If A is banded, or block-banded, MATLAB's sparse Gaussian elimination
>> > > routine will solve it pretty fast, or so I've observed.
>> >
>> > When A is sparse and symmetric positive definite, x=A\b doesn't use
>> > Gaussian elimination. It uses a sparse supernodal Cholesky
>> > factorization, along with the AMD preordering. However, for some
>> > problems, METIS can give a better ordering than AMD, particularly large
>> > matrices arising from 3D discretizations.
>>
>> Ok. Well, they really should update the text in "help slash" and "help
>> mldivide". It still only talks about Gaussian elimination.
>>
>
> Wow, you're right. I didn't realize it said that. It's an
> over-simplification. It uses Gaussian elimination for some systems (LU
> factorization), but Cholesky for others, and an LDL' factorization for
> still others, and ... on and on.
>
> The full-blown algorithm used in mldivide wouldn't fit in a "help
> mldivide", though. See "doc mldivide".

Thanks Matt and Tim. I've reported that to our documentation staff. As Tim
mentioned, the reference page in the documentation (DOC MLDIVIDE) includes a
section describing the algorithm behind MLDIVIDE, and it is indeed a little
longer than would fit in the M-file help (at least without making the M-file
help several pages long, which we try not to do.)

--
Steve Lord
slord@mathworks.com

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Matt

Date: 22 Sep, 2008 15:29:02

Message: 13 of 15


>
> Thanks Matt and Tim. I've reported that to our documentation staff. As Tim
> mentioned, the reference page in the documentation (DOC MLDIVIDE) includes a
> section describing the algorithm behind MLDIVIDE, and it is indeed a little
> longer than would fit in the M-file help (at least without making the M-file
> help several pages long, which we try not to do.)


Glad to know us ordinary citizens can make a difference...:)

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Tim Davis

Date: 26 Sep, 2008 10:45:04

Message: 14 of 15

"Matt " <mjacobson.removethis@xorantech.com> wrote in message <gb8dju$e5u$1@fred.mathworks.com>...
>
> >
> > Thanks Matt and Tim. I've reported that to our documentation staff. As Tim
> > mentioned, the reference page in the documentation (DOC MLDIVIDE) includes a
> > section describing the algorithm behind MLDIVIDE, and it is indeed a little
> > longer than would fit in the M-file help (at least without making the M-file
> > help several pages long, which we try not to do.)
>
>
> Glad to know us ordinary citizens can make a difference...:)

Absolutely! The more eyeballs & brains the better.

The doc should probably read something like this:

 \ Backslash or left matrix divide.
    ...
    If A is an N-by-N matrix and B is a column vector with N
    components, or a matrix with several such columns, then
    X = A\B is the solution to the equation A*X = B computed by
    a matrix decomposition (LU, LDL', or Cholesky). ...

And this is also wrong; QR does not do 2-norm pivoting for the sparse case. It does column pivoting to reduce fill-in:
 
    If A is an M-by-N matrix with M < or > N and B is a column
    vector with M components, or a matrix with several such columns,
    then X = A\B is the solution in the least squares sense to the
    under- or overdetermined system of equations A*X = B. The
    effective rank, K, of A is determined from the QR decomposition
    with pivoting. A solution X is computed which has at most K
    nonzero components per column. If K < N this will usually not
    be the same solution as PINV(A)*B. A\EYE(SIZE(A)) produces a
    generalized inverse of A.
 
    C = MLDIVIDE(A,B) is called for the syntax 'A \ B' when A or B is an
    object.

Subject: Reducing fill-ins when factoring sparse symmetric positive definite matrix

From: Lakshmi Sadasiv

Date: 26 Sep, 2008 14:36:05

Message: 15 of 15

Thanks Tim. This documentation will be updated in the next release.

Lakshmi Sadasiv
MATLAB Mathematics Documentation


"Tim Davis" <davis@cise.ufl.edu> wrote in message
news:gbiefg$egl$1@fred.mathworks.com...
> "Matt " <mjacobson.removethis@xorantech.com> wrote in message
> <gb8dju$e5u$1@fred.mathworks.com>...
>>
>> >
>> > Thanks Matt and Tim. I've reported that to our documentation staff.
>> > As Tim
>> > mentioned, the reference page in the documentation (DOC MLDIVIDE)
>> > includes a
>> > section describing the algorithm behind MLDIVIDE, and it is indeed a
>> > little
>> > longer than would fit in the M-file help (at least without making the
>> > M-file
>> > help several pages long, which we try not to do.)
>>
>>
>> Glad to know us ordinary citizens can make a difference...:)
>
> Absolutely! The more eyeballs & brains the better.
>
> The doc should probably read something like this:
>
> \ Backslash or left matrix divide.
> ...
> If A is an N-by-N matrix and B is a column vector with N
> components, or a matrix with several such columns, then
> X = A\B is the solution to the equation A*X = B computed by
> a matrix decomposition (LU, LDL', or Cholesky). ...
>
> And this is also wrong; QR does not do 2-norm pivoting for the sparse
> case. It does column pivoting to reduce fill-in:
>
> If A is an M-by-N matrix with M < or > N and B is a column
> vector with M components, or a matrix with several such columns,
> then X = A\B is the solution in the least squares sense to the
> under- or overdetermined system of equations A*X = B. The
> effective rank, K, of A is determined from the QR decomposition
> with pivoting. A solution X is computed which has at most K
> nonzero components per column. If K < N this will usually not
> be the same solution as PINV(A)*B. A\EYE(SIZE(A)) produces a
> generalized inverse of A.
>
> C = MLDIVIDE(A,B) is called for the syntax 'A \ B' when A or B is an
> object.
