"Tim Davis" <davis@cise.ufl.edu> wrote in message <fdup2q$j8o$1@fred.mathworks.com>...
> roberson@ibd.nrccnrc.gc.ca (Walter Roberson) wrote in
> ...
> > I am working with a function
> >
> > Pij = g( Dij * inv(f1(m1,m2)) * Dij' + log(det(f1(m1,m2))),
> > Dij * inv(f2(m1,m2)) * Dij' + log(det(f2(m1,m2))) )
> >
> > Where
> > Dij is ([m1;m2](i,:) - [m1;m2](j,:))  i.e., a
> > difference vector
> > f1(m1,m2) is a matrix function in the covariances of m1
> > and m2
> > f2(m1,m2) is a matrix function closely related to f1
>
> What you want to do is factorize the matrix A = f1(m1,m2)
> once and then reuse the factorization lots of times. This
> isn't the case of "one of the few reasons to use the
> inverse". If you're multiplying the inverse times a vector,
> then that's a clear signal that you should do something
> else. Never ever multiply by the inverse.
>
> > Isn't it the case that if I use sum(Dij / f1(m1,m2) .*
> > Dij, 2) for each column that I would see a substantial
> > slowdown in the calculation?
>
> Yes, that would be costly. But it's not the right thing to do.
>
> > Is it worth it? Or is "substantial reuse of the result of
> > the logical division" one of the "few reasons" that is
> > considered to justify taking the inverse?
>
> No, this is a good reason to factor once and solve 1545
> times. I'm going to rewrite your problem to shorten the
> expressions just a bit. You want to compute s = sum ((b /
> A) .* b, 2) where A is used many times with different
> vectors b, right?
>
> For simplicity, and so I don't get it wrong, let me write
> this as backslash, and assume b is a column vector instead:
>
> s = sum (b'*(A\b)) ;
>
> Let's suppose A is unsymmetric and dense. Then do this once:
>
> [L,U,p] = lu (A,'vector') ;
> lower.LT = true ;
> upper.UT = true ;
>
> and then do this many times with different b:
>
> x = linsolve (U, linsolve (L, b (p), lower), upper) ;
> s = sum (b'*x) ;
>
> or as a one-liner:
>
> s = sum (b' * linsolve (U, linsolve (L, b (p), lower), upper)) ;
>
> If that syntax looks ugly, try this (a little slower):
>
> [L,U,p]=lu(A,'vector') ;
>
> s = sum (b' * (U \ (L \ b (p)))) ;
>
> If that's still ugly, get LINFACTOR on the File Exchange and
> do this once:
>
> F = linfactor (A) ;
>
> and then do
>
> s = sum (b' * linfactor (F,b)) ;
>
> where linfactor.m will do the lu and linsolve's for you. It
> will also do the more general thing (sparse case, and both
> LU and Chol).
>
> I wrote linfactor precisely to answer this very question.
>
> This will be more accurate and just as fast as:
>
> F = inv(A) ;
>
> s = sum (b'*(F*b)) ;
>
> which is what you're doing now.
>
> It's important to use the "vector" option for LU; otherwise,
> P*b takes a lot of time.
>
> Here are some timings, in MATLAB 7.4:
>
> >> A=rand(1545);
> >> b=rand(1545,1);
> >> tic; s = sum (b'*(A\b)); toc
> Elapsed time is 1.460618 seconds.
>
> >> tic; F=inv(A);toc
> Elapsed time is 2.420236 seconds.
> >> tic; s2 = sum (b'*(F*b)); toc
> Elapsed time is 0.006123 seconds.
>
> >> clear F
>
> >> tic ; F = linfactor(A) ; toc
> Elapsed time is 0.905961 seconds.
> >> F
>
> F =
>
> L: [1545x1545 double]
> U: [1545x1545 double]
> p: [1x1545 double]
> kind: 'dense LU: L*U = A(p,:)'
> code: 3
>
>
> >> tic; s = sum (b' * linfactor(F,b)) ; toc
> Elapsed time is 0.006120 seconds.
>
> So, since linfactor is the right thing to do in terms of
> accuracy, and since it's just as fast as F=inv(A) followed
> by F*b, why use inv?
>
> Note that linfactor doesn't emulate the b/A syntax (b/A is a
> tiny bit slower than backslash anyway; the slash operator in
> MATLAB is extremely simple; it just does (A'\b')' and uses
> backslash).
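
To tie this back to the original expression: with linfactor, the whole computation reduces to one factorization per f-matrix plus a loop of solves. A rough sketch (here D stands for the stacked difference vectors Dij as rows; it is an illustrative name, not something defined in the posts above):

```matlab
A = f1 (m1, m2) ;                     % the matrix that gets reused
F = linfactor (A) ;                   % factor once (LU or Chol inside)
s = zeros (size (D,1), 1) ;
for k = 1:size (D,1)
    b = D(k,:)' ;                     % one difference vector, as a column
    s(k) = b' * linfactor (F, b) ;    % solve; reuses the factorization
end
```

Each s(k) then equals Dij * inv(A) * Dij' without inv(A) ever being formed explicitly.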
Hi,
I guess I have a similar problem... I'm trying to find the positive part of a matrix, let's call it A+ (I don't know if that is the correct name), which is simply the diagonal matrix containing ONLY the positive eigenvalues of A, multiplied on the left by the right-eigenvector matrix and on the right by the inverse of the right-eigenvector matrix.
I don't know if my approach is correct, but what I tried was to simply multiply the positive-eigenvalue matrix by the right-eigenvector matrix and then apply linsolve. I get an upper triangular matrix, but the lower matrix is simply a diagonal matrix of ones, and so the system has no solution. (I get many NaN's, and the result really does not make much sense to me.)
I applied linfactor and got exactly the same result.
Does someone have any idea how to get A+?
I'd appreciate your help very much.
Thank you.
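
(A follow-up sketch, in case it helps: one way to form A+ with neither inv nor linsolve, assuming A is diagonalizable with real eigenvalues, e.g. symmetric. V, D, Dplus, and Aplus are illustrative names, not anything defined above.)

```matlab
[V, D] = eig (A) ;            % A = V*D/V when A is diagonalizable
Dplus = max (D, 0) ;          % zero out the negative eigenvalues
Aplus = (V * Dplus) / V ;     % V*Dplus*inv(V), but via mrdivide
```

Note that linsolve's LT/UT options assert that the matrix is already triangular; the eigenvector matrix V generally is not, which would explain the NaN's when forcing those options on it. The `/` above lets MATLAB pick an appropriate factorization instead.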
