
Thread Subject:
Alternative to matrix inverse?

Subject: Alternative to matrix inverse?

From: Jean-Philippe

Date: 21 Feb, 2012 06:18:09

Message: 1 of 12

Hi,

I am currently using the inv function in my program, but it seems to be strongly discouraged by everyone I have found on the Internet, so I'm looking for an alternative. I do not think my case can be solved using \ or /, but if it can, please let me know.

I am working with images of sizes around I=256x256 and I=512x512. In my algorithm, I use very large sparse matrices P of size NxN, where N = numel(I) (e.g. N = 65536 for I=256x256). Here is the equation I have to code:

B = (1-alpha) * inv(speye(N)-(alpha*P)); where P is very sparse (it has close to 9 diagonals grouped in 3 separate groups)

The computation of this equation takes forever (30+ minutes) or gives out-of-memory errors even though I have 8 GB of memory available.

Also, I have tried to use an iterative approach like this :

function C = inverse(M, A, limit)
    % truncated series: C = M + M*A + M*A^2 + ... + M*A^limit
    C = M;
    for i = 1:limit
        M = M*A;
        C = C + M;
    end
end

B = (1-alpha) * inverse(speye(N), (alpha*P), 50);

This requires many iterations to obtain a result similar to inv. I set my limit to 50, but the result is still far from good. The problem is that the matrix becomes less sparse as the iterations proceed, so each iteration takes progressively longer.

Would anyone have a better way to do this work?

Thank you very much!

J-P - excuse my english.

Subject: Alternative to matrix inverse?

From: Roger Stafford

Date: 21 Feb, 2012 08:05:10

Message: 2 of 12

"Jean-Philippe" wrote in message <jhvcv1$j70$1@newscl01ah.mathworks.com>...
> I am currently using the inv function in my program and it seems to be mostly prohibited by everyone I found on Internet, so i'm looking for an alternative. I do not think my case can be solve using \ or /, and if it is, please let me know.
> ........
- - - - - - - - - -
  Here is my understanding of the question about the use of 'inv' in matlab as against '/' or '\'. In the vast majority of cases, the need for finding the inverse of an n by n matrix stems from trying to solve a set of n linear equations in n unknowns, such as finding an unknown column vector x where A*x = b, where A is a known n by n matrix and b is a known column vector. Or possibly the problem is to solve m such sets of equations where A is the same for each set but b varies and where m is less than n. In such problems the solutions can be obtained by matlab's '\' function as x = A\b or X = A\B where B is the n by m collection of b columns. As long as m remains substantially less than n, the algorithms which solve for x or X here are much more efficient than having to find the full inverse of A and then multiplying it by b or B. However when m becomes as large as n, so that B is a (non-singular) n by n matrix, there is (or at least shouldn't be) any substantial advantage in using '\' over the 'inv' function.

  For that reason the people in Mathworks and others in this group will ask you why you want the full inverse of speye(N)-alpha*P, as opposed to, in some sense, solving fewer than N sets of linear equations using this matrix. As you have seen, obtaining the full inverse of a very large matrix is a formidably large computational task (with the result very likely no longer sparse) and the 'inv' function probably has as efficient an algorithm as you can find. However, quite possibly it is a task that you might not really have to perform if there are other methods available of accomplishing whatever final objective you have in mind.
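In code, the point about '\' might look like the following sketch (the sizes, alpha value, and the nine diagonal offsets are illustrative stand-ins, not taken from the thread):

```matlab
% Illustrative sketch: solve for a few columns of B instead of forming inv.
N     = 4096;                                % e.g. a 64x64 image
s     = 64;                                  % assumed image width, N = s^2
offs  = [-s-1 -s -s+1 -1 0 1 s-1 s s+1];     % three groups of three diagonals
P     = spdiags(rand(N,9), offs, N, N);      % stand-in 9-diagonal sparse matrix
alpha = 0.5;

A = speye(N) - alpha*P;
m = 16;                                      % number of columns actually needed
E = sparse(1:m, 1:m, 1, N, m);               % first m columns of the identity
Bcols = (1-alpha) * (A \ E);                 % those m columns of (1-alpha)*inv(A)
```

Backslash factors A once and back-substitutes for the m right-hand sides, which stays far cheaper than a full inverse as long as m remains well below N.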

Roger Stafford

Subject: Alternative to matrix inverse?

From: Roger Stafford

Date: 21 Feb, 2012 08:23:10

Message: 3 of 12

"Roger Stafford" wrote in message <jhvj7m$79q$1@newscl01ah.mathworks.com>...
> However when m becomes as large as n, so that B is a (non-singular) n by n matrix, there is (or at least shouldn't be) any substantial advantage in using '\' over the 'inv' function.
- - - - - - - -
  That last sentence in the first paragraph should read: "However when m becomes as large as n, so that B is a (non-singular) n by n matrix, there is not (or at least shouldn't be) any substantial advantage in using '\' over the 'inv' function."

Roger Stafford

Subject: Alternative to matrix inverse?

From: Steven_Lord

Date: 21 Feb, 2012 15:04:10

Message: 4 of 12



"Jean-Philippe " <jpmorin196@gmail.com> wrote in message
news:jhvcv1$j70$1@newscl01ah.mathworks.com...
> Hi,
> I am currently using the inv function in my program and it seems to be
> mostly prohibited by everyone I found on Internet, so i'm looking for an
> alternative. I do not think my case can be solve using \ or /, and if it
> is, please let me know.
>
> I am working with images of sizes around I=256x256 and I=512x512. In my
> algorithm, I use very large sparse matrices P=NxN where N = numel(ex:
> I=256x256). Here is the equation I have to code:
>
> B = (1-alpha) * inv(speye(N)-(alpha*P)); where P is very sparse (it has
> close to 9 diagonals grouped in 3 separated groups)

Why? What are you planning to DO with B?

> The computation of this equation takes forever (++30min) or give out of
> memory errors even if I have 8gigs of memory available.

The inverse of a sparse matrix is likely no longer sparsely populated, as
Roger stated.

*snip*

> Would anyone have a better way to do this work?

That depends on what the definition of "this work" is.

--
Steve Lord
slord@mathworks.com
To contact Technical Support use the Contact Us link on
http://www.mathworks.com

Subject: Alternative to matrix inverse?

From: Jean-Philippe

Date: 21 Feb, 2012 15:51:11

Message: 5 of 12

Thank you very much for your replies.

In fact, P is a graph representing the weights of transitions between the neighbouring pixels for every pixel. The inverse is then used to extend the probabilities to all the other pixels. We obtain B, which contains probabilities for every pixel. The problem with the iterative approach I suggested is that the "spreading" of the probabilities across B would require too many iterations and would likely be slower than inv.

Is there another way to propagate the probabilities across B without using the inverse, or is there a way to parallelize this equation?

Thank you.

"Steven_Lord" <slord@mathworks.com> wrote in message <ji0bpa$m1k$1@newscl01ah.mathworks.com>...
>
>
> "Jean-Philippe " <jpmorin196@gmail.com> wrote in message
> news:jhvcv1$j70$1@newscl01ah.mathworks.com...
> > Hi,
> > I am currently using the inv function in my program and it seems to be
> > mostly prohibited by everyone I found on Internet, so i'm looking for an
> > alternative. I do not think my case can be solve using \ or /, and if it
> > is, please let me know.
> >
> > I am working with images of sizes around I=256x256 and I=512x512. In my
> > algorithm, I use very large sparse matrices P=NxN where N = numel(ex:
> > I=256x256). Here is the equation I have to code:
> >
> > B = (1-alpha) * inv(speye(N)-(alpha*P)); where P is very sparse (it has
> > close to 9 diagonals grouped in 3 separated groups)
>
> Why? What are you planning to DO with B?
>
> > The computation of this equation takes forever (++30min) or give out of
> > memory errors even if I have 8gigs of memory available.
>
> The inverse of a sparse matrix is likely no longer sparsely populated, as
> Roger stated.
>
> *snip*
>
> > Would anyone have a better way to do this work?
>
> That depends on what the definition of "this work" is.
>
> --
> Steve Lord
> slord@mathworks.com
> To contact Technical Support use the Contact Us link on
> http://www.mathworks.com

Subject: Alternative to matrix inverse?

From: Roger Stafford

Date: 21 Feb, 2012 23:35:12

Message: 6 of 12

"Jean-Philippe" wrote in message <ji0ehf$28d$1@newscl01ah.mathworks.com>...
> Thank you very much for your replies.
>
> In fact, P is a graph representing the weights of transitions between the neighbouring pixels for every pixels. The inverse is then used to extend que probabilities to all the other pixels. We obtain B, which contains probabilities for every pixels. The problem with the iterative approach I suggested is that the "spreading" of the probabilities accross B would require too many iterations and would likely be slower than inv.
>
> Is there another way to propagate the probabilities accross B without using the inverse, or is there a way to parallelize this equation?
>
> Thank you.
>
> ........
- - - - - - - - -
  Could you expound at greater length on what you meant by your statement "P is a graph representing the weights of transitions between the neighbouring pixels for every pixels. The inverse is then used to extend que probabilities to all the other pixels"? It seems as though you are dealing with a Markov process with a certain "neighborhood matrix" of transition probabilities, but I don't understand the details of what you are doing. In particular, I don't understand how the number 'alpha' enters into the problem.

  As for multiplying by P repeatedly to approximate your desired result, I would think that you would take advantage of the fact that P is a band matrix with only nine diagonals so as to avoid excessive numbers of operations. Multiplying two N by N matrices in ordinary matrix multiplication requires N^3 multiplications and additions, whereas if one of them has only 9 diagonals, the right algorithm should require only 9*N^2 or fewer of these operations, which for your number N is very much less indeed. Maintaining a sparse format does not seem like a good solution since, as you pointed out, it will quickly become very non-sparse in this iteration.

Roger Stafford

Subject: Alternative to matrix inverse?

From: Jean-Philippe

Date: 22 Feb, 2012 00:17:14

Message: 7 of 12

"Roger Stafford" wrote in message <ji19ng$7jm$1@newscl01ah.mathworks.com>...
> ........
> - - - - - - - - -
> Could you expound at greater length on what you meant by your statement "P is a graph representing the weights of transitions between the neighbouring pixels for every pixels. The inverse is then used to extend que probabilities to all the other pixels?" It seems as though you are dealing with a Markov process with a certain "neighborhood matrix" of transition probabilities, but I don't understand the details of what you are doing. In particular don't understand how the number 'alpha' enters into the problem.
>
> As for multiplying by P repeatedly to approximate your desired result, I would think that you would take advantage of the fact that P is a band matrix with only nine diagonals so as to avoid excessive numbers of operations. Multiplying two N by N matrices in ordinary matrix multiplication requires N^3 multiplications and additions, whereas if one them has only 9 diagonals, the right algorithm should require only 9*N^2 or less of these operations, which for your number N is very much less indeed. Maintaining a sparse format does not seem like a good solution since as you pointed out it will quickly become very non-sparse in this iteration.
>
> Roger Stafford


P is an NxN matrix where N is the total number of pixels of an image. For a pixel i (row i of P), we want to assign a weight to every neighbouring pixel j of i. This gives us (except for pixels on the edges) 8+1 (the pixel itself) values on row i of P. We do this for every pixel, and it gives us the 9-diagonal matrix P. It is also very sparse.

Alpha is a parameter representing a probability affecting the weights.

M is another NxN matrix but is almost dense. It contains the probabilities of transition between a pixel i and another pixel k in the image. The transition probabilities will be used to segment the image according to seed points.
 
Unfortunately I cannot reveal publicly all the details of this method yet, maybe I can give you more details in private. I could send you a picture of what we want to do. (jpmorin196 [@] gmail [.] com)

Thank you for your suggestion on how to multiply P repeatedly as a 9-band matrix. However, I am not sure exactly how to do this.

Subject: Alternative to matrix inverse?

From: Roger Stafford

Date: 22 Feb, 2012 03:09:11

Message: 8 of 12

"Jean-Philippe" wrote in message <ji1c6a$eng$1@newscl01ah.mathworks.com>...
> Thank you for your suggestion on how to multiply repeatedly P as a 9 bands matrix. But, I am not sure how I can do this exactly.
- - - - - - - - - -
  You can multiply M by the k-th level diagonal of P above its main diagonal this way:

 M2 = [zeros(N,k),bsxfun(@times,M(:,1:N-k),diag(P,k).')];

or for diagonals below the main diagonal, indicated by negative k, this way:

 M2 = [bsxfun(@times,M(:,1-k:N),diag(P,k).'),zeros(N,-k)];

(You could use this single line for either kind of k:

 M2 = [zeros(N,k),bsxfun(@times,M(:,max(1,1-k):min(N-k,N)),...
       diag(P,k).'),zeros(N,-k)];

but it would probably be a little slower in action.) It would then be necessary to add all nine of these for the nine values of k for various diagonals in P. The net effect of this would be the equivalent of the ordinary matrix multiplication M*P but with far fewer operations.

  Note: It would probably be best to precompute the quantities diag(P,k).' and zeros(N,k) or zeros(N,-k) so that they would not have to be repeatedly computed for each iteration step.
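Putting the pieces above together, one hedged way to accumulate all nine per-diagonal products into the equivalent of M*P might be (the spdiags extraction and the example sizes are additions for illustration; the bsxfun lines follow Roger's formulas):

```matlab
% Illustrative banded multiply: accumulate M*P diagonal by diagonal.
N  = 400;
s  = 20;
P  = spdiags(rand(N,9), [-s-1 -s -s+1 -1 0 1 s-1 s s+1], N, N);
M  = rand(N);

[D, ks] = spdiags(P);              % D(:,j) stores diagonal ks(j) of P
MP = zeros(N);                     % accumulates the equivalent of M*P
for j = 1:numel(ks)
    k = ks(j);
    if k >= 0                      % main and super-diagonals
        MP(:,k+1:N) = MP(:,k+1:N) + ...
            bsxfun(@times, M(:,1:N-k), full(D(k+1:N,j)).');
    else                           % sub-diagonals
        MP(:,1:N+k) = MP(:,1:N+k) + ...
            bsxfun(@times, M(:,1-k:N), full(D(1:N+k,j)).');
    end
end
% err = max(max(abs(MP - M*full(P))));   % should be at rounding level
```

Note that spdiags stores a super-diagonal k in rows k+1:N of its column and a sub-diagonal in rows 1:N+k, which is why the slices above line up with diag(P,k).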

Roger Stafford

Subject: Alternative to matrix inverse?

From: Jean-Philippe

Date: 22 Feb, 2012 07:24:11

Message: 9 of 12

"Roger Stafford" wrote in message <ji1m8n$di6$1@newscl01ah.mathworks.com>...
> "Jean-Philippe" wrote in message <ji1c6a$eng$1@newscl01ah.mathworks.com>...
> > Thank you for your suggestion on how to multiply repeatedly P as a 9 bands matrix. But, I am not sure how I can do this exactly.
> - - - - - - - - - -
> You can multiply M by the k-th level diagonal of P above its main diagonal this way:
>
> M2 = [zeros(N,k),bsxfun(@times,M(:,1:N-k),diag(P,k).')];
>
> or for diagonals below the main diagonal, indicated by negative k, this way:
>
> M2 = [bsxfun(@times,M(:,1-k:N),diag(P,k).'),zeros(N,-k)];
>
> (You could use this single line for either kind of k:
>
> M2 = [zeros(N,k),bsxfun(@times,M(:,max(1,1-k):min(N-k,N)),...
> diag(P,k).'),zeros(N,-k)];
>
> but it would probably be a little slower in action.) It would then be necessary to add all nine of these for the nine values of k for various diagonals in P. The net effect of this would be the equivalent of the ordinary matrix multiplication M*P but with far fewer operations.
>
> Note: It would probably be best to precompute the quantities diag(P,k).' and zeros(N,k) or zeros(N,-k) so that they would not have to be repeatedly computed for each iteration step.
>
> Roger Stafford

I'm sorry, when I said M, in my previous post, I actually meant B.

This is what I have come to so far:

% Pre-calculation of the diagonal indices, sorted from above to below the main diagonal
% M is an identity matrix the size of P
% P is the weights matrix already scaled by alpha

B = (1-alpha) * inverse(speye(N), (alpha*P), 5);

function C = inverse(M, P, limit)
    Z = sort([-find(P(:,1)); 0; find(P(1,:))']);
    Q = zeros(size(P,1), length(Z));
    for i = 1:length(Z)
        Q(:,i) = [zeros((Z(i)>0)*Z(i),1); diag(P,Z(i)); zeros((Z(i)<0)*-Z(i),1)];
    end

    % this is where I'm not sure...
    % for every iteration until the limit
    % for every diagonal
    % identity matrix * diagonal i (it was M = M*P)
    % where to do: "It would then be necessary to add all nine of these for the nine values of k for various diagonals in P"
    for j = 1:limit
        for i = 1:length(Z)
            M2 = bsxfun(@times, M(:,1:size(M,1)-abs(Z(i))), Q(:,i));
        end
    end
end

It is late so I will go to bed and have a clearer understanding tomorrow. Thank you again.

Subject: Alternative to matrix inverse?

From: Christopher Creutzig

Date: 22 Feb, 2012 12:25:54

Message: 10 of 12

On 21.02.12 16:51, Jean-Philippe wrote:
> Thank you very much for your replies.
>
> In fact, P is a graph representing the weights of transitions between the neighbouring pixels for every pixels. The inverse is then used to extend que probabilities to all the other pixels. We obtain B, which contains probabilities for every pixels. The problem with the iterative approach I suggested is that the "spreading" of the probabilities accross B would require too many iterations and would likely be slower than inv.

I.e., you are using

  infinity
     ---
     \              i        1
     /     (alpha P)  = -----------
     ---                1 - alpha P
    i = 0

to find the sum on the right hand side, as the limit of some Markov
chain, right? (If the formula doesn't display correctly, try a
monospaced font. That may seem outdated, but continues being the best
choice for Usenet, really …)

> Is there another way to propagate the probabilities accross B without using the inverse, or is there a way to parallelize this equation?

You could simply calculate the sum directly, truncating it at some
modest upper limit, say, by monitoring norm((alpha*P)^i). If
norm(alpha*P) is sufficiently small, the sum should converge rather
quickly – plus, you avoid potential cancellation errors on the diagonal
for really small entries.
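As a rough sketch of this truncated-sum approach (the tolerance, iteration cap, example matrix, and the choice of the 1-norm, which accepts sparse input, are illustrative assumptions):

```matlab
% Illustrative truncated series for B = (1-alpha)*inv(I - alpha*P).
N     = 4096;
P     = spdiags(rand(N,9)/9, [-65 -64 -63 -1 0 1 63 64 65], N, N);  % stand-in
alpha = 0.5;
tol   = 1e-10;                     % assumed stopping tolerance
maxit = 200;                       % assumed safety cap

B = speye(N);                      % running sum, starts at (alpha*P)^0
T = speye(N);                      % current term (alpha*P)^i
for i = 1:maxit
    T = (alpha*P) * T;             % next term of the series
    B = B + T;
    if norm(T, 1) < tol            % the 1-norm accepts sparse input
        break
    end
end
B = (1-alpha) * B;
```

With norm(alpha*P) well below 1, the terms shrink geometrically and the loop exits long before the cap.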


Christopher

Subject: Alternative to matrix inverse?

From: Jean-Philippe

Date: 24 Feb, 2012 04:01:30

Message: 11 of 12

Christopher Creutzig <Christopher.Creutzig@mathworks.com> wrote in message <4F44DED2.5030901@mathworks.com>...
> On 21.02.12 16:51, Jean-Philippe wrote:
> > Thank you very much for your replies.
> >
> > In fact, P is a graph representing the weights of transitions between the neighbouring pixels for every pixels. The inverse is then used to extend que probabilities to all the other pixels. We obtain B, which contains probabilities for every pixels. The problem with the iterative approach I suggested is that the "spreading" of the probabilities accross B would require too many iterations and would likely be slower than inv.
>
> I.e., you are using
>
>   infinity
>      ---
>      \              i        1
>      /     (alpha P)  = -----------
>      ---                1 - alpha P
>     i = 0
>
> to find the sum on the right hand side, as the limit of some Markov
> chain, right? (If the formula doesn't display correctly, try a
> monospaced font. That may seem outdated, but continues being the best
> choice for Usenet, really …)
>
> > Is there another way to propagate the probabilities accross B without using the inverse, or is there a way to parallelize this equation?
>
> You could simply calculate the sum directly, truncating it at some
> modest upper limit, say, by monitoring norm((alpha*P)^i). If
> norm(alpha*P) is sufficiently small, the sum should converge rather
> quickly – plus, you avoid potential cancellation errors on the diagonal
> for really small entries.
>
>
> Christopher

Hi Christopher, thank you for helping me out :)

The problem I am actually trying to solve is to use the matrix B as the probabilities of transition between every two pixels of the image. To find B, we use the weight matrix P, which contains the intensity difference between every pixel and its neighbouring pixels. We use alpha to control how the probabilities are dispersed across B. Those probabilities are then mixed with other seed pixels' probabilities, and we achieve image segmentation by keeping the most probable seed class for every pixel.

I am still learning all those maths right now, so I am not sure what is a Markov Chain *blush*...

What I was asking for initially was a way to obtain a result similar to what the function inv gives me, because it is really slow when I'm working on big matrices. P and B are NxN matrices where N is the total number of pixels.

I suggested an iterative method to estimate B, but it requires many iterations to obtain something useful and it becomes even slower than inv. I was wondering if I could optimize the matlab implementation of this method (M=M*P; C=C+M;) where all the matrices are currently sparse. Therefore I cannot use the function norm on those matrices, and I cannot call full() on any of them because it causes an OUT OF MEMORY error.

I tried my best to understand Roger's and your suggestions, but it seems that I don't get it... Thank you for your patience with my problem.

What I understood from Roger's answer is that I could make the iterative method more efficient by reducing the number of operations. I have been able to extract all the diagonals of matrix P into a 9xN matrix. But then, I do not know what to do next to iterate.

From your answer, I understand that I could use a third way to calculate B by iterating (alpha*P)^i and by looking at the norm to stop the iterations automatically based on a certain value. If I understood correctly, how can I get the norm of a sparse matrix?

--- as reminder ---
using the matlab inv

    B = ((1-alpha) * inv(speye(N)-(alpha*P)));

or the iterative way

    B = (1-alpha) * inverse(speye(N), (alpha*P), limit);

    % which expands to:
    C = I;              % with I = speye(N)
    for i = 1:limit
        I = I*P;        % with P standing for alpha*P
        C = C+I;
    end
    B = (1-alpha) * C;

Subject: Alternative to matrix inverse?

From: Torsten

Date: 24 Feb, 2012 07:47:20

Message: 12 of 12

On 24 Feb., 05:01, "Jean-Philippe " <jpmorin...@gmail.com> wrote:
> ........
>
> Hi Christopher, thank you for helping me out :)
>
> The problem I am actually trying to solve is to use the matrix B as the probabilities of transition between every two pixels of the image. To find B, we use the weight matrix P which contains the intensity difference for every pixels and their neighbouring pixels. We use alpha to control how the probabilities are dispersed across B. Those probabilities are then mixed with others seed pixels probabilities and we achieve image segmentation by keeping the most probable seed class for every pixel.
>
> I am still learning all those maths right now, so I am not sure what is a Markov Chain *blush*...
>
> What I was asking first, was a way to obtain a similar result than what the function inv gives me because it is really slow when I'm working on big matrices. P and B are NxN matrices where N is the total number of pixels.
>
> I suggested an iterative method to estimate B, but it requires many iterations to obtain something useful and it becomes even slower than inv. I was wondering if I could optimize the matlab implementation of this method (M=M*P; C=C+M;) where all the matrices are currently sparses. Therefore I cannot use the function norm on those matrices and I cannot do full() on any of those because it causes an OUT OF MEMORY error.
>
> I tried my best to understand Roger's and your suggestions, but it seems that I don't get it... Thank you for your patience with my problem.
>
> What I understood from Roger's anwser is that I could modify the iterative method to be more efficient by reducing the number of operations. I have been able to implement a way to extract all the diagonals of matrix P in a 9xN matrix. But then, I do not know what to do next to iterate.
>
> From your answer, I understand that I could use a third way to calculate B by iterating (alpha*P)^i and by looking at the norm of P to stop the iterations automatically based of a certain value. If I understood correctly, then how can I get the norm of a sparse matrix?
>
> --- as reminder ---
> using the matlab inv
>
>     B = ((1-alpha) * inv(speye(N)-(alpha*P)));
>
> or the iterative way
>
>     B = (1-alpha) * inverse(speye(N), (alpha*P), limit);
>     B = (1-alpha) * C;
>
>     C = I;
>     for i = 1:limit
>         I = I*P;
>         C = C+I;
>     end

A possible norm would be the maximum norm, i.e. the maximum of the absolute values of the entries of the matrix (alpha*P)^i. But for the infinite series to converge, the spectral radius of the matrix alpha*P must be less than 1. Is this the case for your application? If you don't know, you could test this by checking whether (alpha*P)^i is near the zero matrix for large values of i.
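One possible way to run both checks in MATLAB (eigs estimates the largest-magnitude eigenvalue and works on sparse matrices; the power 20 and the stand-in P are arbitrary illustrative choices):

```matlab
% Stand-in data, only so the snippet runs on its own:
N = 4096; alpha = 0.5;
P = spdiags(rand(N,9)/9, [-65 -64 -63 -1 0 1 63 64 65], N, N);

% Spectral-radius test: the series of (alpha*P)^i converges iff rho < 1.
rho = abs(eigs(alpha*P, 1, 'lm'));     % largest-magnitude eigenvalue
converges = (rho < 1);

% Maximum norm of a large power, without ever calling full() on the matrix:
X  = (alpha*P)^20;
mx = full(max(abs(X(:))));             % max absolute entry of (alpha*P)^20
```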

Best wishes
Torsten.
