Thread Subject:
Solving Matrix equation with constraints

Subject: Solving Matrix equation with constraints

From: monalisa sen

Date: 7 May, 2010 20:25:20

Message: 1 of 25

Hi:

I am trying to solve AX=B, where X, A, B are n by n matrices, subject to the constraints:
a) x(i,i)=0 (i.e., all the diagonal elements of X should be 0)
b) each row sum of X should be 1, i.e., sum[x(i,j)]=1 when summing over all columns for each row i.
c) each x(i,j)>=0.

Can any of you please help?
Thanks a lot.

Subject: Solving Matrix equation with constraints

From: John D'Errico

Date: 7 May, 2010 21:20:08

Message: 2 of 25

"monalisa sen" <msen1981@gmail.com> wrote in message <hs1svg$84u$1@fred.mathworks.com>...
> Hi:
>
> I am trying to solve AX=B, where X, A, B are n by n matrices, subject to the constraints:
> a) x(i,i)=0 (i.e., all the diagonal elements of X should be 0)
> b) each row sum of X should be 1, i.e., sum[x(i,j)]=1 when summing over all columns for each row i.
> c) each x(i,j)>=0.
>
> Can any of you please help?
> Thanks a lot.

Formulate this as a linear system of equations in
n^2 unknowns. If you do so, then we find a system
of

1. n^2 equations for the equality A*X = B.

2. n equations that correspond to the diagonal
elements being zero.

3. n row sum equations.

In general, this system will not have an exact
solution, since we have n^2 + 2*n equations in
n^2 unknowns. It is an overdetermined problem.

The inequality constraints merely serve to reduce
the probability that a solution exists. However, if
you know that a solution exists, it is easy enough.

n = 4;
A = [ 16 2 3 13; 5 11 10 8; 9 7 6 12;4 14 15 1];
B = [4.78082343328984 9.22416092385747 11.3133835756658 8.68163206718693; ...
8.71780822242756 5.02919066456483 8.44394288910161 11.809058223906; ...
7.34746997932943 7.70521084234125 9.64129536853806 9.30602380979126; ...
8.89183816258422 1.19610039052821 7.7213261373564 16.1907353095312];

A
A =
    16 2 3 13
     5 11 10 8
     9 7 6 12
     4 14 15 1

Can we solve for X? We will need lsqlin from the
optimization toolbox.

M = kron(eye(n),A);
Meq = [sparse(1:n,1:(n+1):n^2,1,n,n^2);kron(ones(1,n),speye(n))];

X = lsqlin(M,B(:),[],[],Meq,[zeros(n,1);ones(n,1)],zeros(n^2,1));
X = reshape(X,n,n)
X =
            0 0.14366 0.49293 0.36341
      0.50342 0 0.39621 0.10037
       0.1052 0.0060072 0 0.88879
      0.26603 0.53135 0.20262 0

Test that it worked, to within the floating point precision
available in MATLAB.

A*X - B
ans =
   1.7764e-15 1.7764e-15 1.7764e-15 0
            0 0 1.7764e-15 -1.7764e-15
   2.6645e-15 0 0 0
  -1.7764e-15 -1.3323e-15 1.7764e-15 3.5527e-15

I'll let the interested student figure out how I built the
matrices M and Meq, and why they were appropriate.
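
For reference, one quick way to check that they do what is claimed, using the X found above:

M*X(:) - reshape(A*X,[],1)   % ~0: vec(A*X) = kron(eye(n),A)*vec(X), so M applies A to each column of X
Meq*X(:)                     % = [diag(X); sum(X,2)], i.e. [zeros(n,1); ones(n,1)] for a feasible X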

HTH,
John

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 8 May, 2010 07:46:04

Message: 3 of 25

"monalisa sen" <msen1981@gmail.com> wrote in message <hs1svg$84u$1@fred.mathworks.com>...
> Hi:
>
> I am trying to solve AX=B, where X, A, B are n by n matrices, subject to the constraints:
> a) x(i,i)=0 (i.e., all the diagonal elements of X should be 0)
> b) each row sum of X should be 1, i.e., sum[x(i,j)]=1 when summing over all columns for each row i.
> c) each x(i,j)>=0.
>
> Can any of you please help?

I doubt the problem is well described, since if A is a full-rank matrix (likely when square), then X = A\B is the unique solution of the first equation, and you can only pray that it meets the other constraints.

If A is a singular matrix, you could also formalize X(:) as an element of a polytope (hopefully non-empty), finding an element of the feasible set using a dummy linear program.

Just call LINPROG with dummy cost coefficients, all of the above constraints, and a maximum of 1 iteration. The solution returned is what you are looking for.

Use a similar Kron trick, as John showed, to build the linear constraint matrix to pass to LINPROG.
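
For illustration, a minimal sketch of that feasibility formulation (assuming square A and B are already defined, e.g. John's example above, the standard linprog(f,A,b,Aeq,beq,lb,ub) signature from the Optimization Toolbox, and leaving out the one-iteration trick):

n = size(A,1);
Aeq = [kron(speye(n),A);                % A*X = B, vectorized
       sparse(1:n,1:n+1:n^2,1,n,n^2);   % diag(X) = 0
       kron(ones(1,n),speye(n))];       % each row of X sums to 1
beq = [B(:); zeros(n,1); ones(n,1)];
f = zeros(n^2,1);                       % dummy cost: any feasible point will do
x = linprog(f,[],[],Aeq,beq,zeros(n^2,1),[]);   % lb = 0 enforces x(i,j) >= 0
X = reshape(x,n,n);

If the feasible set is empty, linprog will simply report an infeasible problem.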

Bruno

Subject: Solving Matrix equation with constraints

From: John D'Errico

Date: 8 May, 2010 08:48:08

Message: 4 of 25

"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hs34rs$5ue$1@fred.mathworks.com>...
> "monalisa sen" <msen1981@gmail.com> wrote in message <hs1svg$84u$1@fred.mathworks.com>...
> > Hi:
> >
> > I am trying to solve AX=B, where X, A, B are n by n matrices, subject to the constraints:
> > a) x(i,i)=0 (i.e., all the diagonal elements of X should be 0)
> > b) each row sum of X should be 1, i.e., sum[x(i,j)]=1 when summing over all columns for each row i.
> > c) each x(i,j)>=0.
> >
> > Can any of you please help?
>
> I doubt the problem is well described, since if A is a full-rank matrix (likely when square), then X = A\B is the unique solution of the first equation, and you can only pray that it meets the other constraints.

I agree completely with Bruno here.

The example I supplied was explicitly chosen to have
an exact solution, but most systems will not be so
lucky.

John

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 8 May, 2010 12:05:08

Message: 5 of 25

Actually, since all the equations are equalities, a simple linear inversion (no fancy toolbox needed) should do:

% John's example
n = 4;
A = [ 16 2 3 13;
    5 11 10 8;
    9 7 6 12;
    4 14 15 1];
B = [4.78082343328984 9.22416092385747 11.3133835756658 8.68163206718693; ...
    8.71780822242756 5.02919066456483 8.44394288910161 11.809058223906; ...
    7.34746997932943 7.70521084234125 9.64129536853806 9.30602380979126; ...
    8.89183816258422 1.19610039052821 7.7213261373564 16.1907353095312];

Aeq = [kron(speye(size(A)),A);
       sparse(1:n,1:n+1:n^2,1,n,n*n);
       kron(ones(1,n),speye(n))];

beq = [B(:);
       zeros(n,1);
       ones(n,1)];

X = reshape(Aeq\beq,[n n]);

disp(X)

% Bruno

Subject: Solving Matrix equation with constraints

From: monalisa sen

Date: 8 May, 2010 15:33:09

Message: 6 of 25

"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hs3k1k$76b$1@fred.mathworks.com>...
> Actually, since all the equations are equalities, a simple linear inversion (no fancy toolbox needed) should do:
>
> % John's example
> n = 4;
> A = [ 16 2 3 13;
> 5 11 10 8;
> 9 7 6 12;
> 4 14 15 1];
> B = [4.78082343328984 9.22416092385747 11.3133835756658 8.68163206718693; ...
> 8.71780822242756 5.02919066456483 8.44394288910161 11.809058223906; ...
> 7.34746997932943 7.70521084234125 9.64129536853806 9.30602380979126; ...
> 8.89183816258422 1.19610039052821 7.7213261373564 16.1907353095312];
>
> Aeq = [kron(speye(size(A)),A);
> sparse(1:n,1:n+1:n^2,1,n,n*n);
> kron(ones(1,n),speye(n))];
>
> beq = [B(:);
> zeros(n,1);
> ones(n,1)];
>
> X = reshape(Aeq\beq,[n n]);
>
> disp(X)
>
> % Bruno


Thanks a lot, John and Bruno. I really appreciate your help. Both my A and B matrices are full rank, but B can be a real or complex matrix. How would I go about it if B is complex?

thanks

Subject: Solving Matrix equation with constraints

From: monalisa sen

Date: 8 May, 2010 15:44:05

Message: 7 of 25

If I have to solve for X with the same constraints as above, but now the equation is
E(X+X')+C(XX')=D; all these matrices E, C, D are full rank and real.

Any help or suggestions would be great.

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 8 May, 2010 21:24:06

Message: 8 of 25

"monalisa sen" <msen1981@gmail.com> wrote in message <hs40s5$7df$1@fred.mathworks.com>...
> if I have to solve for X with the same constraints as above, but now the equation is
> E(X+X')+C(XX')=D; all these matrices E,C,D are full rank and real.
>
> any help/ suggestion would be great.

I would suggest taking a look at Newton's method. Each Newton step reduces to a linear problem where you can use the technique John and I indicated earlier. But again, I have reason to think you have yet to formulate the problem correctly, so I'll not give any concrete help.

Bruno

Subject: Solving Matrix equation with constraints

From: John D'Errico

Date: 9 May, 2010 10:15:06

Message: 9 of 25

"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hs3k1k$76b$1@fred.mathworks.com>...
> Actually, since all the equations are equalities, a simple linear inversion (no fancy toolbox needed) should do:
>

NO! I strongly disagree with this statement.

There is a very significant difference between what
Bruno shows and what I showed how to do.

When you solve a linear system of equations subject to
equality constraints, as I did, you force the equality
constraints to hold exactly. Thus, solve the linear
system

   A*x = y

subject to the linear equality constraints

   Aeq*x = beq

When you do so with a tool designed to solve that
linear system, you force the constraints in Aeq to
be satisfied EXACTLY, while the equations in A*x = y
are satisfied as well as you can subject to the equality
constraints. You are then minimizing

  norm(A*x - y)

What Bruno has suggested doing is to replace this
equality constrained problem with a very different
one!!!!!

He tells you to form the new linear system

   [A;Aeq] * x = [y;beq]

and to solve that system as a single least squares
problem. The equality constraints are now lumped
in with the rest of A, allowing them to be only
approximately satisfied!!!!! You are now solving
the problem of minimizing

   norm([A;Aeq]*x - [y;beq])

This is a VERY different thing. A very different
solution, and I'll claim not an appropriate one
given the problem statement. That it is an easier
one to solve in MATLAB since it does not require
lsqlin does not make it a better solution.

As long as you have an exact solution, as I did
when I made up data in my test case, it did not
matter. The two methods must yield the same
result. However, in the real world, when your data
is imperfect, there is a significant difference!!!!!

John

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 9 May, 2010 11:30:22

Message: 10 of 25

"monalisa sen" <msen1981@gmail.com> wrote in message <hs40s5$7df$1@fred.mathworks.com>...
> if I have to solve for X with the same constraints as above, but now the equation is
> E(X+X')+C(XX')=D; all these matrices E,C,D are full rank and real.
>
> any help/ suggestion would be great.

Despite what I wrote, I can't resist writing some code to solve the equation:
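
Roughly, the linearization behind the code below: after left-multiplying by inv(C) (so EE = C\E and DD = C\D), the equation becomes EE*(X+X') + X*X' = DD. Writing the next iterate as X + DX and dropping the second-order term DX*DX', each Newton step solves the linear equation

   EE*(DX+DX') + X*DX' + DX*X' = DD - EE*(X+X') - X*X'

for DX, and the updated X is then projected back so that it still satisfies diag(X) = 0 and sum(X,2) = 1. The matrices M1..M4 below are just the vectorized (Kronecker) forms of the four left-hand-side terms.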

% script newtonsolve.m
%% Data
n=4;

E = 10*randn(n);
C = 10*randn(n);

% Generate X that satisfies the constraints
X = rand(n);
X(1:n+1:end) = 0;
X = diag(1./sum(X,2))*X;

% rhs
D = C*X*X'+E*(X+X');

fprintf('X input:\n')
disp(X);
fprintf('D\n')
disp(D);

%% Engine
idx = zeros(n);
idx(:) = 1:numel(idx);

maxiter = 100;
tol = 1e-10;

Aeq = [sparse(1:n,1:n+1:n^2,1,n,n*n);
       kron(ones(1,n),speye(n))];

beq = [zeros(n,1);
       ones(n,1)];
       
X0 = reshape(Aeq \ beq, [n n]);

Q = null(full(Aeq));
P = Q*Q'; % Projection onto the null space of Aeq
 
EE = C\E;
DD = C\D;

% Matrix for EE*DX
M1 = kron(eye(size(EE)),EE);

% Matrix for EE*DX'
M2 = kron(EE',eye(size(EE)))';
M2 = M2(idx',:);

% Starting solution (all zeros)
X = zeros(n);

lhsfun = @(X) (EE*(X+X') + X*X');
% Newton iteration
for i=1:maxiter
    
    % Matrix for X*DX'
    M3 = kron(X',eye(size(X)))';
    M3 = M3(idx',:);
    
    % Matrix for DX*X'
    M4 = kron(X',eye(size(X')))';
    
    % Group all together
    M = M1+M2+M3+M4;
    
    % Rhs
    B = DD - lhsfun(X);
 
    DX = reshape(pinv(M)*B(:),[n n]);

    XOLD = X;
    % Update X, and make sure it satisfies the constraints
    X(:) = X0(:) + P*(X(:)+DX(:)-X0(:));
    res = norm(XOLD-X,'fro');
    if res<tol
        break
    end
end

%% Check
fprintf('#iterations = %d\n', i);
fprintf('X output:\n')
disp(X)
fprintf('E*(X+X'') + C*X*X''\n')
disp(E*(X+X') + C*X*X')

%%%%%%%%%%%%%%%%%%%%%

>> newtonsolve
X input:
                   0 0.200011057410700 0.435143156125102 0.364845786464198
   0.041699642636209 0 0.838590892356224 0.119709465007567
   0.632003764772581 0.038824206504272 0 0.329172028723146
   0.283411262822000 0.400519151217284 0.316069585960716 0

D
   8.437875454824908 8.231567016110517 25.378926447605572 20.767614997066133
 -32.576607481949722 -23.167874664568505 -23.733560823288471 -17.317195377377494
 -21.109645575796083 -19.144084118959430 -28.679022904527024 -21.219115873571241
  27.951333799118892 23.738997187380996 30.296712983976700 35.870650389416355

#iterations = 10
X output:
                   0 0.200011057410682 0.435143156125108 0.364845786464209
   0.041699642636213 0 0.838590892356238 0.119709465007549
   0.632003764772585 0.038824206504273 0 0.329172028723141
   0.283411262821972 0.400519151217300 0.316069585960729 0

E*(X+X') + C*X*X'
   8.437875454824923 8.231567016110521 25.378926447605522 20.767614997065859
 -32.576607481949466 -23.167874664568380 -23.733560823288894 -17.317195377377303
 -21.109645575795909 -19.144084118959562 -28.679022904527226 -21.219115873571123
  27.951333799119027 23.738997187380903 30.296712983977219 35.870650389416028

% Bruno

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 9 May, 2010 12:30:09

Message: 11 of 25

"John D'Errico" <woodchips@rochester.rr.com> wrote in message <hs61va$k2g$1@fred.mathworks.com>...
> "Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hs3k1k$76b$1@fred.mathworks.com>...
> > Actually, since all the equations are equalities, a simple linear inversion (no fancy toolbox needed) should do:
> >
>
> NO! I strongly disagree with this statement.
>
> There is a very significant difference between what
> Bruno shows and what I showed how to do.
>
> When you solve a linear system of equations subject to
> equality constraints as did I, you force the equality
> constraints to hold exactly true. Thus, solve the linear
> system
>
> A*x = y
>
> subject to the linear equality constraints
>
> Aeq*x = beq
>
> When you do so with a tool designed to solve that
> linear system, you force the constraints in Aeq to
> be satisfied EXACTLY, while the equations in A*x = y
> are satisfied as well as you can subject to the equality
> constraints. You are then minimizing
>
> norm(A*x - y)
>
> What Bruno has suggested doing is to replace this
> equality constrained problem with a very different
> one!!!!!

Sorry John, what did I replace? Allow me to quote the OP:

["I am trying to solve AX=B;"]

Then Wikipedia page: http://en.wikipedia.org/wiki/Equation_solving

[ In mathematics, to solve an equation is to find what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an equation (two expressions related by equality). These expressions contain one or more unknowns, which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more in general, mathematical expressions. A solution of the equation is an assignment of expressions to the unknowns that satisfies the equation; in other words, expressions such that, when they are substituted for the unknowns, the equation becomes a tautology (a provably true statement).
 ...

It is also possible that the task is to find a solution, among possibly many, that is best in some respect. Problems of that nature are called optimization problems; solving an optimization problem is generally *not* referred to as "equation solving".
]

The last sentence is what I have in mind. The OP asked to *solve* an equation (not to approximate it). I just put the three equations stated in matrix form. I did *not* replace anything.

Bruno

Subject: Solving Matrix equation with constraints

From: John D'Errico

Date: 9 May, 2010 14:26:03

Message: 12 of 25

"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hs69sh$d2p$1@fred.mathworks.com>...
> "John D'Errico" <woodchips@rochester.rr.com> wrote in message <hs61va$k2g$1@fred.mathworks.com>...
> > "Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hs3k1k$76b$1@fred.mathworks.com>...
> > > Actually, since all the equations are equalities, a simple linear inversion (no fancy toolbox needed) should do:
> > >
> >
> > NO! I strongly disagree with this statement.
> >
> > There is a very significant difference between what
> > Bruno shows and what I showed how to do.
> >
> > When you solve a linear system of equations subject to
> > equality constraints as did I, you force the equality
> > constraints to hold exactly true. Thus, solve the linear
> > system
> >
> > A*x = y
> >
> > subject to the linear equality constraints
> >
> > Aeq*x = beq
> >
> > When you do so with a tool designed to solve that
> > linear system, you force the constraints in Aeq to
> > be satisfied EXACTLY, while the equations in A*x = y
> > are satisfied as well as you can subject to the equality
> > constraints. You are then minimizing
> >
> > norm(A*x - y)
> >
> > What Bruno has suggested doing is to replace this
> > equality constrained problem with a very different
> > one!!!!!
>
> Sorry John, what did I replace? Allow me to quote the OP:
>
> ["I am trying to solve AX=B;"]

SUBJECT to a set of equality constraints.

When you append the equality constraints to the linear
system A*X = B, the system becomes overdetermined.
It no longer has an exact solution in general.

Therefore, the equality constraints will no longer be
exactly satisfied. The problems are very different; although
one might argue that the difference is a subtle one, the
difference is important.

Surely you understand that there is a difference here?

For example, solve the linear system A*x = b.

  A = rand(2,3)
  b = rand(2,1)

A =
    0.814723686393179 0.126986816293506 0.63235924622541
    0.905791937075619 0.913375856139019 0.0975404049994095
b =
    0.278498218867048
    0.546881519204984

A\b
ans =
         0.293942721840442
         0.307245445468768
                         0

This is an underdetermined problem, as you well know.

However, suppose we choose to solve that linear
system subject to a set of EQUALITY constraints?

  Aeq = rand(2,3);
  beq = rand(2,1);

Aeq
Aeq =
         0.957506835434298 0.157613081677548 0.957166948242946
         0.964888535199277 0.970592781760616 0.485375648722841

beq
beq =
           0.8002804688888
         0.141886338627215

Here we solve the linear system A*x = b, subject to
the exact satisfaction of the linear equality constraints.

x0 = lsqlin(A,b,[],[],Aeq,beq)
x0 =
          1.45935801310315
         -1.08173016982432
        -0.445658909531652

What does this do in terms of the original system?

A*x0 - b
ans =
         0.491293318876076
        -0.256502768131462

Aeq*x0 - beq
ans =
                         0
      1.38777878078145e-16

See that the equality constraints are exactly that -
equality constraints. They are satisfied exactly. See
what happens when we combine the two matrices
into one and just use backslash.

x1 = [A;Aeq]\[b;beq]
x1 =
          1.07802458175086
         -0.58604426595662
        -0.394212388430651

See that the equality constraints are not satisfied.
In fact, they are no longer equality constraints.

Aeq*x1 - beq
ans =
        -0.237759874646762
         0.138135792868382

A*x1 - b
ans =
         0.276090198057855
        -0.144145864262541

John

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 9 May, 2010 15:01:07

Message: 13 of 25

"John D'Errico" <woodchips@rochester.rr.com> wrote in message <hs6glr$mf1$1@fred.mathworks.com>...

>
> A*x0 - b
> ans =
> 0.491293318876076
> -0.256502768131462
>
> Aeq*x0 - beq
> ans =
> 0
> 1.38777878078145e-16
>
> See that the equality constraints are exactly that -
> equality constraints. They are satisfied exactly. See
> what happens when we combine the two matrices
> into one and just use backslash.
>

But John, your example does NOT provide a solution of A*x0 = b, regardless of the method (since A*x0 - b is NOT zero).

The OP asked to SOLVE A*x = b, i.e., A*x - b = 0 to numerical precision.

Bruno

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 9 May, 2010 16:09:04

Message: 14 of 25

Here is more robust code using Levenberg-Marquardt damping:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% script LMsolve.m
% Solve for X the system
% E*(X+X')+C*(X*X') = D
% diag(X) = 0
% sum(X,2) = 1
%
% All matrices are square and real and have the same size (n x n)

%% Data
n=4;

E = randn(n);
C = randn(n);

% Generate X that satisfies the constraints
XIN = rand(n);
XIN(1:n+1:end) = 0;
XIN = diag(1./sum(XIN,2))*XIN;

% rhs
D = C*(XIN*XIN')+E*(XIN+XIN');

fprintf('X input:\n')
disp(XIN);
fprintf('D\n')
disp(D);

%% Engine
n = length(D);
n2 = n*n;
idx = zeros(n);
idx(:) = 1:numel(idx);

% Newton parameters
maxiter = 500;
tol = 1e-6;
% levenberg-marquardt parameters
lbdref = 0.01;
v = 2;

Aeq = [sparse(1:n,1:n+1:n^2,1,n,n*n);
       kron(ones(1,n),speye(n))];

beq = [zeros(n,1);
       ones(n,1)];
       
X0 = reshape(Aeq \ beq, [n n]);

Q = null(full(Aeq));
P = Q*Q'; % Projection operator on orthogonal span <rows Aeq>

% New matrix equation: EE*(X+X')+(X*X') = DD
EE = C\E;
DD = C\D;

% Matrix for EE*DX
M1 = kron(speye(n),EE);

% Matrix for EE*DX'
M2 = kron(EE',speye(n))';
M2 = M2(idx',:);

converge = false;
resfun = @(X) DD - (EE*(X+X') + X*X');

lbd = lbdref;
nresold = Inf;

% Starting solution (all zeros)
X = zeros(n);

% Newton iteration
for i=1:maxiter

    % Project on the affine space
    X(:) = X0(:) + P*(X(:)-X0(:));
    
    % Matrix for X*DX'
    M3 = kron(X',speye(n))';
    M3 = M3(idx',:);
    
    % Matrix for DX*X'
    M4 = kron(X',speye(n))';
    
    % Group all together
    M = M1+M2+M3+M4;

    % Diagonal of Levenberg-marquardt
    LM = sqrt(sum(M.^2,1));
    LM = spdiags(LM(:), 0, n2, n2);

    % Rhs
    B = resfun(X);
    RHS = [B(:); zeros(n2,1)];
    
    % Keep track old matrix
    XOLD = X;
    
    % LM iteration on damping factor lbd
    while true
        J = [M; lbd*LM];
        
        DX = J \ RHS;
        DX = P*DX;
        DX = reshape(DX,[n n]);

        % Update X
        X(:) = XOLD(:) + DX(:);
        
        residual = resfun(X);
        nres = norm(residual(:));
        if nres<nresold || lbd>1e3
            nresold = nres;
            if lbd>1e-3
                lbd = lbd/v;
            end
            break % while-loop
        else
            lbd = lbd*v;
        end
    end % LM inner-loop
    nDX = norm(DX(:));
    if nDX<tol
        converge = true;
        break
    end
end % for-loop

X = XOLD;

%% Check
if converge
    fprintf('#iterations = %d\n', i);
else
    warning('did not converge');
end
fprintf('X output:\n')
disp(X)
fprintf('E*(X+X'') + C*X*X''\n')
disp(E*(X+X') + C*(X*X'))

%% Test on command line

>> LMsolve
X input:
         0 0.2284 0.4930 0.2786
    0.7168 0 0.1256 0.1576
    0.1499 0.6263 0 0.2238
    0.4942 0.4217 0.0841 0

D
   -0.1010 -1.1638 -1.0334 -0.8023
    0.5220 0.0579 -1.1870 -0.9846
   -0.2692 -2.2210 -0.4314 -0.9152
   -0.2210 -0.7911 -1.0095 -1.0191

#iterations = 10
X output:
         0 0.2284 0.4930 0.2786
    0.7168 0 0.1256 0.1576
    0.1499 0.6263 0 0.2238
    0.4942 0.4217 0.0841 0

E*(X+X') + C*X*X'
   -0.1010 -1.1638 -1.0334 -0.8023
    0.5220 0.0579 -1.1870 -0.9846
   -0.2692 -2.2210 -0.4314 -0.9152
   -0.2210 -0.7911 -1.0095 -1.0191

% Bruno

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 9 May, 2010 21:15:21

Message: 15 of 25

I made an important change in the way the internal projection is performed. The new code is below:

function X = LMsolve(E, C, D)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% X = LMsolve(E, C, D)
% Solve for X the system
% E*(X+X')+C*(X*X') = D
% diag(X) = 0
% sum(X,2) = 1
%
% All matrices are square and real and have the same size (n x n)
% Levenberg-Marquardt method
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Engine, try to find X from E, C, D
n = length(D);
n2 = n*n;
idx = reshape(1:n2,[n n]);
p = idx.';
transm = sparse(p(:), idx(:), 1, n2, n2);

% Newton pamameters
maxiter = 1000;
tol = 1e-6;
% Levenberg-Marquardt parameters
lbdref = 0.01;
v = 4;

I = speye(n);

Aeq = [sparse(1:n,1:n+1:n^2,1,n,n*n); ...
       kron(ones(1,n),I)];

beq = [zeros(n,1); ...
       ones(n,1)];
       
X0 = reshape(Aeq \ beq, [n n]);

Q = sparse(null(full(Aeq)));
P = Q*Q'; % Projection operator on orthogonal span <rows Aeq>

% Reformulate matrix equation: EE*(X+X')+(X*X') = DD
% and other constraints
EE = C\E;
DD = C\D;

% Matrix for EE*DX
M1 = kron(I,EE);

% Matrix for EE*DX'
M2 = kron(EE',I)';
M2 = M2(idx',:);

M12 = M1+M2;

converge = false;
resfun = @(X) DD - (EE*(X+X') + X*X');

lbd = lbdref;
nresold = Inf;

% "Random" starting solution
X = zeros(n);

% Newton iteration
for i=1:maxiter

    % Project on the affine space
    X(:) = X0(:) + P*(X(:)-X0(:));

    % Matrix for DX*X'
    M4 = kron(X',I)';
    
    % Matrix for X*DX'
    M3 = transm*M4;
    
    % Group all together
    M = M12+M3+M4;
    
    % We look for DX := Q*DY
    % Such that M*DX = RHS
    M = M*Q;

    % Diagonal of Levenberg-marquardt damping
    LM = sqrt(sum(M.^2,1));
    m = size(M,2);
    LM = sparse(1:m, 1:m, LM, m, m);

    % Rhs
    B = resfun(X);
    RHS = [B(:); zeros(m,1)];
    
    % Keep track old matrix
    XOLD = X;
    
    % LM iteration on damping factor lbd
    while true
        J = [M; lbd*LM];
        
        DY = J\RHS;
        DX = Q*DY;
        DX = reshape(DX,[n n]);

        % Update X
        X(:) = XOLD(:) + DX(:);
        
        residual = resfun(X);
        nres = norm(residual(:));
        if nres<=nresold || lbd>1e3
            nresold = nres;
            if lbd>1e-6
                lbd = lbd/v;
            end
            break % while-loop
        else
            lbd = lbd*v;
        end
    end % LM inner-loop
    nDX = norm(DX(:));
    if nDX<tol
        converge = true;
        break
    end
end % for-loop

X = XOLD;

%% Check
if converge
    fprintf('#iterations = %d\n', i);
else
    warning('not yet converged');
end

end % LMsolve

%%%%%

% Test LMSolve, a Levenberg-Marquardt solver
%% Data
n=4;

E = rand(n)+diag(rand(1,n));
C = rand(n)+diag(rand(1,n));

% Generate matrix X that satisfies constraints
XIN = rand(n);
XIN(1:n+1:end) = 0;
XIN = diag(1./sum(XIN,2))*XIN;

% rhs
D = C*(XIN*XIN')+E*(XIN+XIN');

fprintf('X input:\n')
disp(XIN);
fprintf('D:\n')
disp(D);

%% Command line
X = LMsolve(E, C, D);

fprintf('------------------------\n')
fprintf('Levenberg-Marquardt: X output:\n')
disp(X)
fprintf('E*(X+X'') + C*X*X'':\n');
DX = E*(X+X') + C*(X*X');
disp(DX);

% Bruno

Subject: Solving Matrix equation with constraints

From: monalisa

Date: 11 May, 2010 20:51:06

Message: 16 of 25

Sorry for the late reply... thanks a ton!
"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hs78l9$769$1@fred.mathworks.com>...
> I made an important change in the way the internal projection is performed.

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 11 May, 2010 21:01:07

Message: 17 of 25

"monalisa " <mde1506@gmail.com> wrote in message <hscfvq$dmh$1@fred.mathworks.com>...
> Sorry for late reply..thanks a ton !

Smile, good to know you are still reading and that it helps. It took me some effort to get the code working right (checked up to n = 20).

Bruno

Subject: Solving Matrix equation with constraints

From: monalisa

Date: 11 May, 2010 21:22:04

Message: 18 of 25

I learnt from the discussions :)...I really appreciate your effort...it took me some time to understand the algorithm. For my case, n=46 and it's taking some time...but I am trying it right now.
Thanks again.

"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hscgij$krq$1@fred.mathworks.com>...
> "monalisa " <mde1506@gmail.com> wrote in message <hscfvq$dmh$1@fred.mathworks.com>...
> > Sorry for late reply..thanks a ton !
>
> Smile, good to know you are still reading and it helps. It takes me some effort to make the code working right (check up to n = 20).
>
> Bruno

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 11 May, 2010 21:34:06

Message: 19 of 25

"monalisa " <mde1506@gmail.com> wrote in message <hschps$ae3$1@fred.mathworks.com>...
> I learnt from the discussions :)...I really appreciate your effort...it took me some time to understand the algorithm. For my case, n=46 and its taking some time...but I am trying it right now.
> Thanks again.

If you try such a large n, the weak point of the algorithm is perhaps the line computing the null space of the Aeq matrix. I have actually replaced this part with a new function that I just submitted to the FEX:
http://www.mathworks.com/matlabcentral/fileexchange/27550

function X = LMsolve(E, C, D)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% X = LMsolve(E, C, D)
% Solve for X the system
% E*(X+X') + C*(X*X') = D
% diag(X) = 0
% sum(X,2) = 1
%
% All matrices are square and real and have the same size (n x n)
% Levenberg-Marquardt method
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Engine, try to find X from E, C, D
n = length(D);
n2 = n*n;
idx = reshape(1:n2,[n n]);
p = idx.';
TMat = sparse(p(:), idx(:), 1, n2, n2);

% Newton parameters
maxiter = 1000;
tol = 1e-6;
% Levenberg-Marquardt parameters
lbdref = 0.01;
v = 4; % speed of damping factor

I = speye(n);

Aeq = [sparse(1:n,1:n+1:n2,1,n,n2); ...
       kron(ones(1,n),I)];

beq = [zeros(n,1); ...
       ones(n,1)];
       
X0 = reshape(Aeq \ beq, [n n]);

% SPNULL: http://www.mathworks.com/matlabcentral/fileexchange/27550
Q = spnull(Aeq);
P = Q*Q'; % Projection operator on orthogonal span <rows Aeq>

% Reformulate matrix equation: EE*(X+X')+(X*X') = DD
% (and other constraints)
EE = C\E;
DD = C\D;

% Matrix for EE*DX
M1 = kron(I,EE);

% Matrix for EE*DX'
M2 = TMat*kron(EE,I);

M12 = M1+M2;

converge = false;
resfun = @(X) DD - (EE*(X+X') + X*X');

lbd = lbdref;
nresold = Inf;

% "Random" starting solution
X = zeros(n);

% Project on the affine space
X(:) = X0(:) + P*(X(:)-X0(:));

TEMat = speye(n2) + TMat;

% Newton iteration
for i=1:maxiter

    % Matrix for DX*X' + X*DX'
    M34 = TEMat*kron(X,I);
    
    % Group all together
    M = M12+M34;
    
    % We look for DX := Q*DY
    % Such that M*DX = RHS
    M = M*Q;

    % Diagonal of Levenberg-marquardt damping
    LM = sqrt(sum(M.^2,1));
    m = size(M,2);
    LM = sparse(1:m, 1:m, LM, m, m);

    % Rhs
    B = resfun(X);
    RHS = [B(:); zeros(m,1)];
    
    % Keep track old matrix
    XOLD = X;
    
    % LM iteration on damping factor lbd
    while true
        J = [M; lbd*LM];
        
        DY = J\RHS;
        DX = Q*DY;
        DX = reshape(DX,[n n]);

        % Update X
        X = XOLD + DX;
        
        residual = resfun(X);
        nres = norm(residual(:));
        if nres<=nresold || lbd>1e3
            nresold = nres;
            if lbd>1e-6
                lbd = lbd/v;
            end
            break % while-loop
        else
            lbd = lbd*v;
        end
    end % LM inner-loop
    nDX = norm(DX(:));
    if nDX<tol
        converge = true;
        break % for-loop
    end
end % for-loop

if nres>nresold
    X = XOLD;
end

%% Check
if converge
    fprintf('#iterations = %d\n', i);
else
    warning('not yet converged');
end

end % LMsolve

% Bruno

Subject: Solving Matrix equation with constraints

From: monalisa

Date: 11 May, 2010 21:43:05

Message: 20 of 25

The solution is converging, but many x(i,j) are negative in the final X matrix.
Does it mean one of the constraint conditions is not working?


"monalisa " <mde1506@gmail.com> wrote in message <hschps$ae3$1@fred.mathworks.com>...
> I learnt from the discussions :)...I really appreciate your effort...it took me some time to understand the algorithm. For my case, n=46 and its taking some time...but I am trying it right now.
> Thanks again.
>
> "Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hscgij$krq$1@fred.mathworks.com>...
> > "monalisa " <mde1506@gmail.com> wrote in message <hscfvq$dmh$1@fred.mathworks.com>...
> > > Sorry for late reply..thanks a ton !
> >
> > Smile, good to know you are still reading and it helps. It takes me some effort to make the code working right (check up to n = 20).
> >
> > Bruno

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 11 May, 2010 21:50:07

Message: 21 of 25

"monalisa " <mde1506@gmail.com> wrote in message <hscj19$pf5$1@fred.mathworks.com>...
> The solution is converging , but many x(i,j) are negative in the final X matrix.
> does it mean one of the constraint condition is not working?

Do you want to impose that X be positive? Maybe the easiest option is to use one of the functions (e.g., FMINCON) in the Optimization Toolbox. Maybe John can guide you; I do not have a license for this toolbox.
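
For what it's worth, a rough, untested sketch of what that FMINCON route might look like (assuming the standard fmincon(fun,x0,A,b,Aeq,beq,lb,ub) signature, with the squared residual of the matrix equation as the objective and the diagonal, row-sum and nonnegativity conditions as constraints):

n = length(D);
asmat = @(x) reshape(x,n,n);
obj = @(x) norm(E*(asmat(x)+asmat(x)') + C*(asmat(x)*asmat(x)') - D, 'fro')^2;
Aeq = [sparse(1:n,1:n+1:n^2,1,n,n^2);   % diag(X) = 0
       kron(ones(1,n),speye(n))];       % row sums of X = 1
beq = [zeros(n,1); ones(n,1)];
Xstart = (ones(n)-eye(n))/(n-1);        % feasible start: off-diagonal entries 1/(n-1)
x = fmincon(obj,Xstart(:),[],[],full(Aeq),beq,zeros(n^2,1),[]);
X = reshape(x,n,n);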

Bruno

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 11 May, 2010 21:57:04

Message: 22 of 25

Sorry, I overlooked your first post; the condition X>=0 is indeed required. Another option is to just project X onto the positive set after updating X:
  
% ...
X = XOLD + DX;
% new code
X = max(X,0);
DX = X-XOLD;
%
...

Bruno

Subject: Solving Matrix equation with constraints

From: monalisa

Date: 12 May, 2010 00:12:06

Message: 23 of 25

I really appreciate your patience. When I project X onto the positive set after updating, in the final iteration some of the row sums come out > 1.


"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hscjrg$g98$1@fred.mathworks.com>...
> Sorry I overlooked your first post and the condition X>=0 is indeed required. Another option is just project X to the positive set after updating X
>
> % ...
> X = XOLD + DX;
> % new codes
> X = max(X,0);
> DX = X-XOLD;
> %
> ...
>
> Bruno

Subject: Solving Matrix equation with constraints

From: Bruno Luong

Date: 12 May, 2010 05:24:04

Message: 24 of 25

"monalisa " <mde1506@gmail.com> wrote in message <hscrom$pvd$1@fred.mathworks.com>...
> I really appreciate your patience. When I project X onto the positive set after updating, in the final iteration some of the row sums come out > 1.

Sorry, I forgot to take into account the other constraints. What I can offer is the following change: find the Newton step size (called rho) so that the condition X>=0 is preserved. I made two alterations in the code: finding the maximum allowed step size, and adjusting the test for breaking the Newton for-loop accordingly.

Remember that the more you increase the size, the more unstable the problem becomes. Indeed, the matrix equation looks like a non-linear Fredholm integral equation of the first kind, which is a prototypical ill-posed problem. Any numerical solution will have its limitations.
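
In formula form, the step size used in the update X = XOLD + rho*DX below is

   rho = min( 1, min over the entries with DX(i,j) < 0 of -XOLD(i,j)/DX(i,j) ),

i.e. the largest fraction of the full Newton step (capped at 1) for which every entry of the updated X stays nonnegative.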

Here is the new code.

function X = LMsolve(E, C, D)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% X = LMsolve(E, C, D)
% Solve for X the system
% E*(X+X') + C*(X*X') = D
% diag(X) = 0
% sum(X,2) = 1
% X >= 0
%
% All matrices are square and real and have the same size (n x n)
% Levenberg-Marquardt method
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% Engine, try to find X from E, C, D
n = length(D);
n2 = n*n;
idx = reshape(1:n2,[n n]);
p = idx.';
TMat = sparse(p(:), idx(:), 1, n2, n2);

% Newton parameters
maxiter = 1000;
tol = 1e-10;
% Levenberg-Marquardt parameters
lbdref = 0.01;
v = 2; % speed of damping factor

I = speye(n);

Aeq = [sparse(1:n,1:n+1:n2,1,n,n2); ...
       kron(ones(1,n),I)];

beq = [zeros(n,1); ...
       ones(n,1)];
       
X0 = reshape(Aeq \ beq, [n n]);

% SPNULL: http://www.mathworks.com/matlabcentral/fileexchange/27550
Q = spnull(Aeq);
P = Q*Q'; % Projection operator on orthogonal span <rows Aeq>

% Reformulate matrix equation: EE*(X+X')+(X*X') = DD
% (and other constraints)
EE = C\E;
DD = C\D;

% Matrix for EE*DX
M1 = kron(I,EE);

% Matrix for EE*DX'
M2 = TMat*kron(EE,I);

M12 = M1+M2;

converge = false;
resfun = @(X) DD - (EE*(X+X') + X*X');

lbd = lbdref;
nresold = Inf;

% "Random" starting solution
X = zeros(n);

TEMat = speye(n2) + TMat;

% Project on the affine space
X(:) = X0(:) + P*(X(:)-X0(:));

% Newton iteration
for i=1:maxiter

    % Matrix for DX*X' + X*DX'
    M34 = TEMat*kron(X,I);
    
    % Group all together
    M = M12+M34;
    
    % We look for DX := Q*DY
    % Such that M*DX = RHS
    M = M*Q;

    % Diagonal of Levenberg-marquardt damping
    LM = sqrt(sum(M.^2,1));
    m = size(M,2);
    LM = sparse(1:m, 1:m, LM, m, m);

    % Rhs
    B = resfun(X);
    RHS = [B(:); zeros(m,1)];
    
    % Keep track old matrix
    XOLD = X;
    
    % LM iteration on damping factor lbd
    while true
        J = [M; lbd*LM];
        
        DY = J\RHS;
        DX = Q*DY;
        DX = reshape(DX,[n n]);

        % Find the maximum step size rho so as to prevent
        % X from becoming < 0 after the update
        DX(1:n+1:n2) = 0;  % zero out the diagonal of DX
        rho = -XOLD./DX;
        rho(DX>=0) = Inf;
        rho = min(min(rho(:)),1);
        % Update X
        X = XOLD + rho*DX;
        
        residual = resfun(X);
        nres = norm(residual(:));
        if nres<=nresold || lbd>1e3
            nresold = nres;
            if lbd>1e-6
                lbd = lbd/v;
            end
            break % while-loop
        else
            lbd = lbd*v;
        end
    end % LM inner-loop
    nDX = rho*norm(DX(:));
    if nDX<tol
        converge = true;
        break % for-loop
    end
end % for-loop

if nres>nresold
    X = XOLD;
end

%% Check
if converge
    fprintf('#iterations = %d\n', i);
else
    warning('not yet converged');
end

end % LMsolve

%%%%%

% Bruno

Subject: Solving Matrix equation with constraints

From: monalisa

Date: 12 May, 2010 17:15:23

Message: 25 of 25

Thanks a ton....I am unfamiliar with Fredholm integral equations, but I will read up on them.
Thanks again.


"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <hsde1k$jd9$1@fred.mathworks.com>...
> "monalisa " <mde1506@gmail.com> wrote in message <hscrom$pvd$1@fred.mathworks.com>...
> > I really appreciate your patience. When I project X onto the positive set after updating, in the final iteration some of the row sums come out > 1.
>
> Sorry, I forgot to take into account the other constraints. What I can offer is the following change: find the Newton step size (called rho) so that the condition X>=0 is preserved. I made two alterations in the code: finding the maximum allowed step size, and adjusting the test for breaking the Newton for-loop accordingly.
>
> Remember that the more you increase the size, the more unstable the problem becomes. Indeed, the matrix equation looks like a non-linear Fredholm integral equation of the first kind, which is a prototypical ill-posed problem. Any numerical solution will have its limitations.
