
Thread Subject:
minimization -least squares problem

Subject: minimization -least squares problem

From: Aidy

Date: 7 Jul, 2011 09:51:10

Message: 1 of 21

Hi everyone,

Please have a look at the link below:
http://imageshack.us/photo/my-images/269/objectivefunc.png/

The image in the link shows an objective cost function I would like to minimize. Essentially, I want to minimize the surface areas of the light blue triangles formed by the line segments and the point 'VP'.

By minimizing these areas I want to readjust the point named 'VP' shown in the uploaded image. I have an initial approximation of 'VP' to input to the nonlinear least squares, and I also have the coordinates of the line segments.

I have written the code below, which contains all the data and the nonlinear least squares process, but for some reason it does not seem to be working correctly. I know this because when I check the variable 'residual' at the end of the nonlinear least squares process, the residuals are large, when they should be very small.

Any help is very much welcome.
all the best,
aidy

Here is my code :

%--------------- Objective Function shown in the uploaded image ------------
function S = objective_TAM(vars,x1,y1,x2,y2,n)
% Residual for segment g: twice the signed area of the triangle formed by
% the point vars = [xVP yVP] and the segment endpoints (x1,y1)-(x2,y2).
S = zeros(n,1);   % preallocate
for g = 1:n
    S(g,1) = vars(1)*(y1(g)-y2(g)) + vars(2)*(x2(g)-x1(g)) ...
           + (x1(g)*y2(g) - y1(g)*x2(g));
end
end
%----------------------------------------------------------------------------------------------
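For reference, each entry S(g) above is twice the signed area of the triangle formed by VP = (vars(1), vars(2)) and the segment endpoints: for points P, A, B, 2*Area = xP*(yA - yB) + xA*(yB - yP) + xB*(yP - yA), which expands to exactly the expression in the loop.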


%---------- Performing the non linear minimization---------------------------------

clc; clear all; close all;

% initial approximation of the vanishing point [xVP yVP]
vp = [ 1304.19245005493 46019.2746980242 ]

% each row holds the endpoints [x1 y1 x2 y2] of one line segment
all_lines = [ 16.4596550655344 368.991790463273 19.0546306487513 438.008209536727;
              32.4068956982873 364.979622511419 36.6385588471673 452.020377488581;
              44.5636870941578 353.00450089814  47.7696462391755 423.99549910186 ]

n = size(all_lines,1)

x1 = all_lines(:,1)
y1 = all_lines(:,2)
x2 = all_lines(:,3)
y2 = all_lines(:,4)

init = vp   % initial guess for the optimizer

testing = @(xx) objective_TAM(xx,x1,y1,x2,y2,n)

options = optimset('Algorithm','levenberg-marquardt','Display','off');

[my_results,resnorm,residual,exitflag,output,lambda,jacobian] = ...
    lsqnonlin(testing,init,[],[],options)
%------------------------------------------------------------------------

Subject: minimization -least squares problem

From: Torsten

Date: 7 Jul, 2011 10:42:08

Message: 2 of 21

On 7 Jul., 11:51, "Aidy " <aidenj...@gmail.com> wrote:

This is a problem which is linear in the parameters to be fitted. Use the backslash operator instead of lsqnonlin.

What you want to solve in the least-squares sense is the system of equations (linear in the parameters)

vars(1)*(y1(g)-y2(g)) + vars(2)*(x2(g)-x1(g)) = -(x1(g)*y2(g) - y1(g)*x2(g))

for g = 1,...,n. Thus build an (n x 2) matrix A whose row g is [y1(g)-y2(g), x2(g)-x1(g)] and an (n x 1) vector b whose element g is -(x1(g)*y2(g) - y1(g)*x2(g)). Then

vars = A\b.
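A minimal vectorized sketch of this recipe, assuming x1, y1, x2, y2 are the (n x 1) endpoint columns from the script above (variable names are illustrative):

%---------------------------------------------------------------
A = [y1 - y2, x2 - x1];      % row g: [y1(g)-y2(g)  x2(g)-x1(g)]
b = -(x1.*y2 - y1.*x2);      % element g: -(x1(g)*y2(g)-y1(g)*x2(g))
vars = A \ b;                % least-squares estimate of [xVP; yVP]
residuals = A*vars - b;      % small only if the lines nearly meet
%---------------------------------------------------------------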

Best wishes
Torsten.

Subject: minimization -least squares problem

From: Aidy

Date: 7 Jul, 2011 11:10:11

Message: 3 of 21

Hi Torsten,

I did as you suggested. However, I am still getting large residuals. Please see the code below. Again asking for your help.

thanks a lot

%------------------------------------------
clc; clear all;

all_lines = [ 16.4596550655344 368.991790463273 19.0546306487513 438.008209536727;
              32.4068956982873 364.979622511419 36.6385588471673 452.020377488581;
              44.5636870941578 353.00450089814  47.7696462391755 423.99549910186 ]

n = size(all_lines,1)

x1 = all_lines(:,1)
y1 = all_lines(:,2)
x2 = all_lines(:,3)
y2 = all_lines(:,4)

A = zeros(n,2);
b = zeros(n,1);
for g = 1:n
    A(g,1:2) = [y1(g)-y2(g), x2(g)-x1(g)];
    b(g,1)   = -(x1(g)*y2(g) - y1(g)*x2(g));
end

vars = A\b

%----- getting residuals -----

vars_method2 = inv(A'*A) * A' * b   % normal-equations form; same as A\b

residuals = A*vars_method2 - b
%-------------------------------

Subject: minimization -least squares problem

From: Torsten

Date: 7 Jul, 2011 11:23:15

Message: 4 of 21

On 7 Jul., 13:10, "Aidy " <aidenj...@gmail.com> wrote:

As a test, use only two (non-parallel) lines instead of three. Then the residuals should be 0 and vars should be the point where the two lines meet.

Best wishes
Torsten.

Subject: minimization -least squares problem

From: Aidy

Date: 7 Jul, 2011 11:39:10

Message: 5 of 21

hi torsten,

As you advised, the observations I used for the lines were:

all_lines = [ 16.4596550655344 368.991790463273 19.0546306487513 438.008209536727;
              32.4068956982873 364.979622511419 36.6385588471673 452.020377488581 ]

However, I would like to include all 3 lines' observations in the least squares process and still get small residuals.

I do not understand how this works; I always assumed that the more redundant observations there are, the better the solution, and that I would get small residuals.

If you can shed some light, please do.

many thanks
aiden

Subject: minimization -least squares problem

From: Torsten

Date: 7 Jul, 2011 12:08:01

Message: 6 of 21

On 7 Jul., 13:39, "Aidy " <aidenj...@gmail.com> wrote:

The more observations, the larger the sum of the squared residuals will be. Say you have n observations and an optimal fit for these observations. Now add a new observation. If this new observation already lies on the fitted line for the first n observations, the sum of squared residuals will remain the same. If it does not, you will have to change the fitting parameters, which means that the sum of squared residuals of the first n observations will become larger.

Test your fitting routine with two parallel line segments and see whether the solution obtained looks reasonable.
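A quick numerical check of this point, assuming A and b are built from the thread's three segments as earlier (the comments on magnitudes are illustrative, not computed results):

%---------------------------------------------------------------
A = [y1 - y2, x2 - x1];
b = -(x1.*y2 - y1.*x2);
vars2 = A(1:2,:) \ b(1:2);      % two non-parallel lines: exact intersection
vars3 = A \ b;                  % three lines: least-squares compromise
norm(A(1:2,:)*vars2 - b(1:2))   % essentially zero (machine precision)
norm(A*vars3 - b)               % generally strictly larger
%---------------------------------------------------------------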

Best wishes
Torsten.

Subject: minimization -least squares problem

From: Torsten

Date: 7 Jul, 2011 12:56:27

Message: 7 of 21

On 7 Jul., 14:08, Torsten <Torsten.Hen...@umsicht.fraunhofer.de> wrote:

All the suggestions for two line segments are just for checking if the solutions obtained are reasonable, not to restrict your fitting to only two line segments.

Best wishes
Torsten.

Subject: Weights for minimization -least squares problem

From: Aidy

Date: 7 Jul, 2011 13:37:10

Message: 8 of 21

Hi Torsten,

The nonlinear method I had the code for in my very first post and the linear method you outlined produce the same results.

I am interested in using the nonlinear method so that I can introduce weights for the line segments. I was hoping to set the weights to "1" in the first iteration.

Then, in the following iteration, new weights are calculated for each observation (i.e. each line segment) based on the residuals obtained in the previous iteration, and so on, until the iterations have converged.

The weight function I was hoping to include in the nonlinear least squares is:

Weight = e^(-residuals.^2)

Can you suggest how to include this in the code from my first post?

many thanks again
-Aiden

Subject: Weights for minimization -least squares problem

From: Torsten

Date: 7 Jul, 2011 14:50:59

Message: 9 of 21

On 7 Jul., 15:37, "Aidy " <aidenj...@gmail.com> wrote:

You mean

for g = 1:n
    Sg = vars(1)*(y1(g)-y2(g)) + vars(2)*(x2(g)-x1(g)) + x1(g)*y2(g) - y1(g)*x2(g);
    S(g,1) = Sg*exp(-0.5*Sg^2);   % sqrt of the weight exp(-Sg^2) times the residual
end

instead of

for g = 1:n
    S(g,1) = vars(1)*(y1(g)-y2(g)) + vars(2)*(x2(g)-x1(g)) + x1(g)*y2(g) - y1(g)*x2(g);
end

??

And what do you aim at with this setting? The solver will try to make Sg = Infinity in order to make the sum of squares equal to 0. Is that what you want?

Best wishes
Torsten

Subject: Weights for minimization -least squares problem

From: Aidy

Date: 7 Jul, 2011 15:41:10

Message: 10 of 21

Hi torsten,

what does it physically mean? Isn't this an acceptable approach for weighting in least squares? Your comments are appreciated.

thanks
aiden

Subject: Weights for minimization -least squares problem

From: Torsten

Date: 8 Jul, 2011 08:58:23

Message: 11 of 21

On 7 Jul., 17:41, "Aidy " <aidenj...@gmail.com> wrote:

The usual purpose of _constant_ weights (not changing with the iteration) is that different measurements may have different measurement errors. For measurements with probably high measurement errors, the weight is chosen low; for measurements with low measurement errors, the weight is chosen high. That's reasonable if one knows about the errors of the individual measurements.

But I don't understand the relation to the problem you stated: Why should the contribution of some triangles to the sum of the squared areas get a higher weight than that of other triangles? Why should the weighting of the individual triangles even change within the iteration process?

But these are questions that _you_ will have to answer, because you are the one who best knows the final aim of your optimization.

Best wishes
Torsten.

Subject: Weights for minimization -least squares problem

From: Aidy

Date: 8 Jul, 2011 11:49:09

Message: 12 of 21

Hi torsten,


I have been trying to use weights to see how the residuals behave. I got the idea from the following paper, which describes the triangle area minimization model I am using as well as the weighting approach; please look at equations (10) and (11) in the paper:

http://www.isprs.org/proceedings/XXXVI/5-W17/pdf/6.pdf

I think this is effectively what I have applied, unless I missed something along the way.

My main concern with the residuals is that I am also computing the standard deviation of the parameters, i.e. of what we are calling 'vars'. The size of the residuals propagates into the size of the standard deviation of 'vars', and I am getting some large standard deviation numbers for the unknown 'vars' parameters when I use the 3 lines as observations for input to the optimization.

Using the linear least squares approach, here is how I compute the standard deviations of 'vars':


%---------------------------------------------------------------------------------------------------------
clc; clear all;

all_lines = [ 16.4596550655344 368.991790463273 19.0546306487513 438.008209536727;
              32.4068956982873 364.979622511419 36.6385588471673 452.020377488581;
              44.5636870941578 353.00450089814  47.7696462391755 423.99549910186 ]

n = size(all_lines,1)

x1 = all_lines(:,1)
y1 = all_lines(:,2)
x2 = all_lines(:,3)
y2 = all_lines(:,4)

A = zeros(n,2);
b = zeros(n,1);
for g = 1:n
    A(g,1:2) = [y1(g)-y2(g), x2(g)-x1(g)];
    b(g,1)   = -(x1(g)*y2(g) - y1(g)*x2(g));
end

vars = A\b

%----- getting residuals -----

vars_method2 = inv(A'*A) * A' * b

residuals = A*vars_method2 - b
%-------------------------------

vv = residuals;
J  = A;

rows = size(A,1);
cols = 2;

% A posteriori variance factor
Sigma_o = sqrt((vv'*vv) / (rows-cols))

% Unscaled covariance matrix (inverse of the normal equation matrix)
Q_xx = inv(J'*J)

% Precision measures, i.e. the standard deviation of each parameter
diagonals_of_Q_xx = diag(Q_xx)
Std_deviation_of_unknowns = Sigma_o .* sqrt(diagonals_of_Q_xx)

%--------------------------------------------------------------------------------------


thanks torsten,
aiden

Subject: Weights for minimization -least squares problem

From: Aidy

Date: 9 Jul, 2011 10:18:08

Message: 13 of 21

Any help welcome. I have all the data and explanation in the posts above. If anyone can help me out, please do. I've been stuck on this at my job for several days now.

cheers,
aiden

Subject: Weights for minimization -least squares problem

From: Torsten

Date: 9 Jul, 2011 10:30:45

Message: 14 of 21

On 9 Jul., 12:18, "Aidy " <aidenj...@gmail.com> wrote:

I think the iteration in the residuals according to the article is to be done step by step:

Step 1: Set the weights equal to 1 and calculate the optimal solution as above.

Step 2: From the residual r_i of this solution, calculate the weight w_i for the i'th equation according to w_i = exp(-r_i^2). With these weights, recalculate an optimal solution.

Then repeat step 2 until the optimal solutions do not change any more.

Although I don't see why this method does not converge to all residuals being +oo, this seems to be the way suggested by the article.
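A minimal sketch of this step-by-step reweighting with a simple stopping test (A0, b0 are the unweighted matrix and vector built as earlier in the thread; the iteration cap and tolerance are arbitrary choices for illustration):

%---------------------------------------------------------------
A0 = [y1 - y2, x2 - x1];             % unweighted system
b0 = -(x1.*y2 - y1.*x2);
w  = ones(n,1);                      % Step 1: all weights equal to 1
vars_old = [Inf; Inf];
for iter = 1:50
    W = diag(sqrt(w));               % scale each equation by sqrt(w(g))
    vars = (W*A0) \ (W*b0);          % weighted least-squares solution
    if norm(vars - vars_old) < 1e-10, break; end
    vars_old = vars;
    r = A0*vars - b0;                % residuals of the current solution
    w = exp(-r.^2);                  % Step 2: new weights for the next pass
end
%---------------------------------------------------------------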

Best wishes
Torsten.

Subject: Weights for minimization -least squares problem

From: Aidy

Date: 9 Jul, 2011 11:45:26

Message: 15 of 21


torsten,


are you suggesting that the method in the article has already been tried, that is:

for g = 1:n
    Sg = vars(1)*(y1(g)-y2(g)) + vars(2)*(x2(g)-x1(g)) + x1(g)*y2(g) - y1(g)*x2(g);
    S(g,1) = Sg*exp(-0.5*Sg^2);
end

and still not getting zero residuals?

If so, yes, I used the code above and still got no zero residuals, which puzzles me.

-thanks

Subject: Weights for minimization -least squares problem

From: Aidy

Date: 9 Jul, 2011 11:54:10

Message: 16 of 21

HI Torsten,

I made an error in the previous post. When you run the code as suggested by the article, which I have posted below, the standard deviations of the parameters, i.e. of the 'vars', tend to NaN. Instead they should be real numbers that are ideally small.



%------ optimization function suggested by article -------
function S = objective_TAM_weights(vars,x1,y1,x2,y2,n)

S = zeros(n,1);
for g = 1:n
    Sg = vars(1)*(y1(g)-y2(g)) + vars(2)*(x2(g)-x1(g)) + (x1(g)*y2(g) - y1(g)*x2(g));
    S(g,1) = Sg*exp(-abs(Sg).^2);   % danish estimator weights
end

end
%--------------------------------------------------------




%------------ Run optimization with data ------

clc; clear all; close all;

vp = [ 1304.19245005493 46019.2746980242 ]

all_lines = [ 16.4596550655344 368.991790463273 19.0546306487513 438.008209536727;
              32.4068956982873 364.979622511419 36.6385588471673 452.020377488581;
              44.5636870941578 353.00450089814  47.7696462391755 423.99549910186 ]

n = size(all_lines,1)

x1 = all_lines(:,1)
y1 = all_lines(:,2)
x2 = all_lines(:,3)
y2 = all_lines(:,4)

init = [vp(:,1), vp(:,2)]

% testing = @(xx) objective_TAM(xx,x1,y1,x2,y2,n)
testing = @(xx) objective_TAM_weights(xx,x1,y1,x2,y2,n)

% options = optimset('Algorithm','levenberg-marquardt','Display','off');
options.Algorithm = {'levenberg-marquardt',.005};

[my_results,resnorm,residual,exitflag,output,lambda,jacobian] = ...
    lsqnonlin(testing,init,[],[],options);

vv = residual;
J  = jacobian;

rows = size(jacobian,1);
cols = size(my_results,2);

% A posteriori variance factor
Sigma_o = sqrt((vv'*vv) / (rows-cols))

% Unscaled covariance matrix (inverse of the normal equation matrix)
Q_xx = inv(J'*J)

% Precision measures, i.e. the standard deviation of each parameter
diagonals_of_Q_xx = diag(Q_xx)
stddev_parameters = Sigma_o .* sqrt(diagonals_of_Q_xx)
 

Subject: Weights for minimization -least squares problem

From: Torsten

Date: 9 Jul, 2011 11:56:11

Message: 17 of 21

On 9 Jul., 13:45, "Aidy " <aidenj...@gmail.com> wrote:

As long as the line segments you prescribe do not meet in a single point, you will never get zero residuals.

But you should _not_ adapt the residuals within one optimization, as is done with the above loop. You should do a few optimizations one after the other, where the weights of optimization n are calculated from the solution of optimization (n-1). I think this is what the article suggests to do.

Best wishes
Torsten.

Subject: Weights for minimization -least squares problem

From: Aidy

Date: 9 Jul, 2011 12:08:09

Message: 18 of 21



torsten,

How can I modify the code I posted so that I can do what you suggested below?

> You should do a few optimizations one after the other, where the
> weights of optimization n are calculated from the solution of
> optimization (n-1).

Subject: Weights for minimization -least squares problem

From: Torsten

Date: 11 Jul, 2011 08:26:29

Message: 19 of 21

On 9 Jul., 14:08, "Aidy " <aidenj...@gmail.com> wrote:


I did not include a stopping criterion, but I meant something like

clc; clear all;

all_lines = [ 16.4596550655344 368.991790463273 19.0546306487513 438.008209536727;
              32.4068956982873 364.979622511419 36.6385588471673 452.020377488581;
              44.5636870941578 353.00450089814  47.7696462391755 423.99549910186 ]

n = size(all_lines,1)

x1 = all_lines(:,1)
y1 = all_lines(:,2)
x2 = all_lines(:,3)
y2 = all_lines(:,4)

weights(1:n,1) = 1.0;

itermax = 10;

for i = 1:itermax

    for g = 1:n
        A(g,1:2) = weights(g,1)*[y1(g)-y2(g), x2(g)-x1(g)];
        b(g,1)   = weights(g,1)*(-(x1(g)*y2(g)-y1(g)*x2(g)));
    end

    vars = A\b

    residuals = A*vars - b;
    for g = 1:n
        weights(g,1) = sqrt(exp(-residuals(g)^2));
    end

end
%-------------------------------


Best wishes
Torsten.

Subject: Weights for minimization -least squares problem

From: Aidy

Date: 11 Jul, 2011 11:14:09

Message: 20 of 21

hi torsten

I am not sure whether my code below is correct.

Isn't the weight matrix supposed to be a diagonal, sparse kind of matrix?

thanks again,
aiden

%---------------------------------------------------------------

clc; clear all;

all_lines = [ 16.4596550655344 368.991790463273 19.0546306487513 438.008209536727;
              32.4068956982873 364.979622511419 36.6385588471673 452.020377488581;
              44.5636870941578 353.00450089814  47.7696462391755 423.99549910186 ]

n = size(all_lines,1)

x1 = all_lines(:,1)
y1 = all_lines(:,2)
x2 = all_lines(:,3)
y2 = all_lines(:,4)

weights(1:n,1) = 1.0;

itermax = 10;

for i = 1:itermax

    for g = 1:n
        A(g,1:2) = weights(g,1)*[y1(g)-y2(g), x2(g)-x1(g)];
        b(g,1)   = weights(g,1)*(-(x1(g)*y2(g)-y1(g)*x2(g)));
    end

    vars = A\b

    residuals = A*vars - b

    weights = sqrt(exp(-residuals.^2))   % vectorized; no loop needed

end
%-------------------------------

Subject: Weights for minimization -least squares problem

From: Torsten

Date: 11 Jul, 2011 11:38:08

Message: 21 of 21

On 11 Jul., 13:14, "Aidy " <aidenj...@gmail.com> wrote:


It's just equation (10) from your article. In each iteration step, you have to solve (in the least-squares sense) the linear system of equations

sqrt(w(g)) * ( vars(1)*(y1(g)-y2(g)) + vars(2)*(x2(g)-x1(g)) ) = -sqrt(w(g)) * ( x1(g)*y2(g) - y1(g)*x2(g) )

for g = 1,...,n.
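On the diagonal-matrix question: scaling each equation by sqrt(w(g)) is equivalent to multiplying the unweighted system by the diagonal matrix diag(sqrt(w)). A minimal sketch, assuming A0 and b0 are the unweighted matrix and vector and w holds the current weights (illustrative names):

%---------------------------------------------------------------
A0 = [y1 - y2, x2 - x1];       % unweighted system
b0 = -(x1.*y2 - y1.*x2);
W  = diag(sqrt(w));            % diagonal weight matrix, w = exp(-residuals.^2)
vars = (W*A0) \ (W*b0);        % identical to weighting row by row
%---------------------------------------------------------------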

Best wishes
Torsten.
