
Speed up (vectorize?) nlinfit

Asked by Ioannis on 13 Sep 2013

Hello all.

I have been using nlinfit to extract two parameters (p1 and p2) from a simple exponential equation on a pixel basis for medical images. The dependent variable values are stored in Y (e.g. 53248x5 for an image of 208x256 pixels and 5 variable values). The 5 independent variable values are stored in X (e.g. 1x5 for 5 values). I am doing the fitting using the following bit of code:

equation = @(p,X) p(1)*exp(-X*p(2));
options = statset('FunValCheck','off');
parfor i = 1:d1*d2
    if sum(Y(i,:)) > 60
        B(i,:) = nlinfit(X, Y(i,:), equation, [Y(i,1) 0.001], options);
    end
end

The threshold of 60 is to make sure that fitting only takes place in pixels with useful signal (above the noise level). The issue with the code above is that it takes almost 25 seconds to complete the pixel-based fitting for an image of 208x256 pixels, and this is too long (even with parfor and 4 workers). I have tried various ways to vectorize the code above and speed things up, but with no success.

I would really appreciate any ideas.

Regards, Ioannis.

2 Comments

Matt J on 13 Sep 2013

I assume that you've pre-allocated B? Also, why not pre-eliminate the rows of Y that you don't want to fit (X is shared by all pixels, so only Y needs trimming):

 idx = sum(Y,2) < 60;   % pixels below the noise threshold
 Y(idx,:) = [];

Ioannis on 18 Sep 2013

Hello Matt, and thanks for your comments. Yes, I have pre-allocated B, and by pre-eliminating the Ys (the Xs are always the same) I saved 4-5 seconds, which is not the acceleration factor I was hoping for... (Also, regridding the data to get a meaningful image gets a bit tricky.)

Ioannis

Products

No products are associated with this question.

3 Answers

Answer by Matt J on 13 Sep 2013
Edited by Matt J on 13 Sep 2013

If the fit can be done with 5 data points only, I assume the data errors are very small, in which case you might get a pretty accurate fit just by doing a linear fit to the log of your model:

 logY = log(p(1)) - X*p(2)

This can be solved for a=log(p(1)) and b=-p(2) using any of MATLAB's linear solvers, and of course it can be further vectorized across all Y(i,:):

N = size(Y,1);                        % number of pixels
A = [ones(numel(X),1), X(:)];         % 5x2 design matrix (same for every pixel)
Xcell = repmat({sparse(A)}, 1, N);    % one sparse block per pixel
Y = Y.';                              % 5xN
ab = blkdiag(Xcell{:}) \ log(Y(:));   % stacked solution [a1;b1;a2;b2;...]
p1 = exp(ab(1:2:end));
p2 = -ab(2:2:end);

If nothing else, this might give you a better beta0 than [Y(i,1) 0.001] and that, of course, could speed things up.
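A NumPy sketch of the same log-linear idea (an illustrative translation, not the MATLAB above; since X is shared by every pixel, a single least-squares solve handles all rows of Y at once, with no block-diagonal matrix needed):

```python
import numpy as np

def loglinear_fit(X, Y):
    """Fit Y[i,:] ~= p1 * exp(-X * p2) per row by linear least squares on log(Y).

    X : (5,) independent-variable values (shared by all pixels)
    Y : (N, 5) dependent-variable values, one row per pixel
    Returns p1, p2 arrays of length N.
    """
    # Design matrix for log Y = a + b*X, with a = log(p1), b = -p2
    A = np.column_stack([np.ones_like(X), X])
    # One lstsq call solves all pixels at once (log(Y).T has one column per pixel)
    ab, *_ = np.linalg.lstsq(A, np.log(Y).T, rcond=None)
    a, b = ab
    return np.exp(a), -b
```

The resulting p1, p2 can then serve as per-pixel starting values for the nonlinear fit.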

3 Comments

Ioannis on 18 Sep 2013

Hello Matt and many thanks for the reply! Fitting is not always going to involve 5 points, so the fit is not always going to be so tight...

Also, ideally, I would love to have a way to speed nlinfit up for a more general model that cannot be solved by linearising (e.g. a bi-exponential model)

Ioannis.

Matt J on 18 Sep 2013

If neighboring image pixels are expected to have similar parameter values, you can speed things up by initializing the fit of a given pixel with the fit from the immediately preceding pixel in the loop.
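A hedged sketch of this warm-start idea in Python, with SciPy's curve_fit standing in for nlinfit (the model, threshold, and default initial guess mirror the question's setup; names are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, p1, p2):
    # Mono-exponential model from the question: p1 * exp(-x * p2)
    return p1 * np.exp(-x * p2)

def fit_warmstart(X, Y, threshold=60.0):
    """Fit each row of Y, seeding each fit with the previous pixel's solution."""
    B = np.zeros((Y.shape[0], 2))
    beta = None
    for i, y in enumerate(Y):
        if y.sum() <= threshold:        # skip pixels below the noise threshold
            continue
        # Warm start: reuse the preceding pixel's fit when available
        p0 = beta if beta is not None else [y[0], 0.001]
        beta, _ = curve_fit(model, X, y, p0=p0)
        B[i] = beta
    return B
```

When neighboring pixels really do have similar parameters, the warm start means each fit begins close to its solution and converges in fewer iterations.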

Lennart on 19 Sep 2013

Matt, that's a very helpful suggestion, thanks.

Answer by Lennart on 19 Sep 2013

Hi Ioannis, I have found this FEX file. It seems to do what you want. What was your final implementation? I am trying to solve exactly the same problem. My loops take hours on my data sets, so I am now trying the FEX file.

1 Comment

Matt J on 19 Sep 2013

I don't have NLINFIT and can't test my theories, but the FEX file probably was not intended for combining such a large number (~1e4) of model functions as in Ioannis's case. In particular, it does not vectorize the function evaluations in an optimal way. It just takes the N original anonymous functions specifying the models and nests them inside a larger anonymous function.

If you are going to try wrapping the N problems together into a single problem, it would probably be better to do so manually, by writing your own N-dimensional modelfun. It would also be a good idea to use the Jacobian option and supply the Jacobian as a sparse matrix. That way, the solver will save time on derivative calculations and also will have information about the separability of the problem into smaller problems.

However, as I mentioned in my last comment to Ioannis, if the solutions from neighboring data sets are expected to be similar, a for-loop can take advantage of that. Conversely, you lose that ability when you rewrite the problem as one big N-dimensional one. The trade-off and which approach will win speed-wise is unclear, though.
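In SciPy terms, the same separability can be communicated through least_squares and its jac_sparsity option (a sketch under the assumption of the mono-exponential model; least_squares is a stand-in for nlinfit here, and the solver's finite differencing then exploits the block-diagonal structure):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.sparse import block_diag

def fit_all_pixels(X, Y):
    """Fit p1*exp(-X*p2) to every row of Y as one big separable problem."""
    N = Y.shape[0]

    def residuals(beta):
        p = beta.reshape(N, 2)
        pred = p[:, :1] * np.exp(-np.outer(p[:, 1], X))
        return (pred - Y).ravel()

    # Each pixel's 5 residuals depend only on its own 2 parameters,
    # so the Jacobian sparsity pattern is block diagonal.
    sparsity = block_diag([np.ones((len(X), 2))] * N)
    beta0 = np.column_stack([Y[:, 0], np.full(N, 0.001)]).ravel()
    sol = least_squares(residuals, beta0, jac_sparsity=sparsity)
    return sol.x.reshape(N, 2)
```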

Answer by Matt J on 19 Sep 2013
Edited by Matt J on 19 Sep 2013

I have tried various ways to vectorize the code above and speed things up, but with no success.

It would help to know what you tried. Obviously, if N is the number of pixels, you could write your modelfun in terms of your entire 5xN array X and a length-2N parameter vector beta:

function Y = totalmodel(beta,X)
%beta - length-2N parameter vector
%  X  - 5xN data set
   beta = reshape(beta,2,[]);
   p1 = beta(1,:);                  % amplitudes
   p2 = beta(2,:);                  % decay rates
   Y = exp(bsxfun(@times,X,-p2));   % exp(-X*p2), column by column
   Y = bsxfun(@times,Y,p1);         % p1.*exp(-X*p2)
   Y = Y(:);
end

As I mentioned to Lennart, it would probably also be of benefit to use nlinfit's Jacobian option with this. The Jacobian should turn out to be a very sparse block diagonal matrix. By computing it yourself, you would communicate that sparsity to nlinfit, as well as the separable structure of the problem.
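The analytic Jacobian of this model is indeed sparse block diagonal: each pixel's 5 residuals depend only on its own (p1, p2). A hedged NumPy sketch of assembling it (function name is illustrative; the partial derivatives follow directly from Y = p1*exp(-X*p2)):

```python
import numpy as np
from scipy.sparse import block_diag

def total_jacobian(beta, X):
    """Sparse block-diagonal Jacobian of Y = p1*exp(-X*p2), stacked per pixel.

    beta : length-2N vector, (p1, p2) interleaved per pixel
    X    : (5,) shared independent-variable values
    """
    p = beta.reshape(-1, 2)
    blocks = []
    for p1, p2 in p:
        e = np.exp(-X * p2)
        # columns: dY/dp1 = exp(-X*p2),  dY/dp2 = -p1*X*exp(-X*p2)
        blocks.append(np.column_stack([e, -p1 * X * e]))
    return block_diag(blocks)
```

Supplying this instead of letting the solver finite-difference a dense 5N-by-2N matrix is where most of the derivative-time savings would come from.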

0 Comments
