Non-Linear Regression - Too many Input Arguments

I am trying to perform a NL Regression via the following code:
%Non-Linear Regression
clear, clc, close all;
I = [50 80 130 200 250 350 450 550 700];
P = [99 177 202 248 229 219 173 142 72];
func = @(x) P - x(1)*(I/x(2))*exp((-I/x(2))+1);
[x,fval] = fminsearch(func,[1,1],[],I,P);
disp(x)
disp(fval)
I keep getting the 'Too many input arguments' error. I have tried all kinds of formatting to no avail. Help needed.
Thanks.
  1 Comment
dpb on 1 Mar 2014
Several problems here beginning with the functional definition --
FUN in fminsearch is a handle to a function that must return a single scalar value. As written, your function won't even run, because I and P are vectors and the * operator produces a dimension mismatch:
>> func = @(x) P - x(1)*(I/x(2))*exp((-I/x(2))+1);
>> func([1 1])
Error using *
Inner matrix dimensions must agree.
Error in @(x)P-x(1)*(I/x(2))*exp((-I/x(2))+1)
If you correct that by using the .* operator then you'll get a vector output the length of I and P which violates the fminsearch rules.
Once you iron that out, there is still the actual error you're hitting now: you can't pass I and P to fminsearch as extra arguments in the call.
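To make both fixes concrete, here is a sketch of a scalar-valued objective that captures I and P from the workspace instead of passing them to fminsearch (the name func2 is mine, not from the original post):

```matlab
% The data must exist in the workspace when the anonymous function is
% created; the handle captures them, so fminsearch never needs to see
% I or P as arguments.
I = [50 80 130 200 250 350 450 550 700];
P = [99 177 202 248 229 219 173 142 72];
% Element-wise operators (.*, ./) plus a sum() wrapper give the single
% scalar value that fminsearch requires.
func2 = @(x) sum((P - x(1).*(I./x(2)).*exp((-I./x(2))+1)).^2);
func2([1 1])   % evaluates to one number, not a length-9 vector
```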


Accepted Answer

Star Strider on 1 Mar 2014
Edited: Star Strider on 1 Mar 2014
The 'Too many input arguments' error occurs because fminsearch is an optimisation routine, not a curve-fitting routine: it accepts only a function handle, a starting guess, and an options structure. It minimises the function you give it, so it does not need the data you want to fit. If those data are in the workspace when you create the anonymous function, the function captures them, and that is all fminsearch ever evaluates.
There are a few other problems with your code:
  • In your curve-fitting application, you want to minimise the sum-of-squared differences between your function and your P data. The fminsearch function minimises the function you give it. Your function has to calculate the sum-of-squared differences.
  • You need to vectorise your code in this instance. Note that I replaced ‘*’ with ‘.*’ and ‘/’ with ‘./’. These perform element-by-element operations, which is what you actually want here; otherwise MATLAB interprets them as matrix operations and throws a dimension error. Exponentiation ( ‘^’ ) likewise becomes ( ‘.^’ ).
This version works, and provides an acceptably good fit:
% Non-Linear Regression
% clear, clc, close all;
I = [50 80 130 200 250 350 450 550 700];
P = [99 177 202 248 229 219 173 142 72];
% func = @(x) P - x(1).*(I./x(2)).*exp((-I./x(2))+1);
func2 = @(x) sum((P - x(1).*(I./x(2)).*exp((-I./x(2))+1)).^2);
[x,fval] = fminsearch(func2,[100,100])
Ivct = linspace(min(I),max(I));
fitfcn = @(x,I) x(1).*(I./x(2)).*exp((-I./x(2))+1);
figure(1)
plot(Ivct, fitfcn(x,Ivct), '-r')
hold on
plot(I, P, '.b')
hold off
grid
  2 Comments
Jeffrey Denomme on 1 Mar 2014
So you took the sum of the squares as the function passed to fminsearch to evaluate? Then used the evaluated range of I to plot the function?
Star Strider on 1 Mar 2014
Edited: Star Strider on 1 Mar 2014
Yes. There are other cost functions you can use, but sum-of-squares is probably the easiest and most widely used. My func2 calculates the sum-of-squares difference between your function and your P variable. Then fminsearch minimises it.
I then used linspace to generate a vector of 100 I values to approximate the fitted function more smoothly. (My fitfcn simply makes the code for the plot a bit easier to write.)
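To illustrate the point about other cost functions, the same model can be fitted by minimising the sum of absolute residuals instead of the sum of squares. The names model, sse, and sae below are mine, not part of the original answer:

```matlab
I = [50 80 130 200 250 350 450 550 700];
P = [99 177 202 248 229 219 173 142 72];
model = @(x,I) x(1).*(I./x(2)).*exp((-I./x(2))+1);
% Least-squares cost (what the accepted answer minimises)
sse = @(x) sum((P - model(x,I)).^2);
% Least-absolute-deviations cost, less sensitive to outliers
sae = @(x) sum(abs(P - model(x,I)));
x_sse = fminsearch(sse, [100 100]);
x_sae = fminsearch(sae, [100 100]);
```

The two fits will generally differ slightly; which cost function is appropriate depends on how much weight you want outlying points to carry.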

