Solve a system of equations
So I have written a system of equations and used ode45 to solve it, and I was wondering if there is a more efficient way to do it. I am creating an ODE model and will later use certain methods to find the unknown parameters; for now I am just guessing random values.
I have written my model as
function dydt = ODE_Model(t,y,p_1,p_2,p_3,d_1,d_2,d_3,a,b,Tau)
% Three-state model with a sinusoidal forcing term (365-day period, phase Tau)
dydt = zeros(3,1);
dydt(1) = p_1*(a*cos(t*2*pi/365-Tau)+b)-d_1*y(1);   % forced production minus first-order decay
dydt(2) = p_2*(a*cos(t*2*pi/365-Tau)+b)-d_2*y(2);
dydt(3) = p_3*y(1)-d_3*y(2)*y(3);                   % driven by y(1), removed by y(2)*y(3)
end
and then solved it using
tspan = [0 5];
y0 = [4 5 6];
p_1 = 1;
p_2 = 1;
p_3 = 1;
d_1 = 1;
d_2 = 1;
d_3 = 1;
a=1;
b=1;
Tau=1;
SolExp = ode45(@(t,y) ODE_Model(t,y,p_1,p_2,p_3,d_1,d_2,d_3,a,b,Tau), tspan, y0);
TimePoints = 0:0.1:2;
y = deval(SolExp, TimePoints);
clf
plot(TimePoints, y)
I wanted to do P and D as a combined vector, so I would have P(1) .. P(6) and then I could just write something like P = [1 1 1 1 1 1], but from reading about ode45 it seems I can only use one variable? I am not sure. Is there a neater way to do what I want to do, or is this as neat as it gets?
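What I imagine is something like this (an untested sketch; the ordering within P is my own arbitrary choice). Since the anonymous function passed to ode45 can capture any extra variables, a single parameter vector works:

```matlab
% Script sketch: all nine parameters packed into one vector P.
P = ones(1,9);                 % [p_1 p_2 p_3 d_1 d_2 d_3 a b Tau]
SolExp = ode45(@(t,y) ODE_Model(t,y,P), [0 5], [4 5 6]);

function dydt = ODE_Model(t, y, P)
% Same model as above, indexing into P instead of ten separate arguments
forcing = P(7)*cos(t*2*pi/365 - P(9)) + P(8);   % a*cos(...) + b
dydt = zeros(3,1);
dydt(1) = P(1)*forcing - P(4)*y(1);
dydt(2) = P(2)*forcing - P(5)*y(2);
dydt(3) = P(3)*y(1) - P(6)*y(2)*y(3);
end
```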
Thanks
Accepted Answer
More Answers (1)
Jacob Barrett-Newton
on 5 Mar 2018
Star Strider
on 5 Mar 2018
This is a guess, since I cannot run your code. The NaN values may be coming from the interp1 calls.
Try this:
f = interp1(ft, f, t, 'linear', 'extrap');
g = interp1(gt, g, t, 'linear', 'extrap');
Jacob Barrett-Newton
on 5 Mar 2018 (edited)
Star Strider
on 5 Mar 2018
Thank you!
The interp1 function by default returns NaN for values interpolated beyond the range of the first argument. (The default interpolation method is 'linear'.) In order for interp1 to extrapolate, it is necessary to supply a method (here 'linear', since I assume that is what you want), and to tell it specifically to extrapolate by adding 'extrap' as the final argument.
Note that the other interpolation functions, such as interp2, deal with extrapolation differently, so investigate those options if you use the other functions.
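A small demonstration of the difference, on made-up sample data:

```matlab
% interp1 returns NaN for query points outside the sample range unless
% it is told explicitly to extrapolate.
ft = 0:10;
f  = ft.^2;
interp1(ft, f, 12)                       % NaN: 12 lies outside [0, 10]
interp1(ft, f, 12, 'linear', 'extrap')   % 138: line through (9,81) and (10,100)
```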
Jacob Barrett-Newton
on 9 Mar 2018
Star Strider
on 9 Mar 2018
MCMC?
I’m not familiar with this acronym.
Jacob Barrett-Newton
on 9 Mar 2018
Star Strider
on 9 Mar 2018
Not that I’m aware of. I’ve not looked through the code for the interpolation functions. However, they appear to use the stated method to extrapolate to new values.
Extrapolation, especially far from the data, is not generally considered appropriate, since the behavior of the data beyond the region-of-fit is by definition unknown.
Jacob Barrett-Newton
on 11 Mar 2018
Star Strider
on 11 Mar 2018
It makes sense, and lsqcurvefit (and ga and some other optimisation routines) can do it.
The ‘Parameter Estimation for a System of Differential Equations’ example is likely more applicable to your problem than the ‘Monod kinetics and curve fitting’ example, since the first fits a matrix dependent variable and the second fits a vector dependent variable.
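Not from the thread, but a minimal sketch of what such an lsqcurvefit call might look like for the model in the original question. Here tdata (a vector of measurement times) and ydata (a numel(tdata)-by-3 matrix of measurements) are hypothetical placeholders, and the bounds are illustrative only:

```matlab
% Wrap the ODE solve in the objective: integrate for candidate parameters P,
% then evaluate the solution at the measurement times.
odefun = @(t,y,P) ODE_Model(t,y,P(1),P(2),P(3),P(4),P(5),P(6),P(7),P(8),P(9));
objfun = @(P,t) deval(ode45(@(tt,yy) odefun(tt,yy,P), [0 max(t)], [4 5 6]), t).';

P0 = ones(1,9);                        % initial guess for the nine parameters
lb = zeros(1,9);  ub = 10*ones(1,9);   % example bounds
Pfit = lsqcurvefit(objfun, P0, tdata, ydata, lb, ub);
```

The transpose (.') makes the objective's output the same N-by-3 shape as ydata, which lsqcurvefit requires.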
Jacob Barrett-Newton
on 11 Mar 2018
Jacob Barrett-Newton
on 11 Mar 2018
Star Strider
on 11 Mar 2018
Scaling the data can be appropriate, although you have to scale the parameter estimates as well afterwards.
The ode45 function can be the reason the convergence is slow, because you could have a ‘stiff’ system. I would do the regression again with the unscaled numbers, and using ode15s instead. The advantage to using unscaled data and ode15s is that the parameter estimates would then be correct.
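The stiff solvers share ode45's interface, so this is a one-line change; using the variables already defined in the question's script:

```matlab
% Same call as before, with the stiff solver ode15s substituted for ode45
SolStiff = ode15s(@(t,y) ODE_Model(t,y,p_1,p_2,p_3,d_1,d_2,d_3,a,b,Tau), tspan, y0);
y = deval(SolStiff, 0:0.1:2);   % evaluate the solution at the plot times
```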
Jacob Barrett-Newton
on 11 Mar 2018
Star Strider
on 11 Mar 2018
It has to do with your data and probably your initial parameter estimates. And, of course, that you are using a gradient-descent solver.
When I have problems such as yours, I go to a derivative-free solver, my personal favourite being the Genetic Algorithm (the ga function). If you have the Global Optimization Toolbox, try something like the approach I used in How to find parameter estimation with fminsearch? And solve it with ode45? (link). It is not difficult to program your own fitness function that simply takes your current objective function, compares it to your data, and outputs a scalar result that ga can use. In this problem, I began with a very large initial population, since a large initial population will quite likely converge on an optimal parameter set. (With simple problems the ga defaults work. I rarely use ga with simple problems.)
Note that ga is robust but not fast, so it is likely best to get your optimization working and then go do something else for the few minutes it will take ga to converge. I also always define an 'OutputFcn' (see options in Input Arguments) to avoid anxiety about not knowing what ga is doing. Note that if you have defined parameter limits or other constraints with lsqcurvefit, you can use them in ga.
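A hedged sketch of the kind of fitness function described above, for the model in the question. tdata and ydata are again hypothetical measurements, y0 is the initial condition from the question, the bounds are illustrative, and the Global Optimization Toolbox is required:

```matlab
% Scalar fitness: integrate the model for candidate parameters P and
% return the sum of squared residuals against the data.
fitness = @(P) sum(sum((deval( ...
    ode15s(@(t,y) ODE_Model(t,y,P(1),P(2),P(3),P(4),P(5),P(6),P(7),P(8),P(9)), ...
           [0 max(tdata)], y0), tdata).' - ydata).^2));

opts  = optimoptions('ga', 'PopulationSize', 500, 'PlotFcn', @gaplotbestf);
Pbest = ga(fitness, 9, [], [], [], [], zeros(1,9), 10*ones(1,9), [], opts);
```

The large PopulationSize follows the advice above; gaplotbestf is one way to watch the best fitness per generation while ga runs.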
Jacob Barrett-Newton
on 23 Mar 2018
Star Strider
on 23 Mar 2018
When I ran your code with ‘y0(2)~=0’, the warning is:
Warning: Failure at t=2.354295e+02. Unable to meet integration tolerances without
reducing the step size below the smallest value allowed (4.547474e-13) at time t.
So it’s encountering a singularity there, and fails to complete the ‘dY’ vector. That’s the reason it throws the
Matrix dimensions must agree.
error in lsqcurvefit. I don’t see any obvious reason for that to occur. I defer to you to troubleshoot it, and be certain your kinetics equations in ‘DifEq’ are correct.