Iterative process for selecting optimal parameters in a function

I have the following data:
d = ...
EDIT: Matt J moved data to attached .mat file
and I have the following function where the data is used:
function [mae, mod] = my_func(obs, x2, S, a, b, c, alpha)
% start function
% exponential smoothing of x2 with smoothing factor alpha
st = x2(1);
for i = 2:length(x2)
    st(i) = alpha.*x2(i) + (1-alpha).*st(i-1);
end
st = st(:);
% main model equation
mod = a + b.*st + c.*S;
% calculate mean absolute error
mae = nanmean(abs(mod - obs));
end
I can use the data in the function as:
obs = d(:,1);
x2 = d(:,2);
S = d(:,3);
a = 0.5;
b = 0.5;
c = 0.5;
alpha = 0.5; % between 0 and 1
[mae, mod] = my_func(obs, x2, S, a, b, c, alpha);
However, I don't know what the values for a, b, c, and alpha should be. The only thing I know is that alpha should be between 0 and 1. I have been told to use an iterative process to find the best values for each of these parameters, i.e. the values that minimise the mean absolute error of the model (which is calculated in the function).
I was thinking of doing this by looping over every possible combination of values (between certain limits), but then thought that this would take ages to run and probably isn't the best way forward. Can anyone suggest a more sophisticated approach, keeping in mind that alpha has to be between 0 and 1?
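For reference, the brute-force approach I was considering would look roughly like this (the grid ranges for a, b, and c are just placeholder guesses, since I don't know sensible limits for them):

```matlab
% Coarse grid search over all four parameters (placeholder ranges for a, b, c)
aGrid     = -1:0.25:1;
bGrid     = -1:0.25:1;
cGrid     = -1:0.25:1;
alphaGrid = 0:0.1:1;    % alpha must stay in [0,1]

bestMae = Inf;
for a = aGrid
    for b = bGrid
        for c = cGrid
            for alpha = alphaGrid
                mae = my_func(obs, x2, S, a, b, c, alpha);
                if mae < bestMae
                    bestMae = mae;
                    best = [a, b, c, alpha];
                end
            end
        end
    end
end
```

Even this coarse grid evaluates 9 x 9 x 9 x 11 = 8019 parameter combinations, which is why I suspect it won't scale once I refine the grid.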
  4 Comments
Brendan Hamm
Brendan Hamm on 1 Sep 2015
Edited: Brendan Hamm on 1 Sep 2015
That's what I had in mind given the statement, "I have been told to use an iterative process to find the best values for each of these parameters for reducing the mean absolute error of the model". I figured this was a first step before considering changing the problem.
Richard Woolway
Richard Woolway on 2 Sep 2015
Yes, I have the optimization toolbox. This is a method that is suggested in a scientific paper and I am trying to use that method. In the paper they state that they used an 'iterative process to find the best values for each of these parameters for reducing the mean absolute error of the model.'


Answers (1)

Matt J
Matt J on 1 Sep 2015
Edited: Matt J on 2 Sep 2015
This uses minL1lin from the File Exchange. It is similar in technique to fminspleas, which would be a valid alternative if a least-squares objective were used instead.
L = load('attachedData.mat');
fun = @(alpha) my_func(alpha, L.d);  % fminbnd minimizes the first output (mae)
alphaOptimal = fminbnd(fun, 0, 1);   % alpha constrained to [0,1]
[maeOptimal, abc, mod] = fun(alphaOptimal);
aOptimal = abc(1);
bOptimal = abc(2);
cOptimal = abc(3);
function [mae, abc, mod] = my_func(alpha, d)
obs = d(:,1);
x2 = d(:,2);
S = d(:,3);
N = length(x2);
% filter() implements the recursion st(i) = alpha*x2(i) + (1-alpha)*st(i-1)
st(1) = x2(1);
st(2:N) = filter(alpha, [1, -(1-alpha)], x2(2:end), (1-alpha)*x2(1));
C = [ones(N,1), st(:), S];
% for fixed alpha, the best a, b, c solve a linear L1 minimization problem
opts = optimoptions(@linprog, 'Display', 'none');
[abc, mae] = minL1lin(C, obs, [], [], [], [], [], [], [], opts);
if isempty(abc) || ~all(isfinite(abc))
    mae = Inf;
end
if nargout > 2
    mod = C*abc;  % modelled values
end
end
  4 Comments
Matt J
Matt J on 2 Sep 2015
As a further check, you can see if FMINSEARCH improves the result when optimizing with respect to all 4 unknowns, as below. I don't see a big change in the result.
fun=@(p) mae4vars(p,L.d);
tic
[p,mae]=fminsearch(fun,[alphaOptimal;abc(:)]);
alpha=p(1);
a=p(2); b=p(3); c=p(4);
toc
function mae = mae4vars(p, d)
alpha = p(1);
a = p(2); b = p(3); c = p(4);
obs = d(:,1);
x2 = d(:,2);
S = d(:,3);
N = length(x2);
st(1) = x2(1);
st(2:N) = filter(alpha, [1, -(1-alpha)], x2(2:end), (1-alpha)*x2(1));
mod = a*ones(N,1) + b*st(:) + c*S;
mae = sum(abs(mod - obs));  % sum of absolute errors; same minimizer as the mean
end

