How do I get the optimum values of variables from many equations using the Optimization app?
I have 9 equations; each equation has 7 variables, and all 9 equations share the same 6 variables (x1-x6), so in total I have 15 variables across the 9 equations. A table that contains different values of all 15 variables is attached along with the 9 equations. To sum up, I need to substitute all the variables from the table into the equations in order to find the optimum values of variables x(1)-x(6). I really appreciate your help. Thanks in advance.
Walter Roberson
on 2 Apr 2018
Is this optimization to be done for each row of the table independently, or is it some kind of optimum over the whole table?
Rami Altam
on 2 Apr 2018
Rami Altam
on 3 Apr 2018
Torsten
on 3 Apr 2018
And what is the condition under which x(1)-x(6) are optimal?
John D'Errico
on 3 Apr 2018
Edited: John D'Errico
on 3 Apr 2018
I just noticed your attached equations, but they and your question leave me completely confused.
Since there are products of the variables x(1) - x(6), this is a large over-determined polynomial system of equations.
You have 9 equations. Are the 7 variables that you mention known constants? Or are they unknowns that you just don't wish to specify?
Then you apparently have 6 variables that you wish to solve for.
But counting those other variables, I find 9 of them, so actually it is the number 7 that is confusing wherever it arises.
So in reality, it appears that you have 9 equations with 13 or maybe 15 variables, 7 or 9 of which are to be left as unknowns. And you want to solve for the 6 unknowns x(1) - x(6).
This is called an over-determined polynomial system.
Sorry, but it is essentially impossible in general to find an analytical solution. And since you have 9 variables that are left as unknowns, it is impossible to find a numerical solution either, since those variables have no values given. You cannot use numerical root-finding tools on problems with undefined variables.
But then, at the bottom of the attachment, you add sets of values for 15 variables. Are those just sets of values that you tried, hoping they would solve things? What are they?
Walter Roberson
on 3 Apr 2018
John: suppose eqn #N involves x(1) through x(6) together with ANOTHER(N), for a total of 7 variables per equation. With 9 equations that would give the six x(1) through x(6) plus the 9 ANOTHER(N), for a total of 15. The number of variables does work out (but it did take me a couple of minutes to realize what was going on.)
Rami Altam
on 3 Apr 2018
John D'Errico
on 3 Apr 2018
Edited: John D'Errico
on 3 Apr 2018
There are still 15 variables. In only 9 equations. This is insufficient information to estimate 15 variables. The result will be a 6-dimensional manifold, embedded in the 15 dimensional parameter space.
And if the 9 variables are left fixed, then you have 6 variables in 9 equations. Now you have too many equations for an exact solution, an over-determined nonlinear (polynomial) problem. And since that set of 9 variables are left as essential unknowns, the equations will have no analytical solution, no matter what you do since they are effectively a high order polynomial.
As usual with these things, you can eliminate some of the variables (in theory), but that leaves you with a high-order polynomial problem with non-constant coefficients. Abel-Ruffini now comes into play, and worse, since the problem is still over-determined.
Finally, IF you give those 9 variables explicit numerical values, then you could in theory use a tool like lsqnonlin to solve for a least squares solution to the set. But this is all you can do. Note that there may be MULTIPLE solutions (local minimizers of the sum of squares) to that problem, and lsqnonlin (or ANY such tool) will find only one solution, based on the starting values. But this is as much as you can possibly do.
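To illustrate what a least-squares "solution" of an over-determined nonlinear system looks like, here is a minimal sketch in Python (scipy.optimize.least_squares standing in for MATLAB's lsqnonlin; the three-equation toy system is invented here for illustration and is not the question's equations):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy over-determined polynomial system: 3 equations in 2 unknowns.
# (Invented for illustration; not the equations from the question.)
def residuals(x):
    x1, x2 = x
    return np.array([
        x1**2 + x2 - 3.0,
        x1 + x2**2 - 5.0,
        x1 * x2 - 2.0,
    ])

# Least-squares "solution": minimizes the sum of squared residuals.
# Which minimum is found depends on the starting point.
sol1 = least_squares(residuals, x0=[1.0, 1.0])
sol2 = least_squares(residuals, x0=[-4.0, -4.0])
print(sol1.x, sol1.cost)
print(sol2.x, sol2.cost)
```

Here the first start happens to land in the basin of an exact root (x1, x2) = (1, 2); other starting points can end up at different local minimizers, which is exactly the multiple-solution caveat above.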
Rami Altam
on 3 Apr 2018
Walter Roberson
on 3 Apr 2018
I ran a minimization a couple of days ago, looking for the value of x(1) to x(6) that minimize the residue over the entire table. It is possible to get an answer -- but it is a useless answer.
0.617631659172905 0.60337855369917 1.39982766400487 0.822306952453468 1.08414067363206 3088.78724822114
The global residue this gives is within 2*eps (relatively speaking) to the slightly worse point
0.558183317363842 0.507508446535793 1.24098830665003 0.392072084136755 0.90990753703702 3508.60419108882
Basically, anywhere near those values will be within floating-point round-off error of as small a global residue minimum as you are going to be able to get.
But as I said, this result is completely useless.
Imagine that you had a table of Fahrenheit to Celsius conversions, so many degrees in gives so many degrees out, listed for a bunch of temperatures. Now imagine you are asked to find the one temperature that leads to the lowest overall residue compared to the table. The best you could do in that simple case would be to find the mean value (you can prove that the mean gives the minimal least-squares residue.) And that would be pretty useless -- it wouldn't tell you anything useful about the equations, because a useless question was asked.
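That claim about the mean is easy to check numerically; a tiny sketch (plain NumPy, with made-up temperature readings) comparing a brute-force search against the mean:

```python
import numpy as np

# Claim: the single constant c minimizing sum((t_i - c)^2) is the mean.
temps = np.array([12.0, 18.5, 21.0, 30.0, 7.5])   # made-up readings

# Brute-force scan over candidate values of c.
candidates = np.linspace(temps.min(), temps.max(), 10001)
sse = ((temps[None, :] - candidates[:, None]) ** 2).sum(axis=1)
best = candidates[np.argmin(sse)]

print(best, temps.mean())   # the brute-force minimizer matches the mean
```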
Likewise, if you try to work this as a residue matter then the best you are going to get is the centroid of the function space with respect to the equations and the particular readings that were taken.
The first two lines of your table have the nine variables all 0, but the two lines have different x(1) through x(6). If we are to understand that the nine variables are inputs and the 6 are outputs, then that would imply that your system can produce two different outputs for the same inputs. Is the implication that your system contains randomness? If so, then that is not easy to take into account.
Rami Altam
on 4 Apr 2018
Edited: Rami Altam
on 4 Apr 2018
Walter Roberson
on 4 Apr 2018
The minimization I did was with a routine that I am not prepared to release as yet. But the structure looked like this:
t = readtable('objective_data.txt');
t.x_1_(ismissing(t.x_1_)) = 0.75;
t.x_2_(ismissing(t.x_2_)) = 0.25;
t.x_3_(ismissing(t.x_3_)) = 1.125;
t.x_4_(ismissing(t.x_4_)) = 0.75;
t.x_5_(ismissing(t.x_5_)) = 1.125;
then call a minimizer on @(x) OF(x,t)
function y= OF(x, t)
y_1_ = (t.FOPT - (55930.4 - 330502.*x(1) - 3.18673E6.*x(2) + 344393.*x(3) + 471232.*x(4) + 1.58821E6.*x(5) - 156.186.*x(6) - 690828.*x(1).^2 + ...
4.18254E6.*x(1).*x(2) - 181854.*x(1).*x(3) - 88184.6.*x(1).*x(4) + 43046.5.*x(1).*x(5) - 58.2454.*x(1).*x(6) - 1.55312E6.*x(2).^2 - ...
17964.5.*x(2).*x(3) + 5968.0.*x(2).*x(4) - 13083.4.*x(2).*x(5) + 156.855.*x(2).*x(6) - 30717.4.*x(3).^2 + 11409.6.*x(3).*x(4) - 161742.*x(3).*x(5) +...
89.0528.*x(3).*x(6) - 421722.*x(4).^2 + 137095.*x(4).*x(5) + 170.907.*x(4).*x(6) - 762843.*x(5).^2 + 201.494.*x(5).*x(6) + 0.051949.*x(6).^2)).^2;
y_2_ = (t.FGTP - (-58289.1 + 19388.1.*x(1) - 164219.*x(2) + 21682.6.*x(3) + 26505.9.*x(4) + 158322.*x(5) - 8.73255.*x(6) - 28345.9.*x(1).^2 + ...
200378.*x(1).*x(2) - 46355.9.*x(1).*x(3) - 6581.48.*x(1).*x(4) - 10652.9.*x(1).*x(5) + 3.59031.*x(1).*x(6) - 58333.1.*x(2).^2 + 4520.79.*x(2).*x(3) -...
1769.44.*x(2).*x(4) + 1072.04.*x(2).*x(5) + 4.48638.*x(2).*x(6) + 13089.*x(3).^2 + 848.737.*x(3).*x(4) - 11896.9.*x(3).*x(5) + 11.6597.*x(3).*x(6) + ...
1345.19.*x(4).^2 - 22064.1.*x(4).*x(5) + 9.60202.*x(4).*x(6) - 55133.4.*x(5).^2 + 3.28075.*x(5).*x(6) + 0.00469155.*x(6).^2)).^2;
y_3_ = (t.FWPT - (-795795. + 333736.*x(1) - 95084.6.*x(2) + 474404.*x(3) + 277230.*x(4) + 529837.*x(5) - 103.866.*x(6) + 64618.6.*x(1).^2 + ...
137731.*x(1).*x(2) + 427734.*x(1).*x(3) - 156845.*x(1).*x(4) - 617487.*x(1).*x(5) - 162.713.*x(1).*x(6) - 46866.3.*x(2).^2 + 24158.3.*x(2).*x(3) + ...
1255.28.*x(2).*x(4) - 32153.1.*x(2).*x(5) - 0.569499.*x(2).*x(6) - 79781.7.*x(3).^2 - 154761.*x(3).*x(4) - 478208.*x(3).*x(5) - 104.979.*x(3).*x(6) - ...
66917.3.*x(4).^2 + 113688.*x(4).*x(5) + 31.2191.*x(4).*x(6) + 145619.*x(5).^2 + 212.031.*x(5).*x(6) + 0.041897.*x(6).^2)).^2;
y_4_ = (t.WBHP_1_ - (112.43 + 24.6435.*x(1) - 34.3869.*x(2) - 13.7306.*x(3) - 54.3791.*x(4) + 170.973.*x(5) - 0.00602261.*x(6) + 0.643438.*x(1).^2 ...
+ 17.5735.*x(1).*x(2) - 16.6129.*x(1).*x(3) + 10.4442.*x(1).*x(4) - 25.5992.*x(1).*x(5) + 0.00466402.*x(1).*x(6) + 16.554.*x(2).^2 - ...
1.02055.*x(2).*x(3) + 0.696829.*x(2).*x(4) - 1.45535.*x(2).*x(5) + 0.00116119.*x(2).*x(6) + 13.3537.*x(3).^2 + 2.29935.*x(3).*x(4) - ...
1.04702.*x(3).*x(5) + 0.00349116.*x(3).*x(6) + 15.0195.*x(4).^2 + 20.9842.*x(4).*x(5) - 0.00126936.*x(4).*x(6) - 40.995.*x(5).^2 - 0.00488441.*x(5).*x(6) ...
- 0.0000010759.*x(6).^2)).^2;
y_5_ = (t.WBHP_2_ - (217.257 - 80.2807.*x(1) + 26.4349.*x(2) - 43.6652.*x(3) - 12.0739.*x(4) + 42.1795.*x(5) - 0.0102587.*x(6) + 28.9528.*x(1).^2 - ...
40.2188.*x(1).*x(2) + 13.1032.*x(1).*x(3) + 2.53167.*x(1).*x(4) + 21.4719.*x(1).*x(5) + 0.00548878.*x(1).*x(6) + 16.9037.*x(2).^2 - ...
1.8654.*x(2).*x(3) + 2.69922.*x(2).*x(4) + 2.652.*x(2).*x(5) - 0.00143161.*x(2).*x(6) + 5.32427.*x(3).^2 + 7.97545.*x(3).*x(4) + 9.76013.*x(3).*x(5) ...
+ 0.00179993.*x(3).*x(6) + 6.93362.*x(4).^2 - 0.159805.*x(4).*x(5) - 0.000366409.*x(4).*x(6) - 3.60608.*x(5).^2 - 0.0066793.*x(5).*x(6) - ...
3.38236E-7.*x(6).^2)).^2;
y_6_ = (t.WBHP_3_ - (166.423 + 27.5154.*x(1) - 39.1321.*x(2) + 191.72.*x(3) - 69.4435.*x(4) - 85.394.*x(5) - 0.00881759.*x(6) - 9.02332.*x(1).^2 + ...
33.6117.*x(1).*x(2) - 32.0101.*x(1).*x(3) - 2.90598.*x(1).*x(4) + 14.3007.*x(1).*x(5) + 0.00264206.*x(1).*x(6) + 1.10142.*x(2).^2 - ...
2.49883.*x(2).*x(3) + 0.012439.*x(2).*x(4) + 1.20293.*x(2).*x(5) + 0.00209554.*x(2).*x(6) - 52.7031.*x(3).^2 - 25.716.*x(3).*x(4) + ...
11.8914.*x(3).*x(5) + 0.00476638.*x(3).*x(6) - 28.6961.*x(4).^2 + 124.737.*x(4).*x(5) - 0.00199408.*x(4).*x(6) - 11.2814.*x(5).^2 - 0.0041272.*x(5).*x(6) ...
- 2.59924E-7.*x(6).^2)).^2;
y_7_ = (t.WBHP_4_ - (180.763 - 11.6212.*x(1) - 11.8332.*x(2) + 149.082.*x(3) - 96.8172.*x(4) - 51.5179.*x(5) - 0.00945883.*x(6) + 4.53155.*x(1).^2 ...
- 6.65685.*x(1).*x(2) - 1.30623.*x(1).*x(3) + 4.94878.*x(1).*x(4) + 1.93587.*x(1).*x(5) + 0.00310054.*x(1).*x(6) + 21.5073.*x(2).^2 - ...
0.416943.*x(2).*x(3) - 0.106049.*x(2).*x(4) - 0.889398.*x(2).*x(5) + 0.000197474.*x(2).*x(6) - 34.8886.*x(3).^2 - 10.3298.*x(3).*x(4) - ...
8.15057.*x(3).*x(5) + 0.00357232.*x(3).*x(6) - 34.3185.*x(4).^2 + 120.958.*x(4).*x(5) + 0.00239253.*x(4).*x(6) - 5.28872.*x(5).^2 - ...
0.00320331.*x(5).*x(6) - 0.00000123636.*x(6).^2)).^2;
y_8_ = (t.WBHP_5_ - (192.642 - 47.0351.*x(1) - 73.0649.*x(2) - 20.4026.*x(3) - 16.0969.*x(4) + 114.684.*x(5) - 0.0196958.*x(6) - 10.3205.*x(1).^2 + ...
111.171.*x(1).*x(2) - 14.9388.*x(1).*x(3) - 3.32829.*x(1).*x(4) + 19.444.*x(1).*x(5) + 0.00622797.*x(1).*x(6) - 33.7897.*x(2).^2 - 1.12198.*x(2).*x(3) ...
+ 0.451366.*x(2).*x(4) - 0.757496.*x(2).*x(5) - 0.00142821.*x(2).*x(6) + 20.5391.*x(3).^2 - 8.2265.*x(3).*x(4) - 2.24338.*x(3).*x(5) + ...
0.00541462.*x(3).*x(6) + 44.0278.*x(4).^2 - 27.9754.*x(4).*x(5) + 0.00154169.*x(4).*x(6) - 27.4073.*x(5).^2 + 0.000757436.*x(5).*x(6) - ...
0.00000104303.*x(6).^2)).^2;
y_9_ = (t.WBHP_6_ - (227.535 - 24.7576.*x(1) - 30.7689.*x(2) - 36.0309.*x(3) - 147.753.*x(4) - 33.2402.*x(5) + 0.020352.*x(6) - 13.1713.*x(1).^2 + ...
75.0485.*x(1).*x(2) - 3.07831.*x(1).*x(3) + 10.6107.*x(1).*x(4) + 0.0487154.*x(1).*x(5) + 0.00387367.*x(1).*x(6) - 32.8412.*x(2).^2 - ...
0.347415.*x(2).*x(3) + 1.82151.*x(2).*x(4) + 0.179057.*x(2).*x(5) - 0.00485434.*x(2).*x(6) + 7.57806.*x(3).^2 + 30.3197.*x(3).*x(4) + ...
0.964379.*x(3).*x(5) + 0.000932607.*x(3).*x(6) + 139.822.*x(4).^2 + 22.2055.*x(4).*x(5) - 0.0255241.*x(4).*x(6) + 9.20188.*x(5).^2 - ...
0.00169528.*x(5).*x(6) - 9.53375E-7.*x(6).^2)).^2;
%the y_*_ terms are already squared residuals, so do not square them again
y = sum(y_1_ + y_2_ + y_3_ + y_4_ + y_5_ + y_6_ + y_7_ + y_8_ + y_9_);
end
Hmmmmm... I can think of a different interpretation of the question that I can work on.
Rami Altam
on 4 Apr 2018
Walter Roberson
on 4 Apr 2018
Sorry, the optimization routine that I used is not even in alpha release yet. Even if I were willing to provide it, explaining all of its features would take more time than either of us has at the moment.
If you use the code above together with fminsearch then you would get a result that was functionally equivalent to the ones I posted, both of which are mathematically useless for your purpose.
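For reference, fminsearch implements the derivative-free Nelder-Mead simplex method; a minimal Python stand-in (scipy.optimize.minimize with a toy objective invented here, not the OF function above) looks like:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective standing in for OF(x, t): a sum-of-squares surface
# with a known minimum value of 0.5 at x = (1, -2).
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 0.5

# Nelder-Mead is the simplex search that fminsearch uses; the option
# names mirror the MaxFunEvals/MaxIter limits in the MATLAB code above.
res = minimize(objective, x0=[0.0, 0.0], method='Nelder-Mead',
               options={'maxfev': 2000, 'maxiter': 2000})
print(res.x, res.fun)
```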
I already gave you the analogy of the temperature table: getting a single temperature that gives you the lowest least-squared residue for a given list of input temperatures is not of any benefit to anyone.
Walter Roberson
on 5 Apr 2018
I found a different interpretation of the problem and I am currently processing the data, which will take a while.
My first pass is:
best match at table entry #16, which is
ans =
1×15 table
x_1_ x_2_ x_3_ x_4_ x_5_ x_6_ FOPT FGPT FWPT WBHP_1_ WBHP_2_ WBHP_3_ WBHP_4_ WBHP_5_ WBHP_6_
_____ ____ _____ ____ _____ ____ ______ ________ ________ ________ ________ ________ ________ ________ ________
1.125 0.5 1.125 0.75 1.125 304 219417 16656.74 0.345634 188.7567 199.7044 228.8313 228.3929 207.6005 188.2294
Search found x(1):x(6) to be:
0.916 0.484 1.244 0.890 0.885 422.588
Euclidean distance: 118.588
but I already have other results better than that, using my private routine (which is still chugging through.)
Walter Roberson
on 5 Apr 2018
Edited: Walter Roberson
on 6 Apr 2018
After quite a number of hours of computation, the current solution is:
best match at table entry #6, which is
ans =
1×15 table
x_1_ x_2_ x_3_ x_4_ x_5_ x_6_ FOPT FGPT FWPT WBHP_1_ WBHP_2_ WBHP_3_ WBHP_4_ WBHP_5_ WBHP_6_
____ ____ _____ ____ _____ ____ _____ ________ ________ _______ ________ ________ ________ ________ ________
0.75 0.5 1.125 1 1.125 90 53994 3973.545 0.037613 175.479 176.3986 229.6768 229.7601 190.7061 152.5175
Search found x(1):x(6) to be:
0.806 0.532 1.228 0.581 0.609 584.578
Euclidean distance: 494.578
I will be looking further at the results.
Walter Roberson
on 6 Apr 2018
Round 3:
best match at table entry #6, which is
ans =
1×15 table
x_1_ x_2_ x_3_ x_4_ x_5_ x_6_ FOPT FGPT FWPT WBHP_1_ WBHP_2_ WBHP_3_ WBHP_4_ WBHP_5_ WBHP_6_
____ ____ _____ ____ _____ ____ _____ ________ ________ _______ ________ ________ ________ ________ ________
0.75 0.5 1.125 1 1.125 90 53994 3973.545 0.037613 175.479 176.3986 229.6768 229.7601 190.7061 152.5175
Search found x(1):x(6) to be:
0.806 0.532 1.228 0.581 0.609 584.437
Euclidean distance: 494.438
This is almost exactly the same place as above, with shifts of less than 1e-4, except for the final position, x(6), which operates on a different scale.
The euclidean distances are dominated by x(6).
Answers (1)
Walter Roberson
on 6 Apr 2018
Edited: Walter Roberson
on 6 Apr 2018
Below is the beginning of the code I put together to do the searching. Everything beyond this in my version of the code involves invoking my private routines to do more refined searching.
The point that is found to be best by the below pass turned out not to be in the top 25 when a more refined search is done, so the more refined search turned out to be worthwhile (but time consuming.)
I will post this and then explain what it is about.
skip_fminsearch = false;
t = readtable('objective_data.txt');
t.x_1_(ismissing(t.x_1_)) = 0.75;
t.x_2_(ismissing(t.x_2_)) = 0.25;
t.x_3_(ismissing(t.x_3_)) = 1.125;
t.x_4_(ismissing(t.x_4_)) = 0.75;
t.x_5_(ismissing(t.x_5_)) = 1.125;
h = height(t);
results = zeros(h, 6);
residues = zeros(h, 1);
Fv = cell(h, 1);
Fm = cell(h, 1);
for K = 1 : h
Fv{K} = @(x) OF(x,t(K,:));
Fm{K} = @(x1,x2,x3,x4,x5,x6) OFm(x1,x2,x3,x4,x5,x6,t(K,:));
end
fig = figure(7);
clf(fig);
ax = axes('Parent', fig);
legend('show');
title(ax, 'residues');
hold(ax, 'on');
if ~skip_fminsearch
options = struct('MaxFunEvals', 2000, 'MaxIter', 2000, 'Display', 'off');
wb = waitbar(0, 'Searching via fminsearch...');
wb_clean = onCleanup(@() delete(wb));
alh1 = animatedline('Color', 'b', 'DisplayName', 'via fminsearch');
for K = 1 : h
waitbar((K-1)/h, wb);
[results(K, :), residues(K)] = fminsearch(Fv{K}, t{K,1:6}, options);
addpoints(alh1, K, residues(K));
drawnow limitrate
end
clear wb_clean
[bestresidue, bestidx] = min(residues);
plot(ax, bestidx, bestresidue, 'b*', 'DisplayName', 'best fminsearch residue');
drawnow();
fprintf('best match at table entry #%d, which is\n', bestidx);
t(bestidx,:)
fprintf('Search found x(1):x(6) to be:\n');
fprintf('%8.3f %8.3f %8.3f %8.3f %8.3f %8.3f\n', results(bestidx,:));
fprintf('Euclidean distance: %g\n', norm(t{bestidx,1:6}-results(bestidx,:)));
end
together with
function y= OF(x, t)
%t is a table. Entries could be column vectors, so we might be forming a column vector of results
y_1_ = (t.FOPT - (55930.4 - 330502.*x(1) - 3.18673E6.*x(2) + 344393.*x(3) + 471232.*x(4) + 1.58821E6.*x(5) - 156.186.*x(6) - 690828.*x(1).^2 + ...
4.18254E6.*x(1).*x(2) - 181854.*x(1).*x(3) - 88184.6.*x(1).*x(4) + 43046.5.*x(1).*x(5) - 58.2454.*x(1).*x(6) - 1.55312E6.*x(2).^2 - ...
17964.5.*x(2).*x(3) + 5968.0.*x(2).*x(4) - 13083.4.*x(2).*x(5) + 156.855.*x(2).*x(6) - 30717.4.*x(3).^2 + 11409.6.*x(3).*x(4) - 161742.*x(3).*x(5) +...
89.0528.*x(3).*x(6) - 421722.*x(4).^2 + 137095.*x(4).*x(5) + 170.907.*x(4).*x(6) - 762843.*x(5).^2 + 201.494.*x(5).*x(6) + 0.051949.*x(6).^2)).^2;
y_2_ = (t.FGPT - (-58289.1 + 19388.1.*x(1) - 164219.*x(2) + 21682.6.*x(3) + 26505.9.*x(4) + 158322.*x(5) - 8.73255.*x(6) - 28345.9.*x(1).^2 + ...
200378.*x(1).*x(2) - 46355.9.*x(1).*x(3) - 6581.48.*x(1).*x(4) - 10652.9.*x(1).*x(5) + 3.59031.*x(1).*x(6) - 58333.1.*x(2).^2 + 4520.79.*x(2).*x(3) -...
1769.44.*x(2).*x(4) + 1072.04.*x(2).*x(5) + 4.48638.*x(2).*x(6) + 13089.*x(3).^2 + 848.737.*x(3).*x(4) - 11896.9.*x(3).*x(5) + 11.6597.*x(3).*x(6) + ...
1345.19.*x(4).^2 - 22064.1.*x(4).*x(5) + 9.60202.*x(4).*x(6) - 55133.4.*x(5).^2 + 3.28075.*x(5).*x(6) + 0.00469155.*x(6).^2)).^2;
y_3_ = (t.FWPT - (-795795. + 333736.*x(1) - 95084.6.*x(2) + 474404.*x(3) + 277230.*x(4) + 529837.*x(5) - 103.866.*x(6) + 64618.6.*x(1).^2 + ...
137731.*x(1).*x(2) + 427734.*x(1).*x(3) - 156845.*x(1).*x(4) - 617487.*x(1).*x(5) - 162.713.*x(1).*x(6) - 46866.3.*x(2).^2 + 24158.3.*x(2).*x(3) + ...
1255.28.*x(2).*x(4) - 32153.1.*x(2).*x(5) - 0.569499.*x(2).*x(6) - 79781.7.*x(3).^2 - 154761.*x(3).*x(4) - 478208.*x(3).*x(5) - 104.979.*x(3).*x(6) - ...
66917.3.*x(4).^2 + 113688.*x(4).*x(5) + 31.2191.*x(4).*x(6) + 145619.*x(5).^2 + 212.031.*x(5).*x(6) + 0.041897.*x(6).^2)).^2;
y_4_ = (t.WBHP_1_ - (112.43 + 24.6435.*x(1) - 34.3869.*x(2) - 13.7306.*x(3) - 54.3791.*x(4) + 170.973.*x(5) - 0.00602261.*x(6) + 0.643438.*x(1).^2 ...
+ 17.5735.*x(1).*x(2) - 16.6129.*x(1).*x(3) + 10.4442.*x(1).*x(4) - 25.5992.*x(1).*x(5) + 0.00466402.*x(1).*x(6) + 16.554.*x(2).^2 - ...
1.02055.*x(2).*x(3) + 0.696829.*x(2).*x(4) - 1.45535.*x(2).*x(5) + 0.00116119.*x(2).*x(6) + 13.3537.*x(3).^2 + 2.29935.*x(3).*x(4) - ...
1.04702.*x(3).*x(5) + 0.00349116.*x(3).*x(6) + 15.0195.*x(4).^2 + 20.9842.*x(4).*x(5) - 0.00126936.*x(4).*x(6) - 40.995.*x(5).^2 - 0.00488441.*x(5).*x(6) ...
- 0.0000010759.*x(6).^2)).^2;
y_5_ = (t.WBHP_2_ - (217.257 - 80.2807.*x(1) + 26.4349.*x(2) - 43.6652.*x(3) - 12.0739.*x(4) + 42.1795.*x(5) - 0.0102587.*x(6) + 28.9528.*x(1).^2 - ...
40.2188.*x(1).*x(2) + 13.1032.*x(1).*x(3) + 2.53167.*x(1).*x(4) + 21.4719.*x(1).*x(5) + 0.00548878.*x(1).*x(6) + 16.9037.*x(2).^2 - ...
1.8654.*x(2).*x(3) + 2.69922.*x(2).*x(4) + 2.652.*x(2).*x(5) - 0.00143161.*x(2).*x(6) + 5.32427.*x(3).^2 + 7.97545.*x(3).*x(4) + 9.76013.*x(3).*x(5) ...
+ 0.00179993.*x(3).*x(6) + 6.93362.*x(4).^2 - 0.159805.*x(4).*x(5) - 0.000366409.*x(4).*x(6) - 3.60608.*x(5).^2 - 0.0066793.*x(5).*x(6) - ...
3.38236E-7.*x(6).^2)).^2;
y_6_ = (t.WBHP_3_ - (166.423 + 27.5154.*x(1) - 39.1321.*x(2) + 191.72.*x(3) - 69.4435.*x(4) - 85.394.*x(5) - 0.00881759.*x(6) - 9.02332.*x(1).^2 + ...
33.6117.*x(1).*x(2) - 32.0101.*x(1).*x(3) - 2.90598.*x(1).*x(4) + 14.3007.*x(1).*x(5) + 0.00264206.*x(1).*x(6) + 1.10142.*x(2).^2 - ...
2.49883.*x(2).*x(3) + 0.012439.*x(2).*x(4) + 1.20293.*x(2).*x(5) + 0.00209554.*x(2).*x(6) - 52.7031.*x(3).^2 - 25.716.*x(3).*x(4) + ...
11.8914.*x(3).*x(5) + 0.00476638.*x(3).*x(6) - 28.6961.*x(4).^2 + 124.737.*x(4).*x(5) - 0.00199408.*x(4).*x(6) - 11.2814.*x(5).^2 - 0.0041272.*x(5).*x(6) ...
- 2.59924E-7.*x(6).^2)).^2;
y_7_ = (t.WBHP_4_ - (180.763 - 11.6212.*x(1) - 11.8332.*x(2) + 149.082.*x(3) - 96.8172.*x(4) - 51.5179.*x(5) - 0.00945883.*x(6) + 4.53155.*x(1).^2 ...
- 6.65685.*x(1).*x(2) - 1.30623.*x(1).*x(3) + 4.94878.*x(1).*x(4) + 1.93587.*x(1).*x(5) + 0.00310054.*x(1).*x(6) + 21.5073.*x(2).^2 - ...
0.416943.*x(2).*x(3) - 0.106049.*x(2).*x(4) - 0.889398.*x(2).*x(5) + 0.000197474.*x(2).*x(6) - 34.8886.*x(3).^2 - 10.3298.*x(3).*x(4) - ...
8.15057.*x(3).*x(5) + 0.00357232.*x(3).*x(6) - 34.3185.*x(4).^2 + 120.958.*x(4).*x(5) + 0.00239253.*x(4).*x(6) - 5.28872.*x(5).^2 - ...
0.00320331.*x(5).*x(6) - 0.00000123636.*x(6).^2)).^2;
y_8_ = (t.WBHP_5_ - (192.642 - 47.0351.*x(1) - 73.0649.*x(2) - 20.4026.*x(3) - 16.0969.*x(4) + 114.684.*x(5) - 0.0196958.*x(6) - 10.3205.*x(1).^2 + ...
111.171.*x(1).*x(2) - 14.9388.*x(1).*x(3) - 3.32829.*x(1).*x(4) + 19.444.*x(1).*x(5) + 0.00622797.*x(1).*x(6) - 33.7897.*x(2).^2 - 1.12198.*x(2).*x(3) ...
+ 0.451366.*x(2).*x(4) - 0.757496.*x(2).*x(5) - 0.00142821.*x(2).*x(6) + 20.5391.*x(3).^2 - 8.2265.*x(3).*x(4) - 2.24338.*x(3).*x(5) + ...
0.00541462.*x(3).*x(6) + 44.0278.*x(4).^2 - 27.9754.*x(4).*x(5) + 0.00154169.*x(4).*x(6) - 27.4073.*x(5).^2 + 0.000757436.*x(5).*x(6) - ...
0.00000104303.*x(6).^2)).^2;
y_9_ = (t.WBHP_6_ - (227.535 - 24.7576.*x(1) - 30.7689.*x(2) - 36.0309.*x(3) - 147.753.*x(4) - 33.2402.*x(5) + 0.020352.*x(6) - 13.1713.*x(1).^2 + ...
75.0485.*x(1).*x(2) - 3.07831.*x(1).*x(3) + 10.6107.*x(1).*x(4) + 0.0487154.*x(1).*x(5) + 0.00387367.*x(1).*x(6) - 32.8412.*x(2).^2 - ...
0.347415.*x(2).*x(3) + 1.82151.*x(2).*x(4) + 0.179057.*x(2).*x(5) - 0.00485434.*x(2).*x(6) + 7.57806.*x(3).^2 + 30.3197.*x(3).*x(4) + ...
0.964379.*x(3).*x(5) + 0.000932607.*x(3).*x(6) + 139.822.*x(4).^2 + 22.2055.*x(4).*x(5) - 0.0255241.*x(4).*x(6) + 9.20188.*x(5).^2 - ...
0.00169528.*x(5).*x(6) - 9.53375E-7.*x(6).^2)).^2;
%each of these might be a column vector. Adding the y_* gets a scalar result per row.
%for lack of anything better to do, we add all the rows for an aggregate total
y = sum(y_1_ + y_2_ + y_3_ + y_4_ + y_5_ + y_6_ + y_7_ + y_8_ + y_9_);
end
Walter Roberson
on 6 Apr 2018
So what I did was to reinterpret the question somewhat.
Suppose that the table entries, except for x(1) through x(6), are somehow authoritative measurements; suppose the OF equations you posted are the governing equations; and suppose that, for any one row, the x(1) through x(6) entries are hypotheses (rather than actual measurements) about the best x(1:6) entries associated with the other measurements for that row.
Under those conditions, you could take the constants associated with a row in the table, substitute in that row's x(1:6), and get out a residue. And then you could ask which row gives the lowest residue, so as to find the existing row that gives the best match. (My code above skips that calculation.)
Now, if you consider the x(1:6) as being hypotheses, then for any one row of the table, you can ask what the x(1:6) are that give the lowest residue for that row. And then you can compare the residues over the various rows in order to find the entry in the table for which some x(1:6) can be found such that the OF equations best model that row.
The code I posted with the fminsearch runs those per-row calculations to attempt to find the x(1:6) for that row that gives the minima for that row.
With the function count limit I imposed (2000, which you could raise), and using the row's x(1:6) as the starting point, what it finds is that the 16th row is the most explainable through some x(1:6) using those model equations.
Other search routines can be substituted for fminsearch, such as ga() or fmincon(), and different starting points can be used. In my own code after the end of what I posted above, I used a custom minimizer to look for minima. My custom minimizer tries to be a global minimizer, but (like all global minimizers that do not work symbolically from derivatives) my custom minimizer cannot make any guarantees.
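The custom minimizer itself is not shown, but the general multi-start idea it follows can be sketched (in Python, with Nelder-Mead as the local searcher and a one-dimensional toy objective invented here; as noted, this gives no global guarantee):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy multimodal objective standing in for a per-row residue:
# several local minima, the best ones near multiples of pi/3.
def objective(x):
    return np.sin(3.0 * x[0]) ** 2 + 0.1 * (x[0] - 0.6) ** 2

# Multi-start: run a local minimizer from several random points
# and keep the best result found.
best = None
for _ in range(20):
    x0 = rng.uniform(-3.0, 3.0, size=1)
    res = minimize(objective, x0, method='Nelder-Mead')
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```

Each restart only finds the local minimum of its own basin; keeping the best over many restarts is what makes the search "try to be global" without any symbolic guarantee.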
After one round of my custom minimizer, I identified row #6 as being the one that has some x(1:6) leading to the lowest residue based on those equations and the constants in that row.
After the first search round, I selected the residues that were less than 100, which gave 15 candidates for the short-list. I then ran a second round for those alone, telling my custom minimizer to search in smaller steps.
The second more careful round also identified row #6 as being the one that has some x(1:6) leading to the lowest residue based upon those equations and constants in that row, but the second round refined the position a little from the first round.
The second round also identified that row #4 of the table has a residue that is not a lot different. It is plausible that, if an even more careful search were done for rows #4 and #6, row #4 might turn out to be the one for which some x(1:6) best matches the OF equations. The third best was row #5, which also does fairly well, but I think it is a little less plausible that row #5 could win out overall. A little further back, but still decent, are rows 20, 21, 22, 3, and 19. Things get gradually worse after that, with no real sharp jump until roughly half of the entries (so you could say that about half of the entries are difficult to explain as being consistent with the model, but below that you have to make some arbitrary decisions as to what counts as a "good enough" match).
The "Euclidean distance" figures that I posted above are the Euclidean distance between the best x(1:6) I could find for that row, to the x(1:6) that was associated with the row in the table. The x(6) contribution pretty much ruins the other contributions for that purpose, so this turned out not to be useful.