Efficiently adding scattered data points to iterative interpolation

Dear all,
The task I want to solve is to efficiently approximate a function from scattered data.
At each discrete time step I receive an unknown number of data samples, whose locations are randomly drawn from a continuous space.
I make use of the function TriScatteredInterp (scatteredInterpolant is not available to me at the moment) and update the set of data points according to "Interpolating Scattered Data Using the scatteredInterpolant Class", Section 6:
for k = 1:T % discrete time for-loop (T: time horizon)
    if k == 1 % create the initial interpolation object
        % x_locations, y_locations, measurements are assumed to be cell
        % arrays holding one batch of samples per time step
        xdata = x_locations{k}(:);
        ydata = y_locations{k}(:);
        zdata = measurements{k}(:);
        F = TriScatteredInterp(xdata, ydata, zdata);
    else % augment the existing interpolation object
        newlocs = [x_locations{k}(:), y_locations{k}(:)];
        F.X(end+(1:size(newlocs,1)), :) = newlocs;
        F.V(end+(1:size(newlocs,1))) = measurements{k}(:);
    end % EO if
end % EO for
Now, the challenge is that the time horizon, i.e. the for-loop, can be really long, leading to a huge object, or rather a huge array F.X. This slows the loop down significantly the longer it runs.
I would like to limit the number of data points in F.X and F.V such that I obtain a constant execution time. The documentation linked above describes how to merge data values at coincident point locations under "Handling Duplicate Point Locations".
However, there are two problems with this:
1) Recursively adding duplicate data points so that an averaging is performed doesn't work, since the duplicate points in F.X are immediately removed, even if the value in F.V is added before the location. (I guess the auto-averaging of duplicate points only works at the moment the interpolation object is created, and would deliver wrong results for iteratively added data since the normalisation constant is not known; see the sketch after problem 2 below.)
2) As mentioned above, I have a continuous space, so the probability of getting a duplicate data point is zero. The documentation even describes my case: "However in some instances, data points can be close rather than coincident, and the values at those locations can be different." Unfortunately, it nowhere comments on how to tackle this.
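For problem 1, I imagine one would need to keep an explicit sample count per stored point, so that the running mean can be updated incrementally. A minimal sketch of what I mean (newloc, newval and the bookkeeping array cnt are placeholder names of mine; cnt lives outside the interpolation object):

% Sketch: explicit per-point counts make incremental averaging possible
[tf, loc] = ismember(newloc, F.X, 'rows'); % is the new point already stored?
if tf
    cnt(loc) = cnt(loc) + 1; % update the running mean in place
    F.V(loc) = F.V(loc) + (newval - F.V(loc)) / cnt(loc);
else
    F.X(end+1, :) = newloc; % genuinely new point
    F.V(end+1) = newval;
    cnt(end+1) = 1;
end

This of course only helps for exact duplicates, which, as problem 2 says, practically never occur.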
My question is: are there any solutions for incorporating data points in a smart way, such that points close to each other get merged and the maximum number of data points is kept constant?
In essence, the ideal outcome would be that the scattered data points converge to a (possibly predefined) grid that is equally spaced, or that even has more supporting points in areas where more samples are drawn.
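To illustrate the grid idea: each incoming sample could be snapped to the nearest node of a fixed grid, keeping a running mean per node, so the number of stored points never exceeds the number of grid nodes. A rough sketch (the spacing h, the domain [0,1] x [0,1] and all variable names are assumptions of mine):

h = 0.1; % assumed grid spacing
xg = 0:h:1; yg = 0:h:1; % assumed domain [0,1] x [0,1]
[XG, YG] = meshgrid(xg, yg);
sumV = zeros(size(XG)); % running sum of values per node
cntV = zeros(size(XG)); % number of samples per node

% for each batch of new samples xs, ys, vs (column vectors):
ix = min(max(round(xs/h) + 1, 1), numel(xg)); % nearest grid index in x
iy = min(max(round(ys/h) + 1, 1), numel(yg)); % nearest grid index in y
lin = sub2ind(size(XG), iy, ix); % linear node indices
sumV(:) = sumV(:) + accumarray(lin, vs, [numel(XG), 1]);
cntV(:) = cntV(:) + accumarray(lin, 1, [numel(XG), 1]);

% rebuild the interpolant from at most numel(XG) nodes -> bounded cost
active = cntV > 0; % only nodes that have received samples
F = TriScatteredInterp(XG(active), YG(active), sumV(active)./cntV(active));

This trades the exact sample locations for a bounded triangulation size, which is exactly the constant execution time I am after.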
Regarding a manual implementation, a sub-question is whether there is a possibility of assigning a "tolerance" interval to the unique function, or similar, which would check whether a point lies in the vicinity of another.
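What I picture is something like emulating a tolerance-aware unique by quantising the coordinates to cells of size tol before the exact row comparison, and averaging everything that falls into the same cell (tol and the variable names are again my own placeholders):

tol = 1e-3; % assumed merge tolerance
P = [xdata, ydata]; % all stored point locations
key = round(P / tol); % quantise to tol-sized cells
[~, ~, ic] = unique(key, 'rows'); % points in the same cell collapse
n = accumarray(ic, 1); % samples per cell
xm = accumarray(ic, P(:,1)) ./ n; % mean location per cell
ym = accumarray(ic, P(:,2)) ./ n;
vm = accumarray(ic, zdata) ./ n; % mean value per cell
F = TriScatteredInterp(xm, ym, vm); % rebuild with the merged points

Of course, two points just on opposite sides of a cell boundary would still not be merged, so this is only an approximation of a true distance-based tolerance.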
I would be glad if someone could help out with some suggestions.
best,
Stephan
