
neutrino4242


Comments and Ratings by neutrino4242
06 Jun 2013 | SLM - Shape Language Modeling (Least squares spline modeling using shape primitives) | Author: John D'Errico

Dear John,

Thanks for your rapid answer. Your reference to Mark Twain gave me new insight into his mathematical abilities :-). You are right, it is a somewhat philosophical question, and of course extrapolation may produce unexpected results. But my motivation is numerical/physical: I am interested in the deconvolution of a given time series whose frequency response (the susceptibility in Fourier space) is known.

It is well known that extending the data set (padding) is mandatory to avoid boundary effects such as ringing; for the analogous problem in image deconvolution (deblurring), see e.g. R. Liu, "Reducing Boundary Artifacts in Image Deconvolution". Numerical Recipes, Chapter 13.1.1, suggests zero padding. This is fine if the data set starts and ends with zeros, but fails in all other cases: zero padding then leads to strong ringing at the beginning and end of the deconvolved time series, independently of the padding length, because it introduces a discontinuity into the data set before deconvolution. A better idea is padding with constants, which avoids the discontinuity. Better still is an extrapolation into the (unphysical) padded region with no jump in the first and second derivatives. If I perform the deconvolution with such an extrapolation, the ringing artifacts disappear. Of course, after deconvolution only the time span without the padding regions at the front and end of the data set has a physical interpretation. I hope this explains my physical/numerical motivation for extrapolating with the first and last spline segments.
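The boundary-discontinuity argument can be sketched numerically. The following Python/NumPy snippet (purely illustrative; the signal and all names are mine, nothing here comes from the SLM toolbox) compares zero padding with constant ("edge") padding for a signal that does not end at zero: zero padding introduces a jump, and that jump shows up as slowly decaying high-frequency energy in the spectrum, which is exactly what rings after a deconvolution in Fourier space.

```python
import numpy as np

# A signal that starts and ends away from zero, like most measured time series.
x = np.linspace(0.0, 1.0, 256)
y = np.sin(2 * np.pi * x) + 2.0

pad = 64
y_zero = np.pad(y, pad, mode="constant")  # zero padding: jump at the boundaries
y_edge = np.pad(y, pad, mode="edge")      # constant padding: value is continuous

# Size of the discontinuity introduced at the left boundary.
jump_zero = abs(y_zero[pad] - y_zero[pad - 1])
jump_edge = abs(y_edge[pad] - y_edge[pad - 1])
print(jump_zero, jump_edge)  # 2.0 0.0

# The jump shows up as slowly decaying high-frequency content in the spectrum;
# dividing by a frequency response then rings at exactly these frequencies.
n = len(y_zero)
hf_zero = np.sum(np.abs(np.fft.rfft(y_zero))[n // 4:])
hf_edge = np.sum(np.abs(np.fft.rfft(y_edge))[n // 4:])
print(hf_zero > hf_edge)  # True
```

Constant padding still leaves a kink (slope discontinuity), which is why a smooth extrapolation matching the first and second derivatives reduces the artifacts further.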

BTW: i) I am German and my surname is John :-).
ii) I use slmengine mostly to obtain the numerical derivative of noisy data. From my point of view, slmengine offers far better control over the necessary smoothing of a noisy data set, e.g. via the concaveup or integral constraints. It is not possible to implement such (physically motivated) features in other algorithms such as higher-order finite-difference methods or Savitzky-Golay filters.
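For contrast, the Savitzky-Golay approach mentioned above can be sketched in a few lines of Python/NumPy (my own minimal implementation of the standard moving least-squares polynomial fit, not code from any package): it smooths and differentiates noisy data, but there is no natural place to impose shape constraints such as concavity or a prescribed integral.

```python
import numpy as np

def savgol_derivative(y, dx, window=51, order=3):
    """First derivative of uniformly sampled noisy data via a moving
    least-squares polynomial fit (the Savitzky-Golay idea).
    Points closer than half a window to either end are returned as NaN."""
    half = window // 2
    t = np.arange(-half, half + 1) * dx           # window abscissa, centred on 0
    dydx = np.full(len(y), np.nan)
    for i in range(half, len(y) - half):
        coeffs = np.polyfit(t, y[i - half:i + half + 1], order)
        dydx[i] = coeffs[-2]                      # linear coefficient = f'(0)
    return dydx

dx = 0.01
x = np.arange(0, 2 * np.pi, dx)
rng = np.random.default_rng(0)
y = np.sin(x) + rng.normal(scale=0.02, size=x.size)

d = savgol_derivative(y, dx)
# The interior estimate tracks cos(x), but nothing in this scheme can
# enforce shape constraints such as concavity or a prescribed integral.
err = np.max(np.abs(d[50:-50] - np.cos(x[50:-50])))
print(err < 0.5)  # True
```

The only tuning knobs here are the window length and polynomial order; any physically motivated constraint would have to be bolted on afterwards, whereas slmengine builds it into the fit.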

05 Jun 2013 | SLM - Shape Language Modeling (Least squares spline modeling using shape primitives) | Author: John D'Errico

Dear John, excellent tool; I use it extensively in my daily work with noisy data. A reference will be given in my next paper.

Maybe I have found a tiny inconsistency regarding extrapolation beyond the data set. Have a look at this sample code:

% aim: find a good extrapolation of noisy data, green line in the final plot
clear all; close all;
x = 0:0.01:1.5*pi;
y = sin(x) + rand(size(x))*0.1;
xinterp = -2:0.01:3*pi;
% MATLAB's interp1 is not designed for noisy data and fails here
yinterp = interp1(x, y, xinterp, 'pchip', 'extrap');
figure(1); plot(xinterp, yinterp);
% first try with slmeval: the extrapolated region is padded with the
% first and last data values only
yslm = slmengine(x, y, 'endconditions', 'estimate', 'plot', 'on');
yslmeval = slmeval(xinterp, yslm);
figure(1); hold on; plot(xinterp, yslmeval, '-r'); ylim([-3 3]);
% second try: this works, evaluating the polynomials of the pp form
% in the extrapolated regions
ypp = slmengine(x, y, 'endconditions', 'estimate', 'plot', 'on', 'result', 'pp');
yppeval = ppval(ypp, xinterp);
figure(1); hold on; plot(xinterp, yppeval, '-g'); ylim([-3 3]);
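The two behaviours contrasted in the snippet can be mimicked outside MATLAB; here is a Python/NumPy sketch (illustrative only, not the SLM API): np.interp clips queries beyond the data range to the boundary value, which is the "padding with the first/last data point" effect, while evaluating a polynomial fitted to the end segment continues the curve smoothly, like ppval applied to the final piece of the pp form.

```python
import numpy as np

x = np.arange(0, 1.5 * np.pi, 0.01)
y = np.sin(x)                     # noise-free here, to isolate the two behaviours

xq = x[-1] + 0.5                  # query point beyond the data range

# Behaviour 1: clip to the last data value -- effectively padding the
# extrapolated region with the boundary data point.
y_clip = np.interp(xq, x, y)
print(y_clip == y[-1])  # True

# Behaviour 2: extrapolate the polynomial fitted to the end segment
# (here a cubic through the last 20 samples), analogous to evaluating
# the final piece of a pp form beyond its breakpoints.
c = np.polyfit(x[-20:], y[-20:], 3)
y_poly = np.polyval(c, xq)
print(abs(y_poly - np.sin(xq)) < 0.1)  # True: the continuation stays close nearby
```

As in the original question, the polynomial continuation is only trustworthy close to the boundary; farther out, a cubic grows without bound.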
