There are potential problems with the technique used to eliminate duplicates, as no consideration is given to the possibility that duplicate time values may be associated with different values of "x".
Hence, comparisons of accuracy or timing between these m-files and alternative routines need to be careful about differences in how duplicate input data points are processed.
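For instance, one conservative way to handle duplicates would be to average the "x" values that share the same time stamp before calling the routine. A minimal sketch only (assuming column vectors t and x; this is not the processing actually used in these m-files):

% Collapse duplicate time stamps by averaging their "x" values.
[tu, ~, idx] = unique(t);            % unique times, and mapping of each t into tu
xu = accumarray(idx, x, [], @mean);  % mean of x over each group of duplicate times
% tu and xu can then be passed to the periodogram routine in place of t and x.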
On another matter, it would also be advantageous to expose the parameter MACC to the user so that a value can be chosen. As it stands, errors from the fast algorithm can be of order 2% (in limited testing) using the default MACC=4 [following Press et al.]. However, the discrepancy can be reduced by a factor of roughly 100 (to an error of order 0.02%) by increasing MACC to 100. While this does reduce the speed of the "fast" algorithm, in limited testing it remained quicker than the "slow" algorithm, and therefore also quicker than the two older routines from Shoelson and Savransky.
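One possible way to expose MACC without breaking existing calls would be an optional input argument that defaults to the current value. This is a sketch only; the name fastlomb_sketch and the argument list are assumptions, not the actual interface of these m-files:

function [P, f] = fastlomb_sketch(x, t, macc)
% Hypothetical signature showing MACC as an optional user input.
if nargin < 3 || isempty(macc)
    macc = 4;                % default extirpolation factor, following Press et al.
end
% ... the body of the fast algorithm would then use macc wherever the
%     constant MACC is currently hard-coded ...
P = [];                      % placeholder outputs so the sketch is a valid m-file
f = [];
end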
For me, on an old PC (Intel Core2 Duo E8500 @ 3.16/3.17 GHz, 8 GB RAM, Win7 64-bit), both of these algorithms have so far (in limited testing) been significantly faster than the alternatives from Brett Shoelson (File ID: #993) and Dmitry Savransky (File ID: #20004).
For example, with a data set of 4403 data points (with no time duplicates):
Shoelson ~ 69 s
Shoelson (with processing for duplicates) ~ 84 s
Savransky ~ 93 s
Saragiotis (slow) ~ 10.5 s
Saragiotis (fast) ~ 0.2 s
Saragiotis (fast, with MACC=100) ~ 2.3 s
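For reference, these were simple wall-clock timings; a minimal sketch of such a harness (the synthetic data and the handles slow_routine and fast_routine are placeholders, not the actual function names):

% Simple wall-clock timing comparison on irregularly sampled test data.
t = sort(100*rand(4403, 1));                 % irregular time stamps
x = sin(2*pi*0.3*t) + 0.5*randn(size(t));    % noisy sinusoid
routines = {@slow_routine, @fast_routine};   % placeholders for the routines compared
for k = 1:numel(routines)
    tic;
    routines{k}(x, t);                       % argument order may differ per routine
    fprintf('%s: %.2f s\n', func2str(routines{k}), toc);
end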