Using a datastore is what comes to mind first - you could use read to obtain a chunk of data at once and then select the required rows efficiently with logical indexing:
r = read(ds);
r_sel = r(r.Time > t1 & r.Time < t2, :);
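When the full collection doesn't fit in memory, the same selection can run chunk by chunk. A sketch, assuming your files match a pattern like `data/*.csv`, have a `Time` column, and `t1`/`t2` are your query bounds (all of those names are assumptions):

```matlab
% Build a datastore over the CSV files and read only the needed columns.
ds = tabularTextDatastore('data/*.csv');        % hypothetical file pattern
ds.SelectedVariableNames = {'Time', 'Value'};   % hypothetical column names

parts = {};
while hasdata(ds)
    r = read(ds);                               % one chunk at a time
    parts{end+1} = r(r.Time > t1 & r.Time < t2, :); %#ok<AGROW>
end
r_sel = vertcat(parts{:});                      % combine the selected rows
```

Restricting SelectedVariableNames before reading can help a lot here, since the datastore then skips parsing the columns you don't need.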
It's not clear which factor is making the datastore operation too slow in your case.
It may be faster to process one whole CSV file at a time, allocating all of its rows into the different time slots before moving on to the next file - essentially a bucket sort on the timestamps.
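That bucket-sort pass could look something like this (a sketch; the bucket edges, file location, and `Time` column are assumptions):

```matlab
% Assign each row of one CSV file to a time bucket, then append it to a
% per-bucket accumulator before moving on to the next file.
edges = t0:hours(1):tEnd;                 % hypothetical bucket boundaries
buckets = cell(numel(edges) - 1, 1);
files = dir('data/*.csv');                % hypothetical location

for k = 1:numel(files)
    T = readtable(fullfile(files(k).folder, files(k).name));
    idx = discretize(T.Time, edges);      % bucket index for each row
    for b = unique(idx(~isnan(idx)))'
        buckets{b} = [buckets{b}; T(idx == b, :)]; %#ok<AGROW>
    end
end
```

After a single pass over each file, every cell of buckets holds all rows for one time slot, collected across the whole file set.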
Finally, you may consider mapreduce. It's more flexible and powerful than a plain datastore, though it has a learning curve. You may end up expressing the same solution you've already tried in mapreduce form and find that it runs faster.
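A minimal mapreduce sketch of the same grouping idea, assuming a `Time` variable and hour-of-day keys (the function names and keying scheme are placeholders, not your actual solution):

```matlab
% Group rows by hour using mapreduce over an existing datastore ds.
outds = mapreduce(ds, @timeMapper, @timeReducer);
result = readall(outds);

function timeMapper(data, ~, intermKVStore)
    % Key each chunk's rows by their hour of day (hypothetical choice).
    keys = hour(data.Time);
    for k = unique(keys)'
        add(intermKVStore, num2str(k), data(keys == k, :));
    end
end

function timeReducer(intermKey, intermValIter, outKVStore)
    % Concatenate all partial tables collected for one key.
    T = table();
    while hasnext(intermValIter)
        T = [T; getnext(intermValIter)]; %#ok<AGROW>
    end
    add(outKVStore, intermKey, T);
end
```

By default the intermediate and output key-value stores live on disk, so the grouped result never has to fit in memory all at once.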
Hope it helps!