I don't know that it'll help your run time unless you can minimize the range of lags over which you run it, but the basic way to overlay the two is...
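In case it helps to see the idea concretely, here's a rough NumPy sketch of aligning two signals by the lag that maximizes their cross-correlation; the signal names and the pulse shape are invented for illustration, not taken from your data:

```python
import numpy as np

# Illustrative data: b is the same pulse as a, delayed by 7 samples
a = np.zeros(100); a[20:40] = 1.0      # reference pulse
b = np.zeros(100); b[27:47] = 1.0      # delayed copy

xc = np.correlate(a, b, mode='full')   # lags run from -(len(b)-1) .. len(a)-1
lag = np.argmax(xc) - (len(b) - 1)     # lag of b relative to a (negative => b is delayed)

b_aligned = np.roll(b, lag)            # shift b back onto a
```

The expensive part is the full-length correlation; restricting the lag range (or pruning the data as below) is where the run-time savings come from.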
ADDENDUM 1: Certainly, if the data as given are representative, one way to speed it up would be to eliminate the samples where both signals are zero, or even at/below the threshold level you've set. Whether you need to keep the pre- and post-trigger lengths around to adjust the overall sample length at the end depends on the application, of course.
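A minimal NumPy sketch of that pruning, assuming a two-column data matrix `d` and threshold `thr` (both names and the sample values are made up here), which also records how much was trimmed off each end in case the lengths need restoring later:

```python
import numpy as np

# Illustrative two-channel data with dead samples at both ends
d = np.array([[0, 0], [0, 0], [3, 0], [5, 4], [0, 6], [0, 0], [0, 0]])
thr = 0

keep = ~np.all(np.abs(d) <= thr, axis=1)   # rows where at least one channel is active
n_pre  = np.argmax(keep)                   # number of leading samples trimmed
n_post = np.argmax(keep[::-1])             # number of trailing samples trimmed
d_active = d[keep]                         # data with dead samples removed
```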
ALTERNATIVE: Again on the presumption that the data are representative, here's an approach without the cross-correlation bottleneck. It does depend upon the rise/fall times being as clean as they are here, and upon being able to find a suitable threshold for each pair expeditiously...
The threshold needs must be above the early noise, in the fast-rise/fall region, and yet must not intersect the lower middle level of the lower-magnitude signal. It must also be a value that isn't in the dataset, to make the subsequent test robust; since it appears that the data are integer-valued, one can assure that by using a fractional value as the threshold.
thr=10.5;                              % fractional threshold -- pick to suit the data
s=sign(d-thr);                         % strictly +/-1, never 0, since thr isn't in the data
id=[find(diff(s(:,1))==2)  find(diff(s(:,2))==2); ...   % rising edges (sign jumps -1 -> +1)
    find(diff(s(:,1))==-2) find(diff(s(:,2))==-2)]      % falling edges (sign jumps +1 -> -1)
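For reference, the same trick sketched in NumPy for a single channel (the data and threshold below are illustrative): with integer data and a fractional threshold, `sign(d - thr)` is strictly ±1, so its first difference is exactly +2 at a rising crossing and -2 at a falling one.

```python
import numpy as np

d = np.array([0, 0, 1, 12, 13, 12, 11, 2, 0, 0])   # made-up integer-valued signal
thr = 10.5                                          # fractional => never equals any sample

s = np.sign(d - thr)                    # -1 below threshold, +1 above, never 0
rise = np.flatnonzero(np.diff(s) ==  2) # index just before the rising crossing
fall = np.flatnonzero(np.diff(s) == -2) # index just before the falling crossing
```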
Quick, but more sensitive to noise by far...salt to suit! :)