This happens because changing the sampling time rescales the frequency axis of the FFT of the signal. You should use the Ts value that was actually used in your experiment to get the correct transfer function. For instance, if you collected data every 10 ms, you should use Ts = 10 ms.
The reason the frequency changes but the magnitude does not is simple. Assume you have a sine wave:
t = 0:1e-3:10;
y = sin(t);
If you take the FFT of the signal 'y' and use the sampling time of 1 ms (the one the signal was created with), you will see a peak of magnitude 1 at a frequency of 1 rad/s. However, if you compute the FFT assuming a sampling time of 10 ms, the peak magnitude will still be 1, but the frequency will read 0.1 rad/s. That is exactly what you observed in the graphs you provided: you increased the sampling time from 1e-4 s to 1e-2 s, a factor of 100, which scaled all your frequencies down by a factor of 100.
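You can check this numerically with a sketch like the one below (a slight variant of the snippet above: I use a 1 Hz sine spanning an exact number of periods so the peak lands cleanly on one FFT bin, and the variable names `Ts_true`, `Ts_wrong`, etc. are my own):

```matlab
Ts_true  = 1e-3;                % Ts the data was actually collected with
Ts_wrong = 1e-2;                % Ts mistakenly assumed, 10x too large
t = 0:Ts_true:10-Ts_true;       % 10 s of data, N = 10000 samples
y = sin(2*pi*t);                % 1 Hz sine, exactly 10 full periods
N = numel(y);
Y = 2*abs(fft(y))/N;            % FFT magnitude scaled so the peak is 1
f_true  = (0:N-1)/(N*Ts_true);  % frequency axis built with the true Ts
f_wrong = (0:N-1)/(N*Ts_wrong); % same bins, axis built with the wrong Ts
[pk, k] = max(Y(1:N/2));
% pk is 1 either way: the magnitude does not depend on Ts.
% f_true(k) gives 1 Hz, f_wrong(k) gives 0.1 Hz:
% the wrong Ts only rescales the frequency axis, not the spectrum.
```

The FFT itself never sees Ts; it only sees the samples. Ts enters only when you convert bin indices into physical frequencies, which is why a wrong Ts shifts every peak by the same factor while leaving the magnitudes untouched.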