"Is it better to simply call the function as kstest(x) or correct the data so that its standard deviation and mean is 1 and 0 respectively"
The one-sample Kolmogorov-Smirnov test (kstest) tests the null hypothesis that the data come from a standard normal distribution (mean 0, std 1). If you standardize your data so that it does have a mean of 0 and a std of 1, what's the point of testing those two properties? Worse, because the mean and std were estimated from the same data, the test's critical values are no longer valid; that estimated-parameter situation is exactly what the Lilliefors test corrects for.
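To make this concrete, here is a small sketch (the data and variable names are made up for illustration). Standardizing non-standard normal data and calling kstest does flip the result, but the p-value is anti-conservative because mean(x) and std(x) came from the data itself; lillietest is the test built for that case:

```matlab
x = 2*randn(1,10000) + 5;       % normal data, but not *standard* normal
z = (x - mean(x)) / std(x);     % standardized: mean 0, std 1

kstest(x)       % rejected: x is not standard normal
kstest(z)       % not rejected, but the critical values are now invalid
lillietest(x)   % the correct test for "normal with unknown parameters"
```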
Null hypotheses (from the documentation)
One-sample Kolmogorov-Smirnov test: the data in vector x comes from a standard normal distribution (mean 0, std 1).
Lilliefors test: the data in vector x comes from a distribution in the normal family.
Anderson-Darling test: the data in vector x is from a population with a normal distribution.
If the null hypothesis is rejected (a returned value of 1 for any of the three tests), the data do not come from the corresponding distribution at the default 5% significance level.
Here's a demonstration showing the difference between kstest and the other two.
x0 = randn(1,10000);      % standard normal: mean 0, std 1
x1 = x0*2 + 5;            % scaled and shifted: mean 5, std 2
histogram(x0); hold on    % plotted in blue (first default color)
histogram(x1)             % plotted in reddish orange
Notice how this creates two normal distributions. The blue distribution has a mean of 0 and std of 1 while the reddish distribution has a mean of 5 and std of 2 (approximately).
ks0 = kstest(x0)    % 0: not rejected as standard normal
ks1 = kstest(x1)    % 1: rejected
lt0 = lillietest(x0)    % 0: not rejected as normal
lt1 = lillietest(x1)    % 0: not rejected as normal
ad0 = adtest(x0)    % 0: not rejected as normal
ad1 = adtest(x1)    % 0: not rejected as normal
As you can see, the blue distribution is accepted as a standard normal distribution, and rightfully so, since it has a mean of 0 and std of 1 (approximately), while the reddish distribution is rejected. However, both distributions are normal, as indicated by lillietest() and adtest(), whose null hypotheses only require normality in general, not mean 0 and std 1.
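If you actually want a Kolmogorov-Smirnov test against a normal distribution with a particular mean and std, you can pass a distribution object through kstest's 'CDF' name-value argument instead of standardizing the data. A sketch, assuming the parameters mu = 5 and sigma = 2 are known a priori rather than estimated from the sample (if they were estimated, the Lilliefors caveat above applies again):

```matlab
x1 = 2*randn(1,10000) + 5;                     % mean 5, std 2
pd = makedist('Normal', 'mu', 5, 'sigma', 2);  % hypothesized distribution
kstest(x1, 'CDF', pd)                          % 0: consistent with N(5, 2)
```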