Yes, this has been known for a long time. As Euler himself already knew, and as you learn in the first year of a numerical mathematics course, the local truncation error of the Euler method is proportional to h^2, and the accumulated (global) error is proportional to h. So reducing the step size does decrease the truncation error. But it also increases the number of function evaluations and the accumulated rounding error. For many real-world problems these latter effects dominate both the run time and the total error of the computed trajectory. For example, the ODEs that simulate the deformation of a car during a crash can involve thousands or millions of variables, and a single function evaluation can take seconds or minutes. Reducing the step size by a factor of 10,000 is then a bad idea, and on top of that it can make the accumulated rounding error explode. This is why the plain Euler method is not used for serious scientific applications; instead one uses higher-order methods with smart step size control that aim for the minimal total error.
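To make the trade-off concrete, here is a minimal sketch of forward Euler in Python. The test problem y' = -y with exact solution exp(-t) is chosen purely for illustration; the point is that the global error shrinks only linearly with h, so each extra digit of accuracy costs roughly ten times the work:

```python
import math

def euler(f, t0, y0, t_end, n_steps):
    """Forward Euler: y_{k+1} = y_k + h * f(t_k, y_k)."""
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# Toy problem y' = -y, y(0) = 1, exact solution y(t) = exp(-t).
f = lambda t, y: -y
exact = math.exp(-1.0)

err_coarse = abs(euler(f, 0.0, 1.0, 1.0, 10) - exact)    # h = 0.1
err_fine   = abs(euler(f, 0.0, 1.0, 1.0, 100) - exact)   # h = 0.01

# Global error of Euler is O(h): shrinking h by a factor of 10
# shrinks the error by roughly a factor of 10, no faster.
print(err_coarse / err_fine)
```

With a fourth-order method the same refinement would shrink the error by roughly 10^4, which is why higher-order adaptive methods reach a given accuracy with far fewer (expensive) function evaluations, long before rounding error becomes an issue.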