Recently, I have been doing a lot of image analysis, such as measuring the distance between two points in an image and thresholding the image to pick out specific points.
Being a still-learning scientist, I was wondering if anyone had tried to measure the measurement error and, more importantly, the propagation of error in image analysis.
Has anyone tried to evaluate error in images beyond a simple standard deviation?
Has anyone in the history of computer science ever tried to evaluate the measurement errors associated with cropping an image, rotating an image, or applying a particular transformation?
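To make the distance-measurement case concrete, here is a minimal sketch (my own example, using NumPy, with an assumed per-coordinate uncertainty of half a pixel) of standard first-order error propagation through a Euclidean distance between two image points:

```python
import numpy as np

def distance_with_uncertainty(p1, p2, sigma=0.5):
    """Euclidean distance between two 2-D points, plus a first-order
    propagated uncertainty assuming each of the four coordinates has an
    independent uncertainty of `sigma` pixels."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    d = np.linalg.norm(p2 - p1)
    # The partial derivatives of d with respect to the four coordinates
    # have squared magnitudes that sum to exactly 2, so the propagated
    # standard deviation is sigma * sqrt(2), independent of d.
    sigma_d = sigma * np.sqrt(2.0)
    return d, sigma_d

d, sigma_d = distance_with_uncertainty((0, 0), (3, 4))
print(d, "+/-", sigma_d)
```

Note that under this model the absolute uncertainty does not grow with the distance, so the *relative* error shrinks for points that are farther apart.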
For example, most people don't realize that anything involving a Fourier transform (or any other mathematical transform) actually introduces propagated error into the measurement, magnifying the width of the error bars on the numbers you measured.
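It's worth separating the numerical side of this claim from the theoretical one. A quick sketch (using NumPy's FFT; a double-precision round trip through the Fourier domain) shows that the rounding error a discrete transform itself introduces is down at machine precision:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1024)

# Forward FFT followed by inverse FFT in double precision.
roundtrip = np.fft.ifft(np.fft.fft(x)).real

# The DFT/IDFT pair is exact in theory; any discrepancy here is purely
# floating-point rounding, typically a tiny multiple of machine epsilon.
err = np.max(np.abs(roundtrip - x))
print(err)
```

So any substantial error magnification comes from what you *do* in the Fourier domain (filtering, windowing, truncation), not from the transform pair itself.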
In fact, it was through Dirac's analysis of basic quantum mechanics that we would eventually realize that the famous Heisenberg uncertainty principle is not just a statement of the joint statistical uncertainty of two non-commuting observables but also a statement about the spreading that a Fourier transform imposes in quantum mechanics.
So the fact that measurement error exists in any serious information analysis is not a trivial matter. The question is: do we ignore it?
This isn't really related to MATLAB, but here goes. All image analysis has A LOT of error from digitization. The rest of the error is probably trivial by comparison :)
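To put a number on the digitization point, here's a small sketch (my own example, assuming 8-bit quantization of a continuous intensity in [0, 1]): the quantization error is bounded by half a gray level, and under the usual uniform-error model its RMS is q/sqrt(12) for step size q:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.uniform(0.0, 1.0, size=10_000)  # "true" continuous intensities

# Digitize to 8 bits: 256 gray levels, step size q = 1/255.
levels = 255
digitized = np.round(signal * levels) / levels

err = digitized - signal
q = 1.0 / levels

# Worst-case error is half a gray level; RMS error is about q / sqrt(12).
print(np.abs(err).max(), err.std(), q / np.sqrt(12))
```

Every downstream measurement (distances, thresholds, areas) inherits at least this floor of uncertainty before any transform is ever applied.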
To answer your specific questions, yes, a lot of research has been done on it. Look through the compendex libraries and you'll find anything you want probably (and more than likely in many languages!)
A personal story about it: I just finished the first draft of my MS thesis on Wednesday. In the literature review and solution I discussed three ways to measure the surface area of digitized three-dimensional binary objects: a digital scheme using the sum of voxel faces, a marching cubes algorithm, and summing the area of an isosurface generated by MATLAB's isosurface function. I then promptly said that any surface area measurement I make is pretty much BS, as the shapes' surfaces are cracked and thus fractal in nature (cf. measuring the coast of England). Any measurement we make is prone to error and to being wrong - but it is a measurement, and it is reproducible and effective for calculating some properties.
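For readers curious about the first of those three schemes, here is one way the voxel-face count can be implemented (a NumPy sketch of the general idea, not the thesis code): count every face between an occupied voxel and an empty or out-of-bounds neighbor along each axis.

```python
import numpy as np

def voxel_surface_area(vol):
    """Surface area of a 3-D binary volume as the number of exposed unit
    voxel faces: faces between an occupied voxel and an empty (or
    out-of-bounds) neighbor, counted along all three axes."""
    vol = np.asarray(vol, dtype=bool)
    # Pad with empty voxels so faces on the volume boundary are counted.
    padded = np.pad(vol, 1, constant_values=False)
    area = 0
    for axis in range(3):
        # A nonzero difference along an axis marks an occupied/empty face.
        diff = np.diff(padded.astype(np.int8), axis=axis)
        area += np.count_nonzero(diff)
    return area

# A single voxel exposes 6 faces; a 2x2x2 cube exposes 24.
print(voxel_surface_area(np.ones((2, 2, 2), dtype=bool)))
```

This scheme is exact for axis-aligned boxes but systematically overestimates smooth surfaces (for a sphere, the count approaches 1.5 times the true area as resolution increases), which is one reason marching cubes is often preferred.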
Now: what is the biggest error?
Anyway, that's just a thought for you. Losing a few insignificant bits while converting to the Fourier domain doesn't really bother me. (Someone has to fill in for Walter since it appears he's on vacation :) )