Asked by michael
on 8 Jul 2011

Recently, I have been doing a lot of image analysis, such as measuring the distance between two points in an image and thresholding the image to pick out specific points.

Being a still-learning scientist, I am wondering if anyone has tried to quantify the measurement error and, more importantly, the propagation of error in image analysis.
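To make the question concrete: the distance case can be handled with standard first-order (delta-method) error propagation. The sketch below is illustrative Python/NumPy (the function name and the 0.5-pixel uncertainty are my own assumptions, not from any particular toolbox):

```python
import numpy as np

def distance_uncertainty(p1, p2, sigma_px):
    """First-order (delta-method) propagation of an independent,
    per-coordinate pixel uncertainty into the Euclidean distance."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = np.linalg.norm(p2 - p1)
    # Partial derivatives of d with respect to each of the 4 coordinates
    grads = np.concatenate([(p1 - p2) / d, (p2 - p1) / d])
    sigma_d = sigma_px * np.linalg.norm(grads)
    return d, sigma_d

d, s = distance_uncertainty((10, 20), (40, 60), sigma_px=0.5)
```

With equal, independent errors on each coordinate this collapses to sigma_d = sigma * sqrt(2), independent of the distance itself.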

Has anyone tried to evaluate error in images beyond simple standard deviation?

Has anyone in the history of computer science ever tried to evaluate the measurement errors associated with cropping an image, rotating an image, or applying a particular transformation?

*_________________________*

For example, most people don't realize that anything involving a Fourier transform, or other mathematical transforms, actually introduces propagated error into the measurement, magnifying the width of the error in the numbers you measured.
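Whether that numerical error matters depends on its scale. A quick sanity check (Python/NumPy here purely for illustration; synthetic data standing in for real measurements) compares the error introduced by an FFT round trip against the error introduced merely by storing the same data in single precision:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)

# Error introduced purely by a forward+inverse FFT round trip (float64)
roundtrip_err = np.max(np.abs(np.fft.ifft(np.fft.fft(x)).real - x))

# Error introduced by merely storing the same data as float32
quantize_err = np.max(np.abs(x - x.astype(np.float32).astype(np.float64)))
```

On typical data the round-trip error sits near machine epsilon, several orders of magnitude below the error of a single float32 cast.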

In fact, it was Dirac's analysis of basic quantum mechanics that eventually made clear that the famous Heisenberg uncertainty principle is not just a statement about the joint statistical uncertainty of two conjugate observables, but also a statement about the error inherent in relating them through a Fourier transform.
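Stated precisely (this is a standard result, not specific to this thread): for any Fourier-transform pair, the spreads of a function and of its transform obey

```latex
\sigma_x \,\sigma_k \;\ge\; \tfrac{1}{2},
\qquad\text{and, with } p = \hbar k,\qquad
\sigma_x \,\sigma_p \;\ge\; \tfrac{\hbar}{2},
```

with equality only for Gaussians. Heisenberg's principle is the $p = \hbar k$ special case of this purely mathematical bandwidth limit.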

So the fact that measurement errors exist in any serious information analysis is not a trivial question. The question is: do we ignore them?


Answer by Sean de Wolski
on 8 Jul 2011

Accepted answer

This isn't really related to MATLAB, but here goes. All image analysis has **A LOT** of error in digitization. The rest of the error is probably trivial comparatively :)

To answer your specific questions: yes, a lot of research has been done on this. Look through the *Compendex* libraries and you'll probably find anything you want (and more than likely in many languages!).

A personal story about it:
I just finished the first draft of my MS thesis on Wednesday. In the literature review and solution I discussed three ways to measure the surface area of digitized three-dimensional binary objects: a digital scheme using the sum of voxel faces, a marching cubes algorithm, and summing the area of an isosurface generated by MATLAB's `isosurface` function. I then promptly said that any surface area measurement I make is pretty much BS, since the shapes' surfaces are cracks and thus fractal in nature (cf. measuring the coastline of Britain). Any measurement we make is completely prone to error and to being wrong - but it is a measurement, it is reproducible, and it is effective for calculating some properties.
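The first of those three schemes - summing exposed voxel faces - is simple enough to sketch in a few lines. This is an illustrative Python/NumPy version (the function name is mine, not from the thesis), not the author's actual code:

```python
import numpy as np

def voxel_face_area(vol, voxel_size=1.0):
    """Surface area of a 3-D binary volume, estimated as the number of
    exposed voxel faces times the area of one face."""
    vol = np.asarray(vol, bool)
    padded = np.pad(vol, 1)  # so boundary voxels count their outer faces
    faces = 0
    for axis in range(3):
        diff = np.diff(padded.astype(np.int8), axis=axis)
        faces += np.count_nonzero(diff)  # each 0->1 or 1->0 step is a face
    return faces * voxel_size ** 2

cube = np.ones((2, 2, 2), bool)
area = voxel_face_area(cube)  # 6 faces x 4 unit squares each = 24
```

For axis-aligned boxes like this the count happens to be exact; for curved or tilted surfaces it systematically overestimates, which is the point made in the bullet below.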

Now: what is the biggest error?

- The fact that I didn't use a marching cubes algorithm, which makes a better approximation to the surface than a voxel face count (shown in multiple publications to be an overestimate for Euclidean shapes)?
- The fact that my binary objects were converted to binary (2 levels) from uint8 (256 levels) from int16 (65,536 levels) from float32 (a great many levels) from real life (practically infinite)?
- The fact that the images were reconstructed using an inverse Radon transform, which involves a layer of interpolation that, for our images, smears edges by up to just less than one pixel?
- The fact that our voxel resolution is 6 microns, so anything finer is digitized by some type of interpolation?
- The fact that there are non-linearities in the x-ray detector which produce ring and beam artifacts that either represent false data or require a second layer of interpolation to remove?
- The fact that there are fluctuations in the synchrotron energy that affect the number of particles being shot through our specimen?
- Anything beyond that, such as x-ray scattering, is past my knowledge of physics, but I'm sure there are more...
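Of the errors listed above, the bit-depth cascade is the easiest to put numbers on. A hedged sketch (Python/NumPy, with synthetic data standing in for a real image):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64)).astype(np.float32)  # stand-in for a float32 image

img_u8 = np.round(img * 255).astype(np.uint8)  # 256 gray levels
img_bin = img_u8 >= 128                        # 2 levels (threshold)

# Worst-case per-pixel error introduced by the uint8 quantization step alone
q_err = np.max(np.abs(img - img_u8 / 255.0))
```

Rounding to uint8 bounds the per-pixel error at about 1/510 of full scale; the binary threshold then discards everything but one bit, dwarfing whatever error the earlier floating-point arithmetic contributed.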

Anyway, that's just a thought for you. Losing a few insignificant bytes while converting to the Fourier domain doesn't really bother me. (Someone has to fill in for Walter since it appears he's on vacation :) )
