::Introduction::
People can see depth because they view the 3D world from two slightly different angles (one from each eye). The brain then judges how close things are by measuring how far apart they appear in the two images. The idea here is to do the same thing with a computer. The algorithm is based on “Segment-Based Stereo Matching Using Belief Propagation and a Self-Adapting Dissimilarity Measure” by Klaus, Sormann, and Karner.
[ http://www.vrvis.at/publications/pdfs/VRVis_2006_05_22_16_20_00.pdf ]
(Note that the algorithm here is *inspired* by the algorithm of Klaus et al. Theirs is much more complete.)
::Getting Pixel Disparity::
The first step is to get an estimate of the disparity at each pixel in the image. A reference image is chosen, and the other image slides across it. As the two images 'slide' over one another, we take the absolute difference of their intensity values. Additionally, we compare gradient information (spatial derivatives). For each pixel, we record the offset at which the combined difference is smallest, and call that offset the disparity.
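To make this concrete, below is a minimal MATLAB sketch of the per-pixel step. The function name, the even weighting of the intensity and gradient terms, and the use of circshift to do the sliding are my assumptions for illustration; the demo code may organize things differently.

function D = pixel_disparity(left, right, maxDisp)
% left, right: grayscale images as doubles; left is the reference.
% maxDisp: largest offset (in pixels) to try.
gxL = gradient(left);                     % horizontal spatial derivative
gxR = gradient(right);
bestCost = inf(size(left));               % lowest difference seen so far
D = zeros(size(left));                    % offset at which it occurred
for d = 0:maxDisp
    shifted  = circshift(right, [0 d]);   % slide the other image across
    shiftedG = circshift(gxR,   [0 d]);
    % combined difference: intensity term plus gradient term
    % (the 50/50 weighting here is an assumption)
    c = 0.5*abs(left - shifted) + 0.5*abs(gxL - shiftedG);
    better = c < bestCost;                % pixels where this offset wins
    bestCost(better) = c(better);
    D(better) = d;
end

In practice, averaging the difference over a small window around each pixel gives a much less noisy estimate than single-pixel differences, and circshift's wrap-around at the image border should be masked out.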
::Filtering the Pixel Disparity::
Next, we combine image information with the pixel disparities to clean up the disparity map. First, we segment the reference image using a technique called “Mean Shift Segmentation.” Then, for each segment, we look at the associated pixel disparities. In my simple implementation, each segment is assigned the median disparity of all the pixels within it; the median is robust to the noisy outliers left over from the per-pixel matching step. This gives a nice final result.
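Here is a minimal MATLAB sketch of the filtering step. It assumes the segmentation has already produced an integer label matrix segLabels, the same size as the disparity map, from some mean shift implementation (not shown here); the variable and function names are mine, not necessarily those in the demo code.

function D = filter_disparity(D, segLabels)
% D: per-pixel disparity map; segLabels: segment label for each pixel.
for s = unique(segLabels)'        % visit each segment once
    idx = (segLabels == s);       % pixels belonging to this segment
    D(idx) = median(D(idx));      % replace them with the segment median
end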
::More Information::
Download, unzip, and run >>demo to see the code in action.
For more information, videos, and example images, check here.
[ http://www.shawnlankton.com/2007/12/3d-vision-with-stereo-disparity/ ]