You can specify locations in images using various coordinate systems. Coordinate systems are used to place elements in relation to each other. Coordinates in pixel and spatial coordinate systems relate to locations in an image. Coordinates in 3-D coordinate systems describe the 3-D positioning and origin of the system.
Pixel coordinates enable you to specify locations in images. In the pixel coordinate system, the image is treated as a grid of discrete elements, ordered from top to bottom and left to right.
For pixel coordinates, the row index, r, increases downward, while the column index, c, increases to the right. Pixel coordinates are integer values and range from 1 to the length of the row or column. The pixel coordinates used in Computer Vision Toolbox™ software are one-based, consistent with the pixel coordinates used by Image Processing Toolbox™ and MATLAB®. For more information on the pixel coordinate system, see Pixel Indices (Image Processing Toolbox).
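The row-major, one-based indexing scheme can be sketched as follows. This is an illustrative NumPy example, not toolbox code; NumPy arrays are zero-based, so the one-based pixel coordinates are shifted by one when indexing:

```python
import numpy as np

# Illustrative sketch (not toolbox code): a 4x6 grayscale "image" as a grid
# of discrete elements, rows running top to bottom, columns left to right.
img = np.arange(24).reshape(4, 6)

# One-based pixel coordinates (r, c), as used by MATLAB and the toolboxes.
r, c = 3, 4

# NumPy is zero-based, so subtract 1 from each index.
value = img[r - 1, c - 1]
print(value)  # element in row 3, column 4
```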
Spatial coordinates enable you to specify a location in an image with greater granularity than pixel coordinates. For example, in the pixel coordinate system, a pixel is treated as a discrete unit, uniquely identified by an integer row and column pair, such as (3,4). In the spatial coordinate system, locations in an image are represented in terms of partial pixels, such as (3.3, 4.7).
For more information on the spatial coordinate system, see Spatial Coordinates (Image Processing Toolbox).
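The relationship between the two systems can be sketched as follows. This is an illustrative Python example assuming the default spatial system, in which x runs along columns, y runs along rows, and the center of the pixel in row r, column c lies at spatial location (x, y) = (c, r):

```python
import math

def pixel_center_to_spatial(r, c):
    """Spatial (x, y) location of the center of pixel (r, c).

    In the default spatial system, x runs along columns and y along rows,
    so the center of the pixel in row r, column c is at (x, y) = (c, r).
    """
    return float(c), float(r)

def spatial_to_pixel(x, y):
    """Pixel (r, c) containing spatial location (x, y).

    Pixel (r, c) covers x in [c - 0.5, c + 0.5) and y in [r - 0.5, r + 0.5).
    """
    return int(math.floor(y + 0.5)), int(math.floor(x + 0.5))

print(pixel_center_to_spatial(3, 4))  # (4.0, 3.0)
print(spatial_to_pixel(3.3, 4.7))     # the spatial point lies inside pixel (5, 3)
```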
When you reconstruct a 3-D scene, you can define the resulting 3-D points in one of two coordinate systems. In a camera-based coordinate system, the points are defined relative to the center of the camera. In a calibration pattern-based coordinate system, the points are defined relative to a point in the scene.
Computer Vision Toolbox functions use the right-handed world coordinate system. In this system, the x-axis points to the right, the y-axis points down, and the z-axis points away from the camera. To display 3-D points, use the pcshow function.
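The axis convention can be illustrated with a minimal pinhole-projection sketch in Python. The intrinsic parameters and the 3-D point below are made-up assumptions for illustration, not toolbox defaults:

```python
import numpy as np

# Illustrative sketch of the right-handed camera coordinate convention:
# x to the right, y down, z pointing away from the camera (into the scene).
# Hypothetical pinhole intrinsics: focal length f, principal point (cx, cy).
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])

# A 3-D point one unit to the right of and two units in front of the camera.
X = np.array([1.0, 0.0, 2.0])

# Perspective projection: divide by z, so farther points project closer
# to the principal point.
uvw = K @ X
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
print(u, v)  # the point lands to the right of the image center (u > cx)
```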
Points represented in a camera-based coordinate system are described with the origin located at the optical center of the camera.
In a stereo system, the origin is located at the optical center of Camera 1.
When you reconstruct a 3-D scene using a calibrated stereo camera, the triangulate function returns 3-D points with the origin at the optical center of Camera 1. When you use Kinect® images, the pcfromkinect function returns 3-D points with the origin at the center of the RGB camera.
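Conceptually, triangulation recovers a 3-D point, expressed relative to Camera 1, from its projections in two calibrated views. The following is a minimal linear (DLT) triangulation sketch in Python with hypothetical camera matrices; it illustrates the idea, not the toolbox's triangulate implementation:

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices; uv1, uv2: image coordinates.
    Solves A X = 0 for the homogeneous 3-D point via SVD.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: Camera 1 at the origin (the coordinate-system
# origin in a stereo pair), Camera 2 shifted 0.5 units along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Project a known 3-D point into both views, then recover it.
X_true = np.array([0.2, -0.1, 3.0])
p1 = P1 @ np.append(X_true, 1.0)
p2 = P2 @ np.append(X_true, 1.0)
uv1 = p1[:2] / p1[2]
uv2 = p2[:2] / p2[2]
print(triangulate_point(P1, P2, uv1, uv2))  # recovers [0.2, -0.1, 3.0]
```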
Points represented in a calibration pattern-based coordinate system are described with the origin located at the (0,0) location of the calibration pattern.
When you reconstruct a 3-D scene from multiple views containing a calibration pattern, the resulting 3-D points are defined in the pattern-based coordinate system. The Structure From Motion From Two Views example shows how to reconstruct a 3-D scene from a pair of 2-D images containing a checkerboard pattern.
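The mapping between the two coordinate systems amounts to a rigid transform. Given a pattern-to-camera rotation R and translation t, a camera-frame point maps into pattern coordinates by inverting the pose. The pose values below are assumptions for illustration, not toolbox output:

```python
import numpy as np

# Hypothetical extrinsics: pattern-to-camera rotation R and translation t,
# so that X_cam = R @ X_pattern + t. Numbers are made up for illustration.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.1, -0.2, 1.5])

# The pattern origin, (0, 0, 0) in pattern coordinates, seen from the camera:
origin_cam = R @ np.zeros(3) + t

def cam_to_pattern(X_cam):
    """Map a camera-frame point into pattern-based coordinates (inverse pose)."""
    return R.T @ (X_cam - t)

print(cam_to_pattern(origin_cam))  # recovers the pattern origin, (0, 0, 0)
```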