
# cameraParameters

Object for storing camera parameters

## Description

The `cameraParameters` object stores the intrinsic, extrinsic, and lens distortion parameters of a camera.

## Creation

### Description

`cameraParams = cameraParameters` returns an object that contains the intrinsic, extrinsic, and lens distortion parameters of a camera.


`cameraParams = cameraParameters(Name,Value)` configures the camera parameters by setting the `cameraParams` object properties, specified as one or more `Name,Value` pair arguments. Unspecified properties use default values.

`cameraParams = cameraParameters(paramStruct)` returns a `cameraParameters` object containing the parameters specified by the `paramStruct` input. `paramStruct` is returned by the `toStruct` function.

## Properties


Intrinsic camera parameters:

Projection matrix, specified as a 3-by-3 matrix. The default is the 3-by-3 identity matrix. The object uses the following format for the matrix:

$$\begin{bmatrix} f_x & 0 & 0 \\ s & f_y & 0 \\ c_x & c_y & 1 \end{bmatrix}$$

The coordinates [cx, cy] represent the optical center (the principal point), in pixels. When the x and y axes are exactly perpendicular, the skew parameter, s, equals `0`. The focal lengths in pixels are:

`fx = F*sx` and `fy = F*sy`

where F is the focal length in world units, typically expressed in millimeters, and [sx, sy] are the number of pixels per world unit in the x and y directions, respectively. Thus, fx and fy are expressed in pixels.

Optical center, specified as a 2-element vector [cx,cy] in pixels. The vector contains the coordinates of the optical center of the camera.

Focal length in x and y, specified as a 2-element vector [fx, fy].

`fx = F*sx` and `fy = F*sy`

where F is the focal length in world units, typically in millimeters, and [sx, sy] are the number of pixels per world unit in the x and y directions, respectively. Thus, fx and fy are in pixels.
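The relationship above can be sketched numerically. The examples on this page use MATLAB; the following NumPy sketch, using hypothetical values for F, sx, sy, cx, cy, and the skew s, only illustrates how the focal lengths in pixels and the intrinsic matrix are assembled:

```python
import numpy as np

# Hypothetical camera values, chosen only for illustration.
F = 4.0                 # focal length in world units (mm)
sx, sy = 180.0, 178.0   # pixels per world unit in x and y (pixels/mm)
cx, cy = 320.0, 240.0   # optical center (pixels)
s = 0.0                 # skew: 0 when the x and y axes are perpendicular

fx = F * sx             # focal length in pixels, x direction
fy = F * sy             # focal length in pixels, y direction

# Intrinsic matrix in the row-vector convention shown above.
K = np.array([[fx, 0.0, 0.0],
              [s,  fy,  0.0],
              [cx, cy,  1.0]])
```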

Camera axes skew, specified as a scalar. If the x and the y axes are exactly perpendicular, then set the skew to `0`.

Camera lens distortion:

Radial distortion coefficients, specified as either a 2- or 3-element vector. When you specify a 2-element vector, the object sets the third element to `0`. Radial distortion occurs when light rays bend more near the edges of a lens than they do at its optical center. The smaller the lens, the greater the distortion. The camera parameters object calculates the radially distorted location of a point. You can denote the distorted points as (xdistorted, ydistorted), as follows:

`xdistorted = x * (1 + k1*r^2 + k2*r^4 + k3*r^6)`

`ydistorted = y * (1 + k1*r^2 + k2*r^4 + k3*r^6)`

where:

- x, y = undistorted pixel locations
- k1, k2, and k3 = radial distortion coefficients of the lens
- `r^2 = x^2 + y^2`

Typically, two coefficients are sufficient. For severe distortion, you can include k3. The undistorted pixel locations appear in normalized image coordinates, with the origin at the optical center. The coordinates are expressed in world units.
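Applying the radial model is a direct evaluation of the formula above. This Python sketch uses a hypothetical normalized point and hypothetical coefficients (the examples elsewhere on this page are MATLAB):

```python
# Hypothetical normalized, undistorted point (origin at the optical center)
# and hypothetical radial distortion coefficients, for illustration only.
x, y = 0.1, -0.05
k1, k2, k3 = -0.3361, 0.0921, 0.0

r2 = x**2 + y**2                            # r^2 = x^2 + y^2
scale = 1 + k1*r2 + k2*r2**2 + k3*r2**3     # 1 + k1*r^2 + k2*r^4 + k3*r^6
x_distorted = x * scale
y_distorted = y * scale
```

With negative k1, as here, the point moves toward the optical center (barrel distortion).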

Tangential distortion coefficients, specified as a 2-element vector. Tangential distortion occurs when the lens and the image plane are not parallel. The camera parameters object calculates the tangentially distorted location of a point. You can denote the distorted points as (xdistorted, ydistorted), as follows:

`xdistorted = x + (2*p1*x*y + p2*(r^2 + 2*x^2))`

`ydistorted = y + (p1*(r^2 + 2*y^2) + 2*p2*x*y)`

where:

- x, y = undistorted pixel locations
- p1 and p2 = tangential distortion coefficients of the lens
- `r^2 = x^2 + y^2`

The undistorted pixel locations appear in normalized image coordinates, with the origin at the optical center. The coordinates are expressed in world units.
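The tangential model can be evaluated the same way. Again the point and the coefficients p1, p2 are hypothetical, and Python stands in for the MATLAB used elsewhere on this page:

```python
# Hypothetical normalized, undistorted point and hypothetical tangential
# distortion coefficients, for illustration only.
x, y = 0.1, -0.05
p1, p2 = 0.001, -0.0005

r2 = x**2 + y**2
x_distorted = x + (2*p1*x*y + p2*(r2 + 2*x**2))
y_distorted = y + (p1*(r2 + 2*y**2) + 2*p2*x*y)
```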

Extrinsic camera parameters:

3-D rotation matrices, specified as a 3-by-3-by-P array, where P is the number of pattern images. Each 3-by-3 matrix represents the same 3-D rotation as the corresponding rotation vector.

The following equation provides the transformation that relates a world coordinate in the checkerboard’s frame [X Y Z] and the corresponding image point [x y]:

$$s\begin{bmatrix} x & y & 1 \end{bmatrix} = \begin{bmatrix} X & Y & Z & 1 \end{bmatrix}\begin{bmatrix} R \\ t \end{bmatrix}K$$

where:

- R is the 3-D rotation matrix.
- t is the translation vector.
- K is the `IntrinsicMatrix`.
- s is a scalar.

This equation does not take distortion into consideration. Distortion is removed by the `undistortImage` function.
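Under this row-vector convention (post-multiplication by [R; t] and then K), the projection is two matrix products followed by a division by the scalar s. The values of R, t, and K below are hypothetical, and NumPy stands in for the MATLAB used elsewhere on this page:

```python
import numpy as np

# Hypothetical extrinsics and intrinsics, for illustration only.
R = np.eye(3)                        # 3-D rotation matrix (no rotation)
t = np.array([0.0, 0.0, 10.0])       # translation vector (world units)
K = np.array([[700.0,   0.0, 0.0],
              [  0.0, 700.0, 0.0],
              [320.0, 240.0, 1.0]])  # intrinsic matrix, row-vector convention

X = np.array([1.0, 2.0, 0.0, 1.0])   # world point [X Y Z 1]; Z = 0 on the pattern

# s*[x y 1] = [X Y Z 1] * [R; t] * K
Rt = np.vstack([R, t])               # stack R (3-by-3) over t: 4-by-3
p = X @ Rt @ K                       # unnormalized [s*x, s*y, s]
s = p[2]
x, y = p[0] / s, p[1] / s            # image point in pixels
```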

3-D rotation vectors, specified as an M-by-3 matrix containing M rotation vectors. Each vector describes the 3-D rotation of the camera’s image plane relative to the corresponding calibration pattern. The vector specifies the 3-D axis about which the camera is rotated, and its magnitude is the rotation angle in radians. The corresponding 3-D rotation matrices are given by the `RotationMatrices` property.

Camera translations, specified as an M-by-3 matrix. This matrix contains translation vectors for the M images that contain the calibration pattern used to estimate the calibration parameters. Each row of the matrix contains a vector that describes the translation of the camera relative to the corresponding pattern, expressed in world units.

These translation vectors, together with the rotation matrices, appear in the transformation given above that relates world coordinates [X Y Z] in the checkerboard’s frame to image points [x y]. That equation does not take distortion into consideration; distortion is removed by the `undistortImage` function.

You must set the `RotationVectors` and `TranslationVectors` properties in the constructor to ensure that the number of rotation vectors equals the number of translation vectors. Setting only one property but not the other results in an error.

Estimated camera parameter accuracy:

Average Euclidean distance between reprojected and detected points, specified as a numeric value in pixels.

Estimated camera parameter accuracy, specified as an M-by-2-by-P array of [x y] coordinates. The [x y] coordinates represent the translation in x and y between the reprojected pattern key points and the detected pattern key points. The values of this property represent the accuracy of the estimated camera parameters. P is the number of pattern images used to estimate the camera parameters, and M is the number of key points in each image.

World points reprojected onto the calibration images, specified as an M-by-2-by-P array of [x y] coordinates. P is the number of pattern images, and M is the number of key points in each image.

Estimate camera parameters settings:

Number of calibration patterns used to estimate camera extrinsics, specified as an integer. The number of calibration patterns equals the number of translation and rotation vectors.

World coordinates of key points on calibration pattern, specified as an M-by-2 array. M represents the number of key points in the pattern.

World points units, specified as a character vector. The character vector describes the units of measure.

Estimate skew flag, specified as a logical scalar. When you set the logical to `true`, the object estimates the image axes skew. When you set the logical to `false`, the image axes are exactly perpendicular.

Number of radial distortion coefficients, specified as `2` or `3`.

Estimate tangential distortion flag, specified as the logical scalar `true` or `false`. When you set the logical to `true`, the object estimates the tangential distortion. When you set the logical to `false`, the tangential distortion is negligible.

## Object Functions

- `pointsToWorld`: Determine world coordinates of image points
- `toStruct`: Convert a camera parameters object into a struct
- `worldToImage`: Project world points into image

## Examples


Use the camera calibration functions to remove distortion from an image. This example creates a `cameraParameters` object manually, but in practice, you would use the `estimateCameraParameters` function or the Camera Calibrator app to derive the object.

Create a `cameraParameters` object manually.

```matlab
IntrinsicMatrix = [715.2699 0 0; 0 711.5281 0; 565.6995 355.3466 1];
radialDistortion = [-0.3361 0.0921];
cameraParams = cameraParameters('IntrinsicMatrix',IntrinsicMatrix, ...
    'RadialDistortion',radialDistortion);
```

Remove distortion from the images.

```matlab
I = imread(fullfile(matlabroot,'toolbox','vision','visiondata', ...
    'calibration','mono','image01.jpg'));
J = undistortImage(I,cameraParams);
```

Display the original and the undistorted images.

```matlab
figure;
imshowpair(imresize(I,0.5),imresize(J,0.5),'montage');
title('Original Image (left) vs. Corrected Image (right)');
```

## References

[1] Zhang, Z. “A Flexible New Technique for Camera Calibration.” *IEEE Transactions on Pattern Analysis and Machine Intelligence*, Vol. 22, No. 11, 2000, pp. 1330–1334.

[2] Heikkila, J., and O. Silven. “A Four-step Camera Calibration Procedure with Implicit Image Correction.” *IEEE International Conference on Computer Vision and Pattern Recognition*, 1997.