cameraParams = cameraParameters returns
an object that contains the intrinsic, extrinsic, and lens distortion
parameters of a camera.

cameraParams = cameraParameters(Name,Value) configures
the camera parameters object properties, specified as one or more Name,Value pair
arguments. Unspecified properties use default values.

The object contains intrinsic, extrinsic, lens distortion, and
estimation properties.

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments.
Name is the argument
name and Value is the corresponding
value. Name must appear
inside single quotes (' ').
You can specify several name and value pair
arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'RadialDistortion',[0 0 0] sets the 'RadialDistortion' property to [0 0 0].

Projection matrix, specified as the comma-separated pair consisting
of 'IntrinsicMatrix' and a 3-by-3 matrix. The default is the identity
matrix. The object uses the following format:

[f_{x} 0 0
 s f_{y} 0
 c_{x} c_{y} 1]

The coordinates [c_{x}, c_{y}]
represent the optical center (the principal point), in pixels. When
the x and y axes are exactly
perpendicular, the skew parameter, s, equals 0.

f_{x} = F*s_{x}

f_{y} = F*s_{y}

F is the focal length in world units,
typically expressed in millimeters.

[s_{x}, s_{y}]
are the number of pixels per world unit in the x and y directions,
respectively.
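As an illustration outside MATLAB, here is a minimal NumPy sketch of this row-vector intrinsic-matrix convention; the focal length, principal point, and test point are made-up values, not output of any calibration:

```python
import numpy as np

# Intrinsic matrix in MATLAB's row-vector convention:
# [f_x  0   0]
# [ s  f_y  0]
# [c_x c_y  1]
# All values below are illustrative only.
f_x, f_y = 800.0, 820.0   # focal lengths in pixels (F*s_x, F*s_y)
c_x, c_y = 320.0, 240.0   # principal point in pixels
s = 0.0                   # skew; 0 when the image axes are perpendicular

K = np.array([[f_x, 0.0, 0.0],
              [s,   f_y, 0.0],
              [c_x, c_y, 1.0]])

# MATLAB uses row vectors, so the point multiplies K on the left.
xy1 = np.array([0.1, -0.2, 1.0])  # [x y 1] in normalized coordinates
uv1 = xy1 @ K                     # [u v 1] in pixels
print(uv1)  # [400.  76.   1.]
```

Note that this matrix is the transpose of the column-vector convention used in many textbooks.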

Radial distortion coefficients, specified as the comma-separated
pair consisting of 'RadialDistortion' and either
a 2- or 3-element vector. If you specify a 2-element vector, the object
sets the third element to 0.

Radial distortion occurs when light rays bend more near the
edges of a lens than they do at its optical center. The smaller the
lens, the greater the distortion.

The camera parameters object calculates the radially distorted
location of a point. You can denote the distorted points as (x_{distorted}, y_{distorted}),
as follows:

x_{distorted} = x * (1 + k_{1}*r^{2} + k_{2}*r^{4} + k_{3}*r^{6})

y_{distorted} = y * (1 + k_{1}*r^{2} + k_{2}*r^{4} + k_{3}*r^{6})

x, y = undistorted pixel locations

k_{1}, k_{2},
and k_{3} = radial distortion
coefficients of the lens

r^{2} = x^{2} + y^{2}

Typically, two coefficients are sufficient. For severe
distortion, you can include k_{3}.
The undistorted pixel locations appear in normalized image coordinates,
with the origin at the optical center. The coordinates are expressed
in world units.
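As a sketch of this radial model (Python with illustrative coefficient values, not the MATLAB implementation):

```python
def radial_distort(x, y, k1, k2, k3=0.0):
    """Apply the radial distortion model to a point in normalized
    image coordinates (origin at the optical center)."""
    r2 = x**2 + y**2
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * scale, y * scale

# Illustrative coefficients; real values come from calibration.
xd, yd = radial_distort(0.5, -0.25, k1=-0.1, k2=0.01)
print(xd, yd)
```

With two coefficients the `k3` term simply drops out, matching the object's behavior of setting the third element to 0.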

Tangential distortion coefficients, specified as the comma-separated
pair consisting of 'TangentialDistortion' and
a 2-element vector. Tangential distortion occurs when the lens and
the image plane are not parallel.

The camera parameters object calculates the tangentially distorted
location of a point. You can denote the distorted points as (x_{distorted}, y_{distorted}),
as follows:

x_{distorted} = x +
[2 * p_{1} * x * y + p_{2} *
(r^{2} + 2 * x^{2})]

y_{distorted} = y +
[p_{1} * (r^{2} +
2*y^{2}) + 2 * p_{2} * x * y]

x, y = undistorted pixel
locations

p_{1} and p_{2} =
tangential distortion coefficients of the lens

r^{2} = x^{2} + y^{2}

The undistorted pixel locations appear in normalized
image coordinates, with the origin at the optical center. The coordinates
are expressed in world units.
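The tangential model can be sketched the same way (Python with illustrative coefficient values, not the MATLAB implementation):

```python
def tangential_distort(x, y, p1, p2):
    """Apply the tangential distortion model to a point in
    normalized image coordinates."""
    r2 = x**2 + y**2
    x_d = x + (2 * p1 * x * y + p2 * (r2 + 2 * x**2))
    y_d = y + (p1 * (r2 + 2 * y**2) + 2 * p2 * x * y)
    return x_d, y_d

# Illustrative coefficients; real values come from calibration.
xd, yd = tangential_distort(0.5, -0.25, p1=0.001, p2=-0.002)
print(xd, yd)
```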

Camera rotations, specified as the comma-separated pair consisting
of 'RotationVectors' and an M-by-3
matrix. The matrix contains rotation vectors for the M images
of the calibration pattern used to estimate the calibration
parameters. Each row of the matrix contains a vector that describes
the 3-D rotation of the camera relative to the corresponding pattern.

Each vector specifies the 3-D axis about which the camera is
rotated. The magnitude of the vector represents the angle of rotation
in radians. You can convert any rotation vector to a 3-by-3 rotation
matrix using the Rodrigues formula.

You must set the RotationVectors and TranslationVectors properties
together in the constructor to ensure that the number of rotation
vectors equals the number of translation vectors. Setting only one
property but not the other results in an error.
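The Rodrigues conversion mentioned above can be sketched in NumPy; this is an independent illustration rather than the toolbox's own conversion routine:

```python
import numpy as np

def rodrigues(rvec):
    """Convert a 3-element rotation vector to a 3-by-3 rotation
    matrix using the Rodrigues formula. The vector's direction is
    the rotation axis; its magnitude is the angle in radians."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)          # near-zero angle: identity rotation
    k = np.asarray(rvec) / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],   # cross-product matrix [k]_x
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# A rotation of pi/2 about the z-axis maps the x-axis to the y-axis.
R = rodrigues([0.0, 0.0, np.pi / 2])
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # ~[0, 1, 0]
```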

Camera translations, specified as the comma-separated pair consisting
of 'TranslationVectors' and an M-by-3
matrix. This matrix contains translation vectors for the M images
of the calibration pattern used to estimate the calibration
parameters. Each row of the matrix contains a vector that describes
the translation of the camera relative to the corresponding pattern,
expressed in world units.

The following equation provides the transformation that relates
a world coordinate [X Y Z]
and the corresponding image point [x y]:

w * [x y 1] = [X Y Z 1] * [R; t] * K

w is an arbitrary scale factor, R is the 3-by-3 rotation matrix, t is
the 1-by-3 translation vector, and K is the intrinsic matrix. This
equation does not take distortion into consideration.
The undistortImage function removes distortion.

You must set the RotationVectors and TranslationVectors properties
together in the constructor to ensure that the number of rotation
vectors equals the number of translation vectors. Setting only one
property results in an error.
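The distortion-free transformation described above can be sketched in NumPy (row-vector convention; the intrinsic matrix, pose, and world point below are illustrative values, not calibration output):

```python
import numpy as np

# Sketch of  w * [x y 1] = [X Y Z 1] * [R; t] * K
K = np.array([[800.0, 0.0, 0.0],
              [0.0, 800.0, 0.0],
              [320.0, 240.0, 1.0]])   # intrinsic matrix (illustrative)
R = np.eye(3)                         # camera rotation, 3-by-3
t = np.array([[0.0, 0.0, 10.0]])      # camera translation, 1-by-3

XYZ1 = np.array([1.0, 2.0, 0.0, 1.0])  # world point [X Y Z 1]
uvw = XYZ1 @ np.vstack([R, t]) @ K     # homogeneous image point
u, v = uvw[:2] / uvw[2]                # divide by w to get pixels
print(u, v)  # 400.0 400.0 for these illustrative values
```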

World coordinates of key points on the calibration pattern, specified
as the comma-separated pair consisting of 'WorldPoints'
and an M-by-2 array. M represents
the number of key points in the pattern.

Estimate skew flag, specified as the comma-separated pair consisting
of 'EstimateSkew' and a logical scalar. When
you set the logical to true, the object estimates
the image axes skew. When you set the logical to false,
the image axes are exactly perpendicular.

Number of radial distortion coefficients, specified as the comma-separated
pair consisting of 'NumRadialDistortionCoefficients'
and the value 2 or 3.


Estimate tangential distortion flag, specified as the comma-separated
pair consisting of 'EstimateTangentialDistortion'
and the logical scalar true or false.
When you set the logical to true, the object estimates
the tangential distortion. When you set the logical to false,
the tangential distortion is negligible.

Projection matrix, specified as a 3-by-3 matrix in the following
format:

[f_{x} 0 0
 s f_{y} 0
 c_{x} c_{y} 1]

The coordinates [c_{x}, c_{y}]
represent the optical center (the principal point), in pixels. When
the x and y axes are exactly
perpendicular, the skew parameter, s, equals 0.

f_{x} = F*s_{x}

f_{y} = F*s_{y}

F is the focal length in world units,
typically expressed in millimeters.

[s_{x}, s_{y}]
are the number of pixels per world unit in the x and y directions,
respectively.

Radial distortion coefficients, specified as either a 2- or
3-element vector. When you specify a 2-element vector, the object
sets the third element to 0. Radial distortion
occurs when light rays bend more near the edges of a lens than they
do at its optical center. The smaller the lens, the greater the distortion.
The camera parameters object calculates the radially distorted location
of a point. You can denote the distorted points as (x_{distorted}, y_{distorted}),
as follows:

x_{distorted} = x * (1 + k_{1}*r^{2} + k_{2}*r^{4} + k_{3}*r^{6})

y_{distorted} = y * (1 + k_{1}*r^{2} + k_{2}*r^{4} + k_{3}*r^{6})

x, y = undistorted pixel locations

k_{1}, k_{2},
and k_{3} = radial distortion
coefficients of the lens

r^{2} = x^{2} + y^{2}

Typically, two coefficients are sufficient. For severe
distortion, you can include k_{3}.
The undistorted pixel locations appear in normalized image coordinates,
with the origin at the optical center. The coordinates are expressed
in world units.

Tangential distortion coefficients, specified as a 2-element
vector. Tangential distortion occurs when the lens and the image plane
are not parallel. The camera parameters object calculates the tangentially
distorted location of a point. You can denote the distorted points
as (x_{distorted}, y_{distorted}),
as follows:

x_{distorted} = x +
[2 * p_{1} * x * y + p_{2} *
(r^{2} + 2 * x^{2})]

y_{distorted} = y +
[p_{1} * (r^{2} +
2 * y^{2}) + 2 * p_{2} * x * y]

x, y = undistorted pixel
locations

p_{1} and p_{2} =
tangential distortion coefficients of the lens

r^{2} = x^{2} + y^{2}

The undistorted pixel locations appear in normalized
image coordinates, with the origin at the optical center. The coordinates
are expressed in world units.

3-D rotation matrices, specified as a 3-by-3-by-P array,
where P is the number of pattern images. Each 3-by-3 matrix
represents the same 3-D rotation as the corresponding rotation vector.

The following equation provides the transformation that relates
a world coordinate [X Y Z]
and the corresponding image point [x y]:

w * [x y 1] = [X Y Z 1] * [R; t] * K

w is an arbitrary scale factor, R is the 3-by-3 rotation matrix, t is
the 1-by-3 translation vector, and K is the intrinsic matrix.

Camera translations, specified as an M-by-3
matrix. This matrix contains translation vectors for the M images
of the calibration pattern used to estimate the calibration
parameters. Each row of the matrix contains a vector that describes
the translation of the camera relative to the corresponding pattern,
expressed in world units.

The following equation provides the transformation that relates
a world coordinate [X Y Z]
and the corresponding image point [x y]:

w * [x y 1] = [X Y Z 1] * [R; t] * K

w is an arbitrary scale factor, R is the 3-by-3 rotation matrix, t is
the 1-by-3 translation vector, and K is the intrinsic matrix. This
equation does not take distortion into consideration.
The undistortImage function removes distortion.

You must set the RotationVectors and TranslationVectors properties
in the constructor to ensure that the number of rotation vectors equals
the number of translation vectors. Setting only one property but not
the other results in an error.

Estimated camera parameters accuracy, specified as an M-by-2-by-P array
of [x y] coordinates. The [x y]
coordinates represent the translation in x and y between
the reprojected pattern key points and the detected pattern key points.
The values of this property represent the accuracy of the estimated
camera parameters. P is the number of pattern images
used to estimate the camera parameters, and M is the number
of key points in each image.

World points reprojected onto calibration images, specified
as an M-by-2-by-P array of [x y]
coordinates. P is the number of pattern images
and M is the number of key points in each image.
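A sketch of how these two arrays relate (NumPy with synthetic data standing in for real detection and reprojection results):

```python
import numpy as np

# errors[m, :, p] is the [x y] offset for key point m in pattern
# image p. The arrays below are synthetic, for illustration only.
M, P = 4, 2   # key points per image, number of pattern images
rng = np.random.default_rng(0)
detected = rng.uniform(0, 640, size=(M, 2, P))          # detected key points
reprojected = detected + rng.normal(0, 0.5, (M, 2, P))  # reprojected points

errors = detected - reprojected   # M-by-2-by-P, like ReprojectionErrors
mean_error = np.mean(np.hypot(errors[:, 0, :], errors[:, 1, :]))
print(mean_error)   # average reprojection error in pixels
```

A small mean reprojection error indicates accurately estimated camera parameters.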

Number of calibration patterns used to estimate the camera extrinsics,
specified as an integer. The number of calibration patterns equals
the number of translation and rotation vectors.

Estimate skew flag, specified as a logical scalar. When you
set the logical to true, the object estimates the
image axes skew. When you set the logical to false,
the image axes are exactly perpendicular.

Estimate tangential distortion flag, specified as the logical
scalar true or false. When you
set the logical to true, the object estimates the
tangential distortion. When you set the logical to false,
the tangential distortion is negligible.

This example shows you how to use the cameraParameters object in a workflow to remove distortion from an image. The example creates a cameraParameters object manually. In practice, use the estimateCameraParameters function or the cameraCalibrator app to derive the object.
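The undistortion idea can be sketched for a single point by inverting the radial distortion model with fixed-point iteration (Python, radial terms only, illustrative coefficients; undistortImage handles the full model and whole images):

```python
def distort(x, y, k1, k2):
    """Forward radial distortion model (two coefficients)."""
    r2 = x**2 + y**2
    s = 1.0 + k1 * r2 + k2 * r2**2
    return x * s, y * s

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    re-estimate the scale factor at the current undistorted guess."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x**2 + y**2
        s = 1.0 + k1 * r2 + k2 * r2**2
        x, y = xd / s, yd / s
    return x, y

# Round-trip check with illustrative coefficients.
xd, yd = distort(0.4, 0.3, k1=-0.2, k2=0.05)
x, y = undistort(xd, yd, k1=-0.2, k2=0.05)
print(round(x, 6), round(y, 6))  # ~0.4, ~0.3
```

Fixed-point iteration converges quickly for the mild distortion typical of real lenses; severe distortion may need a more robust solver.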
