
# relativeCameraPose

Compute relative rotation and translation between camera poses

## Syntax

```matlab
[relativeOrientation,relativeLocation] = relativeCameraPose(M,cameraParams,inlierPoints1,inlierPoints2)
[relativeOrientation,relativeLocation] = relativeCameraPose(M,cameraParams1,cameraParams2,inlierPoints1,inlierPoints2)
[relativeOrientation,relativeLocation,validPointsFraction] = relativeCameraPose(M,___)
```

## Description

`[relativeOrientation,relativeLocation] = relativeCameraPose(M,cameraParams,inlierPoints1,inlierPoints2)` returns the orientation and location of a calibrated camera relative to its previous pose. The two poses are related by `M`, which must be either a fundamental matrix or an essential matrix. The function computes the camera location only up to scale and returns `relativeLocation` as a unit vector.

`[relativeOrientation,relativeLocation] = relativeCameraPose(M,cameraParams1,cameraParams2,inlierPoints1,inlierPoints2)` returns the orientation and location of the second camera relative to the first one.

`[relativeOrientation,relativeLocation,validPointsFraction] = relativeCameraPose(M,___)` additionally returns the fraction of the inlier points that project in front of both cameras.
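For context, a minimal two-view sketch from feature matching through pose recovery might look as follows; the image files, the `intrinsics` object, and all variable names are illustrative assumptions, not part of this reference page:

```matlab
% Sketch: recover the relative pose between two views of the same scene.
I1 = rgb2gray(imread('view1.png'));   % hypothetical image files
I2 = rgb2gray(imread('view2.png'));

% Detect and match features between the two views.
points1 = detectSURFFeatures(I1);
points2 = detectSURFFeatures(I2);
[f1,vpts1] = extractFeatures(I1,points1);
[f2,vpts2] = extractFeatures(I2,points2);
pairs = matchFeatures(f1,f2);
matched1 = vpts1(pairs(:,1));
matched2 = vpts2(pairs(:,2));

% Estimate the essential matrix and keep only the inlier matches.
[E,inlierIdx] = estimateEssentialMatrix(matched1,matched2,intrinsics);

% Recover the relative pose. relativeLocation is a unit vector (up to scale).
[relativeOrientation,relativeLocation] = relativeCameraPose(E, ...
    intrinsics,matched1(inlierIdx),matched2(inlierIdx));
```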

## Input Arguments


Fundamental or essential matrix, specified as a 3-by-3 matrix. You can obtain this matrix using the `estimateFundamentalMatrix` or `estimateEssentialMatrix` function.

Data Types: `single` | `double`

Camera parameters, specified as a `cameraParameters` or `cameraIntrinsics` object. You can obtain the `cameraParameters` object using the `estimateCameraParameters` function. The `cameraParameters` object contains the intrinsic, extrinsic, and lens distortion parameters of a camera.

Camera parameters for camera 1, specified as a `cameraParameters` or `cameraIntrinsics` object. You can obtain the `cameraParameters` object using the `estimateCameraParameters` function. The `cameraParameters` object contains the intrinsic, extrinsic, and lens distortion parameters of a camera.

Camera parameters for camera 2, specified as a `cameraParameters` or `cameraIntrinsics` object. You can obtain the `cameraParameters` object using the `estimateCameraParameters` function. The `cameraParameters` object contains the intrinsic, extrinsic, and lens distortion parameters of a camera.

Coordinates of corresponding points in view 1, specified as an M-by-2 matrix of [x,y] coordinates, or as a `SURFPoints`, `MSERRegions`, or `cornerPoints` object. You can obtain these points using the `estimateFundamentalMatrix` or `estimateEssentialMatrix` function.

Coordinates of corresponding points in view 2, specified as an M-by-2 matrix of [x,y] coordinates, or as a `SURFPoints`, `MSERRegions`, or `cornerPoints` object. You can obtain these points using the `estimateFundamentalMatrix` or `estimateEssentialMatrix` function.
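As a sketch of how the inlier points are typically produced, `estimateFundamentalMatrix` returns a logical inlier index alongside the matrix; the `matchedPoints1` and `matchedPoints2` variables here are assumptions:

```matlab
% Estimate the fundamental matrix and keep only the inlier correspondences.
[F,inlierIdx] = estimateFundamentalMatrix(matchedPoints1,matchedPoints2);
inlierPoints1 = matchedPoints1(inlierIdx,:);
inlierPoints2 = matchedPoints2(inlierIdx,:);
```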

## Output Arguments


Orientation of camera, returned as a 3-by-3 matrix. If you use only one camera, the matrix describes the orientation of the second camera pose relative to the first camera pose. If you use two cameras, the matrix describes the orientation of camera 2 relative to camera 1.

Data Types: `single` | `double`

Location of camera, returned as a 1-by-3 unit vector. If you use only one camera, the vector describes the location of the second camera pose relative to the first camera pose. If you use two cameras, the vector describes the location of camera 2 relative to camera 1.

Data Types: `single` | `double`

Fraction of the inlier points that project in front of both cameras, returned as a scalar. A low value of `validPointsFraction` (for example, less than 0.9) can indicate that the fundamental or essential matrix is incorrect.
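As a sketch, this fraction can serve as a sanity check on the estimated geometry; the 0.9 threshold follows the guideline above, and the variables `E`, `intrinsics`, `inlierPoints1`, and `inlierPoints2` are assumed to exist:

```matlab
% Recover the pose and check how many inliers land in front of both cameras.
[relativeOrientation,relativeLocation,validPointsFraction] = ...
    relativeCameraPose(E,intrinsics,inlierPoints1,inlierPoints2);
if validPointsFraction < 0.9
    warning('Low valid-points fraction; M may have been estimated poorly.');
end
```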

## Tips

• You can compute the camera extrinsics, `rotationMatrix` and `translationVector`, corresponding to the camera pose, from `relativeOrientation` and `relativeLocation`:

`[rotationMatrix,translationVector] = cameraPoseToExtrinsics(relativeOrientation,relativeLocation)`

The orientation of the previous camera pose is the identity matrix, `eye(3)`, and its location is `[0,0,0]`.

• You can then use `rotationMatrix` and `translationVector` as inputs to the `cameraMatrix` function.

• You can compute four possible combinations of orientation and location from the input fundamental matrix. Three of the combinations are not physically realizable, because they project 3-D points behind one or both cameras. The `relativeCameraPose` function uses `inlierPoints1` and `inlierPoints2` to determine the realizable combination.
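The tips above can be combined into a short sketch; the relative pose variables and the `camParams` object are assumed to exist:

```matlab
% Convert the relative pose to camera extrinsics. The previous pose is the
% identity orientation eye(3) at location [0,0,0].
[rotationMatrix,translationVector] = ...
    cameraPoseToExtrinsics(relativeOrientation,relativeLocation);

% Form the 4-by-3 camera projection matrix for the second view.
P = cameraMatrix(camParams,rotationMatrix,translationVector);
```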