MATLAB Simulate 3D Camera: why is there no focal length (world units) attribute in the sensor model?

In the 3D camera simulation (https://www.mathworks.com/help/driving/ref/simulation3dcamera.html), there is no focal length in world units (mm) among the camera intrinsics. I'm working on feature tracking using the camera feed, and I cannot estimate the 3-D location of a point without knowing the focal length or the scaling factor (w) described in (https://www.mathworks.com/help/vision/ug/camera-calibration.html).
It would also be great if MATLAB provided a DEM (digital elevation model) of the UAV scenarios; it would serve as ground truth for many applications.
Please advise.

Answers (1)

Please take a look at this page:
If you know the size of a pixel in world units, you can convert the focal length in world units (usually mm) to the focal length in pixels.
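As a minimal sketch of that conversion, assuming a hypothetical sensor width and resolution (these values are illustrative, not from any particular camera model):

```matlab
% Sketch: converting focal length from world units (mm) to pixels.
% Sensor width and resolution below are assumed example values.
focalLength_mm = 4.5;      % hypothetical focal length in mm
sensorWidth_mm = 6.4;      % hypothetical sensor width in mm
imageWidth_px  = 1280;     % image width in pixels

pixelSize_mm = sensorWidth_mm / imageWidth_px;  % width of one pixel in mm
fx_px = focalLength_mm / pixelSize_mm           % focal length in pixels
```

Going the other way (pixels to mm) is the same relation rearranged: focal length in mm = fx_px * pixelSize_mm.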

6 Comments

Calibration-wise, understood.
But the Simulation 3D Camera block has its focal length specified in pixels, with no mention of the focal length in world units (mm). How will I know where my camera plane is in world units if I do not know the focal length (mm)?
How do I determine the scale factor 'w' in the perspective projection?
I need ground truth to use the 2-D to 3-D transformation in image formation with the UAV Toolbox.
By camera plane, do you mean where the camera is placed? The Orientation and Location outports output the location and orientation of the camera sensor. In addition, the Depth outport gives the distance from an object to the camera plane.
No, Orientation and Location place the camera in the body frame. Sorry for the ambiguous reply; let us consider this the camera frame. I'm looking for the image frame that sits at a focal length of f (world units) in front of the camera frame and has the same orientation.
Now, [fx fy] in pixel coordinates would give me the camera intrinsic parameters - GREAT!
How do we correlate an observed pixel [u, v] with a 3-D point (say) on the ground? The camera intrinsic and extrinsic parameters, when combined, give the homogeneous coordinates w * [x y 1] (in https://www.mathworks.com/help/vision/ug/camera-calibration.html#bu0ni74).
x/w = u and y/w = v, isn't it? Capturing images from the sensor model gives me [u, v], hiding the scaling factor w. This is the information I need to identify the inverse transformation from pixel coordinates to world coordinates (say, of an object on the ground). This factor is what I believe the focal length (in mm) can give us, as
P{camera frame} = [(cx - u) * f / fx, (cy - v) * f / fy, f], where f is the focal length (mm).
Is imageToVehicle what you need? It converts a 2-D image point to a 3-D point on the ground in the vehicle coordinate system.
Given the following equations:
X = (u - cx) * Z / fx;
Y = (v - cy) * Z / fy;
where [X, Y, Z] is the 3-D point location, [u, v] is the corresponding image point location, and [fx, fy] are the focal lengths. You should be able to compute the factor you were talking about. Note that the Depth outport outputs the Z values.
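The two equations above can be sketched directly; the intrinsic values and the pixel/depth pair below are assumed example numbers, with Z standing in for a value read from the Depth outport:

```matlab
% Sketch: recovering a camera-frame 3-D point from a pixel [u, v]
% and its depth Z. All numeric values here are illustrative assumptions.
fx = 800; fy = 800;    % focal lengths in pixels
cx = 640; cy = 360;    % principal point in pixels
u  = 900; v  = 500;    % observed pixel location
Z  = 20;               % depth in scene units (e.g., from the Depth outport)

X = (u - cx) * Z / fx;
Y = (v - cy) * Z / fy;
P_camera = [X, Y, Z]   % 3-D point in the camera frame
```

In this convention the scaling factor w from the projection w * [x y 1] is exactly Z, which is why the Depth outport closes the loop.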
Thanks for the reply, Qu!
For my analysis, I don't have depth information directly, but I'm computing the depth (approximately) from my UAV altitude (say, from GPS). Given my feature's pixel coordinates [u, v], I aim to compute the same feature's coordinates in scene units. That's all I need: a vector from the optical center (not the frame center) to the feature coordinates. The z-component of this vector is the focal length in mm.
P{camera frame} = [(cx - u) * f / fx, (cy - v) * f / fy, f]
From the intrinsics I specify in the MATLAB Simulation 3D Camera block, how do I get to this P (in the camera's reference frame)?
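Under the simplifying assumptions of a nadir-pointing camera over flat ground, the altitude itself approximates the depth Z, and the back-projection from the earlier comment applies directly. A minimal sketch (altitude, intrinsics, and pixel are all assumed values):

```matlab
% Sketch: using UAV altitude as approximate depth for a camera looking
% straight down at flat ground, then back-projecting the feature pixel.
% All numeric values are illustrative assumptions.
altitude_m = 50;         % UAV altitude above ground (e.g., from GPS)
fx = 1109; fy = 1109;    % focal lengths in pixels
cx = 640;  cy = 360;     % principal point in pixels
u  = 700;  v  = 400;     % tracked feature pixel

Z = altitude_m;          % depth ~ altitude for a nadir-pointing camera
P_camera = [(u - cx) * Z / fx, (v - cy) * Z / fy, Z];
% To place P_camera in the world frame, rotate and translate it using
% the block's Orientation and Location outputs.
```

Note this degrades as the camera tilts away from nadir or the terrain departs from flat; in those cases a per-pixel depth (Depth outport) or a ray-ground intersection is needed instead.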



Release: R2021b
Asked: 31 May 2022
Edited: 2 Jun 2022
