Visual simultaneous localization and mapping (vSLAM) refers to the process of calculating the position and orientation of a camera with respect to its surroundings while simultaneously mapping the environment. The process uses only visual inputs from the camera. Applications of visual SLAM include augmented reality, robotics, and autonomous driving. For more details, see Implement Visual SLAM in MATLAB.
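As a minimal sketch of this workflow, the code below feeds a sequence of images to a monocular vSLAM object and reads back the map and trajectory. This is a hedged illustration, not a complete pipeline: the image folder name and the camera intrinsics values are placeholders you would replace with your own calibration data.

```matlab
% Minimal monocular vSLAM sketch (assumes Computer Vision Toolbox).
% The image folder and intrinsics below are illustrative placeholders.
imds = imageDatastore("myImageFolder");                 % hypothetical image set
intrinsics = cameraIntrinsics([535 535],[320 240],[480 640]);  % fx fy, cx cy, size

vslam = monovslam(intrinsics);       % create the monocular vSLAM object
while hasdata(imds)
    I = read(imds);
    addFrame(vslam, I);              % track the camera and grow the map
end

xyzPoints = mapPoints(vslam);        % reconstructed 3-D map points
camPoses  = poses(vslam);            % estimated camera trajectory
```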
- Estimate 2-D geometric transformation from matching point pairs
- Estimate 3-D geometric transformation from matching point pairs
- Estimate fundamental matrix from corresponding points in stereo images
- Estimate camera pose from 3-D to 2-D point correspondences
- Find world points observed in view
- Find world points that correspond to point tracks
- Compute relative rotation and translation between camera poses
- Optimize absolute poses using relative pose constraints
- Create pose graph
- Refine 3-D points and camera poses
- Refine camera pose using motion-only bundle adjustment
- Refine 3-D points using structure-only bundle adjustment
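For example, the refinement steps listed above can be sketched with the bundle adjustment functions. This is a hedged fragment, not runnable as-is: the inputs (`xyzPoints`, `tracks`, `camPoses`, `imagePoints`, `absPose`, `intrinsics`) are assumed to come from earlier triangulation and pose-estimation steps in your pipeline.

```matlab
% Assumed inputs (illustrative placeholders):
%   xyzPoints   - M-by-3 triangulated world points
%   tracks      - pointTrack array linking 2-D observations across views
%   camPoses    - table of absolute camera poses (ViewId, AbsolutePose)
%   imagePoints - 2-D observations of xyzPoints in one view
%   absPose     - rigidtform3d absolute pose of that view
%   intrinsics  - cameraIntrinsics object

% Jointly refine 3-D points and camera poses (full bundle adjustment).
[refinedPoints, refinedPoses] = bundleAdjustment(xyzPoints, tracks, ...
    camPoses, intrinsics);

% Motion-only refinement: adjust a single camera pose against a fixed map.
refinedPose = bundleAdjustmentMotion(xyzPoints, imagePoints, ...
    absPose, intrinsics);
```

Full bundle adjustment is typically run over keyframes, while the motion-only variant is the cheaper per-frame refinement used during tracking.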
Process image data from a stereo camera to build a map of an outdoor environment and estimate the trajectory of the camera.
Develop a visual localization system using synthetic image data from the Unreal Engine® simulation environment.
Develop a visual SLAM algorithm for a UAV equipped with a stereo camera.
Develop Visual SLAM Algorithm Using Unreal Engine Simulation (Automated Driving Toolbox)
Develop a visual simultaneous localization and mapping (SLAM) algorithm using image data from the Unreal Engine® simulation environment.
Understand the visual simultaneous localization and mapping (vSLAM) workflow and how to implement it using MATLAB.
Choose the right simultaneous localization and mapping (SLAM) workflow and find topics, examples, and supported features.