Mapping and Localization Using Vision and Lidar Data

Simultaneous localization and mapping, map building, odometry using vision and lidar data

Use simultaneous localization and mapping (SLAM) algorithms to build a map of the environment while simultaneously estimating the pose of the ego vehicle. You can use SLAM algorithms with either visual or point cloud data. For more information on implementing visual SLAM using camera image data, see Implement Visual SLAM in MATLAB and Develop Visual SLAM Algorithm Using Unreal Engine Simulation. For more information on implementing point cloud SLAM using lidar data, see Implement Point Cloud SLAM in MATLAB and Design Lidar SLAM Algorithm Using Unreal Engine Simulation Environment.
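As a minimal sketch of the visual SLAM workflow, the monovslam object can process monocular frames one at a time. The intrinsics values and the image folder below are illustrative assumptions, not values from this page.

```matlab
% Assumed camera intrinsics: focal length, principal point, image size
intrinsics = cameraIntrinsics([800 800],[320 240],[480 640]);

% Create the monocular visual SLAM object
vslam = monovslam(intrinsics);

% Assumed folder of sequential image frames
imds = imageDatastore("images");
while hasdata(imds)
    I = read(imds);
    addFrame(vslam,I);   % track the frame and extend the map
end

xyzPoints = mapPoints(vslam);   % 3-D map points
camPoses  = poses(vslam);       % estimated key-frame camera poses
```

In a real application, you would also check isDone and hasNewKeyFrame to monitor tracking status while frames stream in.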

You can use measurements from sensors such as inertial measurement units (IMUs) and the global positioning system (GPS) to improve the map-building process with visual or lidar data. For an example, see Build a Map from Lidar Data.

In environments with known maps, you can localize the ego vehicle by estimating its pose relative to the map coordinate frame origin. For an example on localization using a known visual map, see Visual Localization in a Parking Lot. For an example on localization using a known point cloud map, see Lidar Localization with Unreal Engine Simulation.

In environments without known maps, you can use visual-inertial odometry by fusing visual and IMU data to estimate the pose of the ego vehicle relative to the starting pose. For an example, see Visual-Inertial Odometry Using Synthetic Data.
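Orientation estimates in such odometry pipelines are commonly represented as quaternions, which you can manipulate with the quaternion functions listed in the Functions section. A small sketch of composing and converting frame rotations (the angle values are illustrative):

```matlab
% Two incremental yaw rotations, specified as Euler angles in degrees
q1 = quaternion([30 0 0],"eulerd","ZYX","frame");
q2 = quaternion([15 0 0],"eulerd","ZYX","frame");

% Compose the rotations by quaternion multiplication
q = q1 * q2;

ang = eulerd(q,"ZYX","frame");   % combined yaw of 45 degrees
R   = rotmat(q,"frame");         % equivalent 3-by-3 rotation matrix
```

Because both rotations here are about the same axis, the composition order does not affect the result; for general rotations, the multiplication order matters.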

For an application of mapping and localization algorithms to detect empty parking spots in a parking lot, see Perception-Based Parking Spot Detection Using Unreal Engine Simulation.

Functions

quaternion - Create quaternion array
dist - Angular distance in radians
rotateframe - Quaternion frame rotation
rotatepoint - Quaternion point rotation
rotmat - Convert quaternion to rotation matrix
rotvec - Convert quaternion to rotation vector (radians)
rotvecd - Convert quaternion to rotation vector (degrees)
parts - Extract quaternion parts
euler - Convert quaternion to Euler angles (radians)
eulerd - Convert quaternion to Euler angles (degrees)
compact - Convert quaternion array to N-by-4 matrix
monovslam - Visual simultaneous localization and mapping (vSLAM) and visual-inertial sensor fusion with monocular camera (Since R2023b)
imageviewset - Manage data for structure-from-motion, visual odometry, and visual SLAM
optimizePoses - Optimize absolute poses using relative pose constraints
createPoseGraph - Create pose graph
relativeCameraPose - (Not recommended) Calculate relative rotation and translation between camera poses
triangulate - 3-D locations of undistorted matching points in stereo images
bundleAdjustment - Adjust collection of 3-D points and camera poses
bundleAdjustmentMotion - Adjust collection of 3-D points and camera poses using motion-only bundle adjustment
bundleAdjustmentStructure - Refine 3-D points using structure-only bundle adjustment
pcviewset - Manage data for point cloud based visual odometry and SLAM
optimizePoses - Optimize absolute poses using relative pose constraints
createPoseGraph - Create pose graph
scanContextDistance - Distance between scan context descriptors (Since R2020b)
scanContextDescriptor - Extract scan context descriptor from point cloud (Since R2020b)
pctransform - Transform 3-D point cloud
pcalign - Align array of point clouds (Since R2020b)
pcregistercorr - Register two point clouds using phase correlation (Since R2020b)
pcregistercpd - Register two point clouds using CPD algorithm
pcregistericp - Register two point clouds using ICP algorithm
pcregisterndt - Register two point clouds using NDT algorithm
pcregisterloam - Register two point clouds using LOAM algorithm (Since R2022a)
pcmapndt - Localization map based on normal distributions transform (NDT) (Since R2021a)
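The registration functions above form the core of lidar map building: register each new scan against the previous one, then transform it into the common frame and merge it into the map. A minimal sketch using ICP registration (the file names and grid steps are illustrative assumptions):

```matlab
% Two consecutive lidar scans (assumed PLY files for illustration)
fixed  = pcread("scan1.ply");
moving = pcread("scan2.ply");

% Downsample to speed up and stabilize registration
fixedDown  = pcdownsample(fixed,"gridAverage",0.2);
movingDown = pcdownsample(moving,"gridAverage",0.2);

% Estimate the rigid transformation that aligns moving to fixed
tform = pcregistericp(movingDown,fixedDown);

% Apply the transformation at full resolution and merge into a map
movingAligned = pctransform(moving,tform);
map = pcmerge(fixed,movingAligned,0.1);
```

For longer sequences, accumulate the relative transformations in a pcviewset, detect loop closures with scanContextDescriptor and scanContextDistance, and refine the trajectory with optimizePoses before assembling the final map with pcalign.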

Topics

Featured Examples