This example shows how to visualize depth and semantic segmentation data captured from a camera sensor in a simulation environment. This environment is rendered using the Unreal Engine® from Epic Games®.
You can use depth visualizations to validate depth estimation algorithms for your sensors. You can use semantic segmentation visualizations to analyze the classification scheme used for generating synthetic semantic segmentation data from the Unreal Engine environment.
The model used in this example simulates a vehicle driving in a city scene.
A Simulation 3D Vehicle with Ground Following block specifies the driving route of the vehicle. The waypoint poses that make up this route were obtained using the technique described in the Select Waypoints for Unreal Engine Simulation example.
A Simulation 3D Camera block mounted to the rearview mirror of the vehicle captures data from the driving route. This block outputs the camera, depth, and semantic segmentation displays by using To Video Display (Computer Vision Toolbox) blocks.
Load the MAT-file containing the waypoint poses. Add timestamps to the poses and then open the model.
load smoothedPoses.mat;

refPosesX = [linspace(0,20,1000)', smoothedPoses(:,1)];
refPosesY = [linspace(0,20,1000)', smoothedPoses(:,2)];
refPosesYaw = [linspace(0,20,1000)', smoothedPoses(:,3)];

open_system('DepthSemanticSegmentation.slx')
A depth map is a grayscale representation of camera sensor output that encodes distance, with brighter pixels indicating objects that are farther away from the sensor. You can use depth maps to validate depth estimation algorithms for your sensors.
The Depth port of the Simulation 3D Camera block outputs a depth map of values in the range of 0 to 1000 meters. In this model, for better visibility, a Saturation block saturates the depth output to a maximum of 150 meters. Then, a Gain block scales the depth map to the range [0, 1] so that the To Video Display block can visualize the depth map in grayscale.
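The Saturation and Gain processing described above can be sketched in MATLAB. This is a minimal illustration, not the model itself; the `depth` matrix is a placeholder standing in for one frame from the Depth port.

```matlab
% Sketch of the Saturation and Gain processing applied to the Depth output.
% "depth" is a placeholder frame; the real Depth port outputs 0-1000 meters.
depth = rand(480, 960) * 1000;

maxDepth = 150;                     % Saturation block upper limit, in meters
depthSat = min(depth, maxDepth);    % clamp distances beyond 150 m
depthNorm = depthSat / maxDepth;    % Gain block: rescale to [0, 1] for display

imshow(depthNorm)                   % brighter pixels are farther away
```

The division by `maxDepth` plays the role of the Gain block, mapping the saturated range [0, 150] onto [0, 1] for grayscale display.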
Semantic segmentation describes the process of associating each pixel of an image with a class label, such as road, building, or traffic sign. In the 3D simulation environment, you generate synthetic semantic segmentation data according to a label classification scheme. You can then use these labels to train a neural network for automated driving applications, such as road segmentation. By visualizing the semantic segmentation data, you can verify your classification scheme.
The Labels port of the Simulation 3D Camera block outputs a set of labels for each pixel in the output camera image. Each label corresponds to an object class. For example, in the default classification scheme used by the block, a label of 1 corresponds to buildings. A label of 0 refers to objects of an unknown class and appears as black. For a complete list of label IDs and their corresponding object descriptions, see the Labels port description on the Simulation 3D Camera block reference page.
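As a quick illustration of how these label IDs can be used, the following sketch isolates the building pixels (label ID 1) in a small stand-in label matrix; a real label image from the Labels port would be used the same way.

```matlab
% Hypothetical sketch: each element of "labels" is a label ID from the
% default classification scheme (0 = unknown, 1 = buildings).
labels = uint8([0 1 1; 2 1 0]);         % small stand-in for a real label image

buildingMask = (labels == 1);           % logical mask of building pixels
numBuildingPixels = nnz(buildingMask)   % displays 3 for this stand-in matrix
```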
The MATLAB Function block uses the label2rgb (Image Processing Toolbox) function to convert the labels to a matrix of RGB triplets for visualization. The colormap is based on the colors used in the CamVid dataset, as shown in the example Semantic Segmentation Using Deep Learning (Computer Vision Toolbox). The colors are mapped to the predefined label IDs used in the default 3D simulation scenes. The helper function sim3dColormap defines the colormap. Inspect these colormap values.
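The conversion step can be sketched as follows. This assumes sim3dColormap returns a colormap matrix that label2rgb accepts directly; the label matrix here is a small stand-in for a real Labels output.

```matlab
% Inspect the colormap values defined by the helper function.
open sim3dColormap

% Hedged sketch: convert label IDs to RGB triplets with label2rgb
% (Image Processing Toolbox), assuming sim3dColormap yields a valid colormap.
labels = uint8([0 1 1; 2 1 0]);      % small stand-in label image
cmap = sim3dColormap;                % colormap from the helper function
rgb = label2rgb(labels, cmap, 'k');  % label 0 (unknown) rendered as black
imshow(rgb)
```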
Run the model.
When the simulation begins, it can take a few seconds for the visualization engine to initialize, especially when you are running it for the first time. The AutoVrtlEnv window displays the scene from behind the ego vehicle. In this scene, the vehicle drives several blocks around the city. Because this example is mainly for illustrative purposes, the vehicle does not always follow the direction of traffic or the pattern of the changing traffic lights.
The Camera Display, Depth Display, and Semantic Segmentation Display blocks display the outputs from the camera sensor.
To change the visualization range of the output depth data, try updating the values in the Saturation and Gain blocks.
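One way to try this is to update the block parameters programmatically. This is a hedged sketch: the block names 'Saturation' and 'Gain' are assumptions based on the model described above, so adjust the paths to match your model.

```matlab
% Narrow the depth visualization range to 100 m (block paths are assumptions).
set_param('DepthSemanticSegmentation/Saturation', 'UpperLimit', '100');
set_param('DepthSemanticSegmentation/Gain', 'Gain', '1/100');
```

Keeping the Gain value equal to the reciprocal of the Saturation limit preserves the [0, 1] output range that the To Video Display block expects.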
To change the semantic segmentation colors, try modifying the color values defined in the sim3dColormap function. Alternatively, in the sim3dlabel2rgb MATLAB Function block, try replacing the input colormap with your own colormap or a predefined colormap.
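For example, a predefined MATLAB colormap such as jet returns an N-by-3 matrix of RGB triplets that could stand in for the input colormap; treating it as a drop-in substitution is an assumption about the MATLAB Function block's input format.

```matlab
% Hedged sketch: generate a predefined colormap as candidate replacement input.
cmap = jet(256);        % 256-by-3 matrix of RGB triplets in [0, 1]
```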