Key Features

  • Ground Truth Labeler app and workflow to automate labeling, and tools to compare simulation output with ground truth
  • Sensor fusion and tracking algorithms, including Kalman filters, multiobject tracking framework, detection-track assignment, and motion models
  • Driving scenario generation, including road, actor, and vehicle definition and scenario visualizations
  • Sensor simulation for camera and radar, with object lists as output
  • Computer vision algorithms, including lane detection and classification, vehicle detection, and image-to-vehicle coordinate transforms
  • Visualizations, including bird’s-eye-view plots of sensor coverage, detections, and tracks, and video overlays for lane markers and vehicle detection
  • C-code generation for sensor fusion and tracking algorithms (with MATLAB Coder™)

Ground-Truth Labeling

Ground-truth labeling is the process of annotating recorded sensor data with information about the objects, conditions, and events in a vehicle’s surroundings. Labeled ground-truth data is then used to test the performance of perception systems by comparing the output of a perception algorithm with the labeled ground truth. Ground-truth labeling is typically performed on video data and is usually a time-intensive manual process. Automated Driving System Toolbox™ provides an app and workflow to automate the labeling of ground-truth data.

In the Ground Truth Labeler app, Automated Driving System Toolbox uses deep learning and computer vision detection and tracking algorithms to automate the labeling of ground truth. The app also lets you import your own algorithms to automate labeling.

The system toolbox also provides tools to compare the output of a perception algorithm with labeled ground truth.
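
As a minimal sketch of this workflow, the app can be launched from the MATLAB command line on a recorded video, and automation algorithms can then be imported from its toolstrip. The video file name below is a placeholder for your own recording, not a shipped asset.

    % Open the Ground Truth Labeler app on a recorded drive.
    % 'myDrive.mp4' is a placeholder file name for your own video.
    groundTruthLabeler('myDrive.mp4');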



Sensor Fusion and Tracking

Automated driving systems use vision, radar, ultrasound, and combinations of sensor technologies to automate dynamic driving tasks. These tasks include steering, braking, and acceleration. Automated driving spans a wide range of automation levels — from advanced driver assistance systems (ADAS) to fully autonomous driving. The complexity of automated driving tasks makes it necessary for the system to use information from multiple complementary sensors, making sensor fusion a critical component of any automated driving workflow.

Automated Driving System Toolbox provides functions and tools to track sensor outputs over time and to combine the outputs of multiple sensors through sensor fusion. For object tracking, the system toolbox provides several Kalman filters, including linear, extended, and unscented variants.
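
As an illustration, the sketch below configures a multiObjectTracker that initializes a constant-velocity extended Kalman filter for each new track and updates it with a single detection; the parameter values and the measurement are illustrative assumptions, not recommended settings.

    % Tracker that starts a constant-velocity extended Kalman filter per track.
    tracker = multiObjectTracker('FilterInitializationFcn', @initcvekf, ...
                                 'ConfirmationParameters', [2 3]);

    % Wrap one position measurement ([x; y; z] in meters) as an objectDetection.
    detection = objectDetection(0.1, [10; 3; 0], 'MeasurementNoise', 0.2*eye(3));

    % Update the tracker with the detections received at this time step.
    confirmedTracks = updateTracks(tracker, {detection}, 0.1);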

Track multiple vehicles using a camera.
Track pedestrians from a moving car.

For rapid prototyping and embedded implementation, the tracking and sensor fusion algorithms in Automated Driving System Toolbox support C-code generation with MATLAB Coder™.



Vision System Design

Automated Driving System Toolbox provides a suite of computer vision algorithms that use data from cameras to detect and track objects of interest such as lane markers, vehicles, and pedestrians. Algorithms in the system toolbox are tailored to ADAS and autonomous driving applications.

Object detection locates objects of interest, such as pedestrians and vehicles, so that perception systems can automate braking and steering tasks. The system toolbox provides pretrained vehicle, pedestrian, and lane marker detectors based on machine learning, including deep learning, as well as functionality to train custom detectors.

Use Faster R-CNN to detect vehicles.
Use aggregate channel features to detect pedestrians in images.
Find parabolic lane boundaries.
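
As a rough sketch, a pretrained detector can be run in a few lines; the image file name below is a placeholder, and the ACF vehicle detector is only one of the available options.

    % Run a pretrained ACF vehicle detector on a single image.
    I = imread('highway.png');                  % placeholder image file
    detector = vehicleDetectorACF();            % pretrained vehicle detector
    [bboxes, scores] = detect(detector, I);     % bounding boxes and confidences

    % Overlay the detections for a quick visual check.
    imshow(insertObjectAnnotation(I, 'rectangle', bboxes, scores))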

Automated Driving System Toolbox provides the ability to create the output of a monocular camera sensor from raw video. The output takes the form of lists of detected vehicles and lane boundaries, along with estimated distances to objects.

Output from monocular camera sensor simulation, including detection of vehicles and lane boundaries in raw video and estimation of the distance to objects.
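
A minimal sketch of using a monoCamera sensor to convert an image point (for example, the bottom of a detected vehicle's bounding box) into vehicle coordinates and estimate its distance; all camera parameters and coordinates below are illustrative values, not calibration data.

    % Configure a monocular camera sensor mounted 1.5 m above the road.
    intrinsics = cameraIntrinsics([800 800], [320 240], [480 640]);
    sensor = monoCamera(intrinsics, 1.5);

    % Convert an image point to vehicle coordinates and estimate its range.
    imagePoint   = [300 400];                            % [x y] in pixels
    vehiclePoint = imageToVehicle(sensor, imagePoint);   % [x y] in meters
    distance     = norm(vehiclePoint);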


Scenario Generation

Sensor fusion and control algorithms for automated driving systems require rigorous testing. Vehicle-based testing is not only time consuming to set up, but also difficult to reproduce. Automated Driving System Toolbox provides functionality to define road networks, actors, vehicles, and traffic scenarios, as well as statistical models for simulating synthetic radar and camera sensor detections.

The object lists generated for each traffic scenario can also be used to set up hardware-in-the-loop (HIL) tests with this synthetic data.

The system toolbox provides a workflow that enables the testing of control or sensor fusion algorithms using synthetic data generated from a specific traffic scenario.
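
The sketch below illustrates this workflow under simple assumptions: a straight road, two vehicles, and a radar detection generator on the ego vehicle. Road geometry, speeds, and sensor parameters are placeholder values.

    % Define a 100 m straight road with an ego vehicle and a lead vehicle.
    scenario = drivingScenario;
    road(scenario, [0 0; 100 0]);
    egoCar  = vehicle(scenario, 'ClassID', 1);
    leadCar = vehicle(scenario, 'ClassID', 1);
    trajectory(egoCar,  [1 0;  99 0], 25);    % waypoints and speed (m/s)
    trajectory(leadCar, [20 0; 100 0], 20);

    % Statistical radar model that reports detections in ego coordinates.
    radarSensor = radarDetectionGenerator('SensorIndex', 1, 'MaxRange', 100);

    while advance(scenario)
        poses = targetPoses(egoCar);          % other actors, ego coordinates
        detections = radarSensor(poses, scenario.SimulationTime);
    end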

Model and simulate the output of an automotive radar sensor for different driving scenarios.
Model and simulate the output of an automotive vision sensor for different driving scenarios.



Visualization Tools

The system toolbox provides visualization tools customized for ADAS and autonomous driving workflows to aid with the design, debugging, and testing of automated driving systems.

Visualize sensor coverage, detections, and tracks.
Display information provided in vehicle coordinates on a video display.
Create a bird's-eye view using inverse perspective mapping.
Detect the ground plane and find nearby obstacles in 3D LiDAR data.
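
As a small sketch of the bird's-eye-view plotting tools, the example below shows a forward radar coverage area and a few detections in vehicle coordinates; the range, field of view, and detection positions are illustrative values.

    % Bird's-eye-view plot of radar coverage and detections (vehicle coordinates).
    bep = birdsEyePlot('XLim', [0 60], 'YLim', [-20 20]);

    caPlotter = coverageAreaPlotter(bep, 'DisplayName', 'Radar coverage');
    plotCoverageArea(caPlotter, [1 0], 50, 0, 30);   % position, range, yaw, FOV

    detPlotter = detectionPlotter(bep, 'DisplayName', 'Detections');
    plotDetection(detPlotter, [20 2; 35 -1; 48 3]);  % one [x y] row per detection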