Computer Vision Toolbox™ provides algorithms, functions, and apps for designing and testing computer vision, 3-D vision, and video processing systems. You can perform object detection and tracking, as well as feature detection, extraction, and matching. For 3-D vision, the toolbox supports single, stereo, and fisheye camera calibration; stereo vision; 3-D reconstruction; and lidar and 3-D point cloud processing. Computer vision apps automate ground truth labeling and camera calibration workflows.
You can train custom object detectors using deep learning and machine learning algorithms such as YOLO v2, Faster R-CNN, and ACF. For semantic segmentation, you can use deep learning algorithms such as SegNet, U-Net, and DeepLab. Pretrained models let you detect faces, pedestrians, and other common objects.
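Detectors such as YOLO v2 and Faster R-CNN emit many overlapping candidate boxes, which are pruned with non-maximum suppression before results are reported (the toolbox provides selectStrongestBbox for this step). A minimal language-agnostic sketch of the idea in Python, not the toolbox API:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedily keep the highest-scoring box and drop boxes that
    # overlap it by more than iou_thresh; repeat until none remain.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]
```

The first two boxes overlap heavily, so only the higher-scoring one survives; the third is disjoint and is kept.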
You can accelerate your algorithms by running them on multicore processors and GPUs. Most toolbox algorithms support C/C++ code generation for integrating with existing code, desktop prototyping, and embedded vision system deployment.
Learn the basics of Computer Vision Toolbox
Image registration, interest point detection, extracting feature descriptors, and point feature matching
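Point feature matching typically pairs descriptors by nearest-neighbor distance, often with a ratio test that rejects a match when the runner-up neighbor is nearly as close (the toolbox's matchFeatures exposes a comparable criterion via its ratio parameter). A toy Python sketch with hand-made 2-D descriptors, purely illustrative:

```python
import math

def dist(d1, d2):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def match(desc1, desc2, max_ratio=0.8):
    # For each descriptor in desc1, find its two nearest neighbors in
    # desc2; accept the best only if it is clearly closer than the
    # second best (the ratio test).
    matches = []
    for i, d in enumerate(desc1):
        dists = sorted((dist(d, e), j) for j, e in enumerate(desc2))
        (best, j_best), (second, _) = dists[0], dists[1]
        if best < max_ratio * second:
            matches.append((i, j_best))
    return matches

a = [(0.0, 1.0), (5.0, 5.0)]
b = [(0.1, 1.0), (4.0, 4.0), (5.1, 5.0)]
print(match(a, b))  # → [(0, 0), (1, 2)]
```

Real descriptors (e.g. SURF or ORB) have dozens to hundreds of dimensions, but the matching logic is the same.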
Deep learning and convolutional networks, semantic image segmentation, object detection, recognition, ground truth labeling, bag of features, template matching, and background estimation
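Template matching slides a small template over an image and scores every position; the best-scoring location is the detection. A minimal sketch in Python using the sum of squared differences as the score (the toolbox's implementation supports several metrics; this is an illustration, not its API):

```python
def match_template(image, template):
    # Slide the template over every valid position and score with the
    # sum of squared differences (SSD); the minimum SSD wins.
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(match_template(image, template))  # → (1, 1)
```

Production code uses normalized cross-correlation instead of raw SSD so that the score is robust to brightness changes, but the sliding-window structure is identical.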
Estimate camera intrinsics, distortion coefficients, and camera extrinsics; extract 3-D information from 2-D images; and perform stereo rectification, depth estimation, 3-D reconstruction, triangulation, and structure from motion
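The pinhole model underlying these workflows maps a world point into the camera frame via the extrinsics, normalizes by depth, applies lens distortion, and scales by the intrinsics into pixel coordinates. A minimal sketch assuming identity rotation and two radial distortion coefficients (k1, k2), chosen here for illustration:

```python
def project(point, t, fx, fy, cx, cy, k1=0.0, k2=0.0):
    # Extrinsics: world -> camera (identity rotation assumed for brevity,
    # so only the translation t is applied).
    xc, yc, zc = (p + ti for p, ti in zip(point, t))
    # Perspective division: normalize by depth.
    xn, yn = xc / zc, yc / zc
    # Radial lens distortion on the normalized coordinates.
    r2 = xn * xn + yn * yn
    d = 1 + k1 * r2 + k2 * r2 * r2
    xd, yd = xn * d, yn * d
    # Intrinsics: focal lengths and principal point -> pixel coordinates.
    return fx * xd + cx, fy * yd + cy

u, v = project((1.0, 2.0, 0.0), (0.0, 0.0, 4.0), fx=800, fy=800, cx=320, cy=240)
print(u, v)  # → 520.0 640.0
```

Calibration estimates fx, fy, cx, cy, the distortion coefficients, and the extrinsics by fitting this model to observed checkerboard corners; triangulation and structure from motion invert it to recover 3-D points from multiple views.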
Downsample, denoise, transform, visualize, register, and fit geometric shapes to 3-D point clouds
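A common way to downsample a point cloud is a voxel grid: all points falling in the same cube are merged into their centroid (the toolbox's pcdownsample offers a comparable grid-average method). A sketch in Python, not the toolbox implementation:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    # Bucket points by their integer voxel index, then replace each
    # bucket with the centroid of the points it contains.
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)
        buckets[key].append(p)
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for pts in buckets.values()]

pts = [(0.1, 0.1, 0.1), (0.3, 0.2, 0.1), (2.0, 2.0, 2.0)]
print(voxel_downsample(pts, 1.0))
```

With a 1.0 grid step, the first two points share voxel (0, 0, 0) and collapse to one centroid, while the third survives unchanged, so three points become two.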
Optical flow, activity recognition, motion estimation, and tracking
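Block matching is one of the simplest motion-estimation schemes: for a block in the first frame, exhaustively search a neighborhood in the second frame for the displacement that minimizes the sum of absolute differences. A toy sketch (pure Python, illustrative only):

```python
def block_motion(f1, f2, r, c, size, search):
    # Find the (dr, dc) shift of the size-by-size block at (r, c) in f1
    # that best matches frame f2, searching +/- search pixels.
    def sad(dr, dc):
        # Sum of absolute differences between the block and its
        # candidate location in f2.
        return sum(abs(f1[r + i][c + j] - f2[r + dr + i][c + dc + j])
                   for i in range(size) for j in range(size))
    candidates = [(dr, dc)
                  for dr in range(-search, search + 1)
                  for dc in range(-search, search + 1)
                  if 0 <= r + dr and r + dr + size <= len(f2)
                  and 0 <= c + dc and c + dc + size <= len(f2[0])]
    return min(candidates, key=lambda d: sad(*d))

# Frame 2 is frame 1 shifted one pixel to the right.
f1 = [[0, 0, 0, 0],
      [0, 9, 0, 0],
      [0, 0, 0, 0]]
f2 = [[0, 0, 0, 0],
      [0, 0, 9, 0],
      [0, 0, 0, 0]]
print(block_motion(f1, f2, 0, 0, 2, 1))  # → (0, 1)
```

Dense optical-flow methods such as Horn-Schunck and Lucas-Kanade refine this idea by estimating a sub-pixel displacement at every pixel rather than one vector per block.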
Simulink support for computer vision applications
Generate C code, learn about OCR language data support, use the OpenCV interface, learn about fixed-point data type support, and generate HDL code
Support for third-party hardware, such as Xilinx Zynq with FMC HDMI CAM