MATLAB Examples

Lane Detection with Zynq-Based Hardware

This example shows how to use the Computer Vision System Toolbox™ Support Package for Xilinx® Zynq-Based Hardware to target a lane detection algorithm to the Zynq board. The inverse perspective mapping and lane-marking candidate extraction are targeted to the FPGA, and the ARM processor performs lane-fitting and overlay.

This algorithm operates on 640x480 source video from a front-facing camera mounted on a vehicle. The camera position and characteristics (focal length, pitch, height, and principal point) are fixed parameters for this example. The example includes a source video file corresponding to these camera parameters, vzLaneDetection640.mp4.

Required products:

  • Simulink®
  • Computer Vision System Toolbox
  • Vision HDL Toolbox™
  • HDL Coder™
  • HDL Coder Support Package for Xilinx Zynq-7000 Platform

Optionally, to generate, compile, and target a Zynq ARM® software interface model:

  • Embedded Coder®
  • Embedded Coder Support Package for Xilinx Zynq-7000 Platform

Introduction

This example follows the algorithm development workflow that is detailed in the Developing Vision Algorithms for Zynq-Based Hardware example. If you have not already done so, please work through that example to gain a better understanding of the required workflow.

This algorithm corresponds to the Vision HDL Toolbox example, Lane Detection. With the Support Package for Zynq-Based Vision Hardware, you get a hardware reference design that allows for easy integration of your targeted algorithm in the context of a vision system. The support package also provides a Video Capture block that, when the design is deployed to the Zynq board, routes the video from the FPGA output to the ARM processor for further processing.

Setup

If you have not yet done so, run through the guided setup wizard portion of the Zynq support package installation. You might have already completed this step when you installed this support package.

On the MATLAB Home tab, in the Environment section of the Toolstrip, click Add-Ons > Manage Add-Ons. Locate Computer Vision System Toolbox Support Package for Xilinx Zynq-Based Hardware, and click Setup.

The guided setup wizard performs a number of initial setup steps, and confirms that the target can boot and that the host and target can communicate.

For more information, see Guided Setup for Vision Hardware.

Pixel-Stream Model

This model contains two major subsystems: Lane Detection Algorithm, which performs inverse perspective mapping and lane-marking candidate extraction, and Lane Fit and Overlay Algorithm, which fits the lane curves and overlays them onto the birds-eye view.

Open the model.

The Lane Detection Algorithm subsystem is targeted to the FPGA, and the Lane Fit and Overlay Algorithm subsystem is targeted to the ARM processor. The birds-eye view and lane coordinates outputs of the Lane Detection Algorithm subsystem are passed to the Lane Fit and Overlay Algorithm subsystem. The result is displayed in Simulink by the BirdsEyeLaneOverlay block.

Hardware/Software Synchronization: Some logic is required to synchronize communication between the FPGA design and the ARM processor portion of the design. The Lane Detection Algorithm subsystem contains two shift registers that accumulate the lane coordinates for each birds-eye view frame. The Lane Detection Algorithm and Lane Fit and Overlay Algorithm subsystems use state machines based on the dataReady and swStart signals to keep the two subsystems synchronized. The state machines must see dataReady before exchanging the lane coordinates, and then wait for swStart before accumulating the next set of coordinates.
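
The handshake described above can be sketched as follows. This is an illustrative sketch only: readRegister and writeRegister are hypothetical placeholders for the generated AXI4-Lite register interface, not support package functions, and fitAndOverlay stands in for the software lane-fitting step.

```matlab
% Sketch of the software-side handshake (hypothetical helper functions).
while true
    while ~readRegister('dataReady')       % wait for the FPGA to finish
    end                                    % accumulating a frame
    leftLane  = readRegister('LeftLane');  % exchange the lane coordinates
    rightLane = readRegister('RightLane');
    writeRegister('swStart', 1);           % let the FPGA accumulate the
                                           % next set of coordinates
    fitAndOverlay(leftLane, rightLane);    % must finish in 0.5 frames
end
```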

The inverse perspective mapping algorithm takes 2.5 frames to complete, so the lane fitting must finish within 0.5 frames. If the lane fitting took longer, it would miss the next set of lane coordinates. To meet this constraint, the Simulink sample times of the signals must sum to the half-frame budget: 1/720 (swStart) + 1/720 (dataReady) + 1/180 (LeftLane and RightLane) = 1/120 (0.5 frames at 60 fps).
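
As a quick check, the sample times above do sum to the half-frame budget (all values taken from this example):

```matlab
% Verify that the signal sample times fill exactly half a frame at 60 fps.
tSwStart   = 1/720;          % swStart sample time
tDataReady = 1/720;          % dataReady sample time
tLanes     = 1/180;          % LeftLane and RightLane sample time
budget     = 0.5 * (1/60);   % half a frame period at 60 fps = 1/120 s
assert(abs((tSwStart + tDataReady + tLanes) - budget) < eps)
```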

Instead of working on full images, the HDL-ready lane detection algorithm works on streaming pixel data. The blocks in the shaded areas convert to and from pixel stream signals in preparation for targeting.

Video Source: The source video for this example comes either from the From Multimedia File block, which reads video data from a multimedia file, or from the Video Capture block, which captures live video frames from an HDMI source connected to the Zynq-based hardware. To configure the source, right-click the variant selection icon in the lower-left corner of the Image Source block, choose Override using, and select either File or HW.

For this algorithm, the model must be configured with a pixel format of RGB and a frame size of 640x480. Both the From Multimedia File and Video Capture blocks are configured to deliver video frames in this format.

During the first two frames of simulation, the To Video Display block shows a black image, indicating that no image data is available yet. This is because the inverse perspective mapping algorithm takes 2.5 frames to complete.

Target the Algorithm

After you are satisfied with the pixel streaming algorithm simulation, you can target the pixel algorithm to the FPGA on the Zynq board.

In preparation for targeting, set up the Xilinx tool chain by invoking hdlsetuptoolpath. For example:

>> hdlsetuptoolpath('ToolName','Xilinx Vivado','ToolPath','C:\Xilinx\Vivado\2016.4\bin\vivado.bat');

Execute help hdlsetuptoolpath for more information.

Start the targeting workflow by right-clicking the Lane Detection Algorithm subsystem and selecting HDL Code > HDL Workflow Advisor.

  • In Step 1.1, select the IP Core Generation workflow and the appropriate platform from these choices: ZedBoard FMC-HDMI-CAM, ZC706 FMC-HDMI-CAM, ZC702 FMC-HDMI-CAM, or PicoZed FMC-HDMI-CAM.
  • In Step 1.2, select RGB reference design to match the pixel format of the Lane Detection Algorithm subsystem. Set Source Video Resolution to 640x480p. Map the other ports of the hardware user logic to the available hardware interface. Map swStart, dataReady, LeftLane, and RightLane to AXI4-Lite for software interaction.
  • Step 2 prepares the design for generation by doing some design checks.
  • Step 3 generates HDL code for the IP core.
  • Step 4 integrates the newly generated IP core into the larger Vision Zynq reference design.

Execute each step in sequence to experience the full workflow, or, if you are already familiar with the preparation and HDL code generation phases, right-click Step 4.1 in the table of contents on the left-hand side and select Run to selected task.

  • In Step 4.2, the workflow generates a targeted hardware interface model and, if the Embedded Coder Zynq-7000 support package has been installed, a Zynq software interface model. Click the Run this task button with the default settings.

Steps 4.3 and 4.4

The rest of the workflow generates a bitstream for the FPGA, downloads it to the target, and reboots the board.

Because this process can take 20-40 minutes, you can bypass it by using a pre-generated bitstream for this example, which ships with the product and was placed on the SD card during setup.

To use the pre-generated bitstream, execute the following:

>> vz = visionzynq();
>> changeFPGAImage(vz,'visionzynq-zc706-hdmicam-lane_detection.bit');

Replace 'zc706' with 'zedboard', 'zc702' or 'picozed' if appropriate.
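
Equivalently, you can build the bitstream file name from the board name. The visionzynq and changeFPGAImage calls are the same as above; the sprintf naming pattern is an assumption based on this example's file name, so verify it against the files on your SD card.

```matlab
% Select the pre-generated lane detection bitstream for your board.
board = 'zedboard';   % or 'zc706', 'zc702', 'picozed'
bitstream = sprintf('visionzynq-%s-hdmicam-lane_detection.bit', board);
vz = visionzynq();
changeFPGAImage(vz, bitstream);
```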

Alternatively, you can continue with Steps 4.3 and 4.4.

Using the Generated Models from the HDL Workflow Advisor

Step 4.2 generated either two or four models, depending on whether Embedded Coder is installed: a 'targeted hardware interface' model and its associated library model, and a 'software interface' model and its associated library model. The 'targeted hardware interface' model can be used to run Simulink algorithms that interact with the pixel-streaming design running on the FPGA. These algorithms work on video captured from the output of the FPGA and can also read and write AXI4-Lite registers. The 'software interface' model supports full software targeting to the ARM processor when Embedded Coder and the Zynq-7000 (Embedded Coder) support package are installed, enabling External mode simulation, processor-in-the-loop simulation, and full deployment.

The library models ensure that any changes to the hardware generation model propagate to any custom targeted hardware simulation or software interface models that exist.

Set Up Video Playback

When running either of the generated models, which run the targeted portions of the algorithm on the board, you must provide an HDMI input source. For instance, replay the provided 640x480 front-facing vehicle camera source video, vzLaneDetection640.mp4, by connecting the HDMI input of the board as the secondary display of the Simulink host computer. To configure your secondary display to use 640x480 resolution, see Configure Display for VGA Resolution.

In this example, the HDMI output on the board is not active because the video timing after inverse perspective mapping is not compliant with the HDMI standard. To view the input video and confirm the video source setup, open the Getting Started example model.

In the Getting Started model, configure these Video Capture block parameters:

  • Video source - HDMI input
  • Frame size - 640x480p
  • Pixel format - RGB

On the To Video Display block, set these parameters:

  • Input Color Format - RGB
  • Input Signal - Separate color signals

Run the Getting Started model to view the secondary display output in Simulink. Once you can see the secondary display, play the source video using full-screen mode and set to repeat.

Alternatively, check the Bypass FPGA user logic option on the Video Capture block. This option reroutes the input video directly to the output HDMI display.

Leave the source video running and close the Getting Started model.

Targeted Hardware Interface Model

Open the model.

The Video Capture block in this model returns the birds-eye view video from the output of the FPGA to Simulink. The FPGA processes the HDMI input video that you set up in the previous step. The Lane Detection Algorithm block is the generated interface to the FPGA. It returns lane coordinates to Simulink corresponding to each birds-eye view frame. Using this captured data, the lane fit and overlay algorithm runs in Simulink.

Software Interface Model

You can run this model in External mode on the ARM processor, or you can use this model to fully deploy a software design. (This model is generated only if Embedded Coder and the Zynq-7000 (Embedded Coder) support package are installed.)

Open the model.

Before running this model, you must perform additional setup steps to configure the Xilinx cross-compiling tools. For more information, see Setup for ARM targeting.

The software interface model included with this example has some setting changes to enable running the algorithm on the ARM processor. When you generate your own software interface model, make the following changes.

To avoid buffering errors when running the Video Viewer in External mode, reduce the duration of the External mode trigger. In the Code menu, select External Mode Control Panel. Click the Signal & Triggering button. In the Trigger options section, set Duration to 1.

Change these Configuration Parameters:

  • In the Solver pane, uncheck Treat each discrete rate as a separate task
  • In the Code Generation pane, set Build configuration to Faster runs
  • In the Code Generation > Interface pane, check the variable-size signals option

Run the model in External mode. This mode runs the algorithm on the ARM processor on the Zynq board.

The Video Capture and Lane Detection Algorithm blocks in this model work the same as in the Targeted Hardware Interface Model. When you run in External mode, the entire model runs on the ARM processor. The birds-eye view output video from the FPGA is captured to the ARM processor. The lane coordinates from the FPGA are passed to the ARM processor using an AXI-Lite interface. The result of the ARM processing is displayed in Simulink by the Video Viewer.