Polyspace Code Prover
In this example, you will learn how Polyspace code verification products prove code correctness, find run-time errors, and check MISRA-C:2004 compliance on generated and hand-written code.
This example shows how Polyspace code verification products help you achieve the following tasks:
Before running the Polyspace products, generate code from your model. Prior to code generation, define the scope of the code verification and apply the following settings in Embedded Coder.
We can simulate the model and verify that the functional behavior is as expected. We then generate the code for the controller that interfaces with the legacy code.
Once you have confidence in the functionality of the model, you can start the code verification process. The goal is to prove the correctness of the code, which requires showing that it behaves correctly in all situations. We do this without executing the code by running Polyspace Code Prover.
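To see why proving correctness for all inputs differs from testing, consider a hedged sketch (the function name and guard are hypothetical, not taken from this example's generated code). A plain signed addition is undefined behavior on overflow, so a prover that considers the full input range flags it unless the code rules overflow out:

```c
#include <limits.h>

/* Hypothetical helper: a prover must consider every possible input,
 * not just test vectors, so an unguarded "a + b" is a potential
 * signed overflow (undefined behavior in C). The guard below makes
 * the addition provably safe for all inputs. */
static int scaled_sum(int a, int b)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b)) {
        return 0; /* fall back for inputs that would overflow */
    }
    return a + b; /* reachable only when the sum fits in an int */
}
```

With the guard in place, every execution path of `scaled_sum` is free of overflow, which is the kind of property the verification establishes without running the code.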
Once you start the Polyspace verification process with the Polyspace Code Prover, you will first see two sets of outputs in your command window.
The first output highlights errors found using the MISRA-C:2004 rules as shown in the following illustration. Depending on the MISRA policy applied to your project, you might decide to fix or ignore some errors. Alternatively, you can enable or disable specific MISRA-C:2004 rules from a graphical interface included in the Polyspace Code Prover.
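As an illustration of the kind of construct such a report flags (the rule mapping and function names here are assumptions, not output from this example's actual report), MISRA-C:2004 restricts implicit conversions between signed and unsigned integer types, and a compliant rewrite makes the conversion explicit and checked:

```c
/* Illustrative only: an implicit signed-to-unsigned conversion of the
 * kind MISRA-C:2004 type-conversion rules flag, and a compliant
 * rewrite. */
static unsigned int noncompliant(int x)
{
    return x; /* implicit int -> unsigned int conversion: flagged */
}

static unsigned int compliant(int x)
{
    unsigned int u = 0U;
    if (x >= 0) {
        u = (unsigned int)x; /* explicit cast, negative values excluded */
    }
    return u;
}
```

Whether the first form is an error or an accepted deviation depends on the MISRA policy applied to your project.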
The second set of outputs shows the dynamic range of input data extracted from the model. Two choices are available:
In this example, we use the block design min/max values for the code verification process. These min/max values were derived from the requirements for the unit. Using the worst-case values instead could lead to different verification results.
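A small sketch of why the choice of ranges matters (the signal name, range, and scale factor below are hypothetical, not taken from this model): if the design guarantees an input in [0, 100], an intermediate product fits in a 16-bit result and can be proven safe, whereas over the worst-case range of the type the same multiply could overflow:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical design range from block min/max values. */
#define THROTTLE_MIN 0
#define THROTTLE_MAX 100

static int16_t scale_throttle(int16_t throttle)
{
    /* Encodes the design assumption the prover receives from the model. */
    assert((throttle >= THROTTLE_MIN) && (throttle <= THROTTLE_MAX));
    /* Under the design range, the product is at most 100 * 300 = 30000,
     * which fits in int16_t (max 32767), so there is no overflow.
     * Over the full int16_t range, the multiply could overflow. */
    return (int16_t)(throttle * 300);
}
```

This is the trade-off between the two choices: design ranges yield tighter proofs, while worst-case ranges exercise the code over everything the types allow.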
We can use the Polyspace Code Prover GUI to see the results of our code verification process.
The GUI shows the list of C files, including generated and legacy code, and highlights the number of checks, categorized using the following colors:
For this model, code correctness is about 97%. We can improve this rate by reviewing some of the checks. To determine how many checks to review and in what order, we can use the methodological assistant with a single button click.
To investigate the source of a check, you can use either of two methods:
You can also do a combination of both methods.
A check may or may not indicate a real issue. When reviewing results, we:
The following shows an example of proven code:
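As a hedged sketch of what fully proven code looks like (illustrative only, not taken from this model's generated code), consider a function where every operation can be shown safe: each array access is bounded by the loop condition, the running sum cannot overflow given the element type and the size limit, and the divisor is nonzero by construction:

```c
#include <stdint.h>

/* Every check here is provable: n is confined to (0, 16], so the array
 * index stays in bounds, the sum is at most 16 * 32767 (well under
 * INT32_MAX), and n > 0 rules out division by zero. */
static int32_t average(const int16_t values[], int32_t n)
{
    int32_t sum = 0;
    int32_t i;
    if ((n <= 0) || (n > 16)) {
        return 0; /* reject sizes outside the proven-safe range */
    }
    for (i = 0; i < n; i++) {
        sum += values[i]; /* bounded access: i < n <= 16 */
    }
    return sum / n; /* n > 0 here, so no division by zero */
}
```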
The checks can highlight any of the following types of errors:
When tracing back to the model, the above errors may be caused by any of the following:
Once you identify the cause of the error, you can modify the model appropriately to fix the issue.
To run this example, you must have the following products installed: