This example shows how to address common traceability issues in model requirements and tests by using the Model Testing Dashboard. The dashboard analyzes the testing artifacts in a project and reports metric data on quality and completeness measurements such as traceability and coverage, which reflect guidelines in industry-recognized software development standards, such as ISO 26262 and DO-178C. The dashboard widgets summarize the data so that you can track your requirements-based testing progress and fix the gaps that the dashboard highlights. You can click the widgets to open tables with detailed information, where you can find and fix the testing artifacts that do not meet the corresponding standards.
The dashboard displays testing data for a model and the artifacts that the model traces to within a project. For this example, open the project and collect metric data for the artifacts.
Open the project. At the command line, type
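The exact command is omitted above; in general, you can open a project from the MATLAB command line with the openProject function. A minimal sketch, where the project name is a placeholder, not the actual name used by this example:

```matlab
% Open the project that contains the model and its testing artifacts.
% "MyModelTestingProject" is a hypothetical name -- substitute the
% actual project folder or .prj file path for your setup.
proj = openProject("MyModelTestingProject");
disp(proj.Name)
```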
Open the dashboard. On the Project tab, click Model Testing Dashboard.
If you have not previously opened the dashboard for the project, the dashboard must identify the artifacts in the project and trace them to the models. To run the analysis and collect metric results, click Trace and Collect All.
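Trace and Collect All has a programmatic counterpart in the metric API that ships with Simulink Check. The following is a sketch, assuming the project is already open; the available metric identifiers vary by release, so check the Simulink Check documentation for the IDs that back the dashboard widgets:

```matlab
% Create a metric engine scoped to the current project.
metric_engine = metric.Engine();

% Trace the artifacts (requirements, test cases, results) to the models,
% equivalent to the tracing step of Trace and Collect All.
updateArtifacts(metric_engine);

% Collect results for the model testing metrics.
execute(metric_engine);

% Retrieve the collected results; the set of metric IDs is release-dependent.
results = getMetrics(metric_engine, getAvailableMetricIds(metric_engine));
```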
In the Artifacts panel, the dashboard organizes artifacts such as requirements, test cases, and test results under the models that they trace to. View the metric results for the model db_DriverSwRequest. In the Artifacts panel, click the name of the model. The dashboard populates the widgets with data from the most recent metric collection for the model.
Next, use the data in the Artifacts panel and the dashboard widgets to find and address issues in the requirements and tests for the model.
On the Artifacts panel, the Untraced folder shows artifacts that do not trace to the models in the project. You can check the artifacts in this folder to see if there are any requirements that should be implemented by the models but are missing links. For this example, link one of these requirements to the model block that implements it and update the Artifacts panel to reflect the link.
In the Artifacts panel, navigate to the requirement Untraced > Functional Requirements > db_req_func_spec.slreqx > Switch precedence.
Open the requirement in the Requirements Editor. On the Artifacts panel, double-click Switch precedence. This requirement describes the order in which the cruise control system takes action if multiple switches are enabled at the same time. Keep the Requirements Editor open with the requirement selected.
Open the model db_Controller. To open the model from the Model Testing Dashboard, in the Artifacts panel, expand the folder db_Controller > Design and double-click the model. The Model block DriverSwRequest references the model db_DriverSwRequest, which controls the order in which the cruise control system takes action when the switches are enabled. Link this Model block to the requirement. Right-click the Model block and select Link to Selection in Requirements Browser.
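You can create the same link programmatically with the Requirements Toolbox API. A sketch, assuming the requirement summary and block name shown in this example; the block path and the link-type assignment are assumptions based on that context:

```matlab
% Load the requirement set and find the requirement by its summary.
reqSet = slreq.open('db_req_func_spec.slreqx');
req = find(reqSet, 'Type', 'Requirement', 'Summary', 'Switch precedence');

% Open the model and link the Model block to the requirement.
% The block path is assumed from the block name in this example.
open_system('db_Controller');
lk = slreq.createLink('db_Controller/DriverSwRequest', req);
lk.Type = 'Implement';  % mark the link as an implementation link
```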
Save the model. On the Simulation tab, click Save.
Save the requirements set. In the Requirements Editor, click the Save icon.
To update the artifact traceability information, in the Model Testing Dashboard, click Trace Artifacts.
The Artifacts panel shows the Switch precedence requirement under db_Controller > Functional Requirements > db_req_func_spec.slreqx. Next, find traceability issues in the artifacts by collecting metrics in the dashboard.
Open the dashboard for the component db_DriverSwRequest by clicking the name of the component in the Artifacts panel. Because you changed the requirements file by adding a link, the dashboard widgets are highlighted in gray to show that the results might represent stale data. To update the results for the component, click Collect Results.
The widgets in the Test Case Analysis section of the dashboard show data about the model requirements, test cases for the model, and links between them. The widgets indicate if there are gaps in testing and traceability for the implemented requirements.
Link Requirements and Test Cases
In the model db_DriverSwRequest, the Requirements Linked to Tests section shows that some of the requirements in the model are missing links to test cases. Examine the requirements by clicking one of the dashboard widgets. Then, use the links in the table to open the artifacts and fix the traceability issues.
To see detailed information about the unlinked requirements, in the Requirements Linked to Tests section, click the Unlinked widget. The table is filtered to show only the requirements that are implemented in the model but do not have links to test cases. For this example, link a test case to the requirement Set Switch Detection.
Open the requirement in the Requirements Editor. In the table, click Set Switch Detection.
In the Requirements Editor, examine the details of the requirement. This requirement describes the behavior of the Set switch when it is pressed. Keep the requirement selected in the Requirements Editor.
Check if there is already a test case for the switch behavior. To return to the metric results, at the top of the Model Testing Dashboard, click db_DriverSwRequest. The Tests Linked to Requirements section shows that one test case is not linked to requirements.
To see the unlinked test cases, in the Tests Linked to Requirements section, click Unlinked.
To open the test in the Test Manager, in the table, click the test case Set button. The test case verifies the behavior of the Set switch. If there were not already a test case for the switch, you would add one by using the Test Manager.
Link the test case to the requirement. In the Test Manager, for the test case, expand the Requirements section. Click Add > Link to Selected Requirement. The traceability link indicates that the test case Set button verifies the requirement Set Switch Detection.
The metric results in the dashboard reflect only the saved artifact files. To save the test file db_DriverSwRequest_Tests.mldatx, in the Test Browser, right-click db_DriverSwRequest_Tests and click Save.
Save the requirements file db_req_func_spec.slreqx. In the Requirements Editor, click the Save button.
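Both saves can also be scripted; a sketch using the Test Manager and Requirements Toolbox APIs, assuming the file names from this example:

```matlab
% Save the test file through the Test Manager API.
tf = sltest.testmanager.load('db_DriverSwRequest_Tests.mldatx');
saveToFile(tf);

% Save the requirement set through the Requirements Toolbox API.
reqSet = slreq.open('db_req_func_spec.slreqx');
save(reqSet);
```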
Next, update the metric data in the dashboard to see the effect of adding the link.
Update Metric Results in the Dashboard
Update the metric results in the Model Testing Dashboard so that they reflect the traceability link between the requirement and the test case.
To analyze the artifact changes in the Model Testing Dashboard, click Trace Artifacts. The button is enabled when there are changes in the project artifacts that the dashboard has not analyzed.
At the top of the dashboard, the Stale Metrics icon indicates that at least one metric widget shows stale data for the model. Widgets that show stale metric data appear highlighted in gray. To refresh the widgets, re-collect the metric data for the model by clicking Collect Results.
The Test Case Analysis widgets show that there are 11 remaining unlinked requirements. The Tests Linked to Requirements section shows that there are no unlinked tests. Typically, before running the tests, you investigate and address these testing traceability issues by adding tests and linking them to the requirements. For this example, leave the unlinked artifacts and continue to the next step of running the tests.
After you create and link unit tests that verify the requirements, run the tests to check that the functionality of the model meets the requirements. To see a summary of the test results and coverage measurements, use the widgets in the Test Result Analysis section of the dashboard. The widgets highlight testing failures and gaps. Use the metric results for the underlying artifacts to address the issues.
Perform Unit Testing
Run the test cases for the model by using the Test Manager. Save the results as an artifact in the project and review them in the Model Testing Dashboard.
Open the unit tests for the model in the Test Manager. In the Model Testing Dashboard, in the Artifacts panel, expand the model db_DriverSwRequest. Expand the Test Cases folder and double-click the test file.
In the Test Manager, click Run.
To use the test results in the Model Testing Dashboard, export the test results and save the file in the project. On the Tests tab, in the Results section, click Export. Name the results file
Results1.mldatx and save the file under the project root folder.
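Running the tests and exporting the results can also be scripted. A sketch, assuming the test file and results file names from this example; note that the results file must be saved inside the project for the dashboard to find it:

```matlab
% Load the test file and run all of its test cases.
tf = sltest.testmanager.load('db_DriverSwRequest_Tests.mldatx');
resultSet = sltest.testmanager.run;

% Export the results so that the dashboard can analyze them.
sltest.testmanager.exportResults(resultSet, 'Results1.mldatx');
```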
The Model Testing Dashboard detects that you exported the results and automatically updates the Artifacts panel to reflect the new results. The widgets in the Test Result Analysis section are highlighted in gray to indicate that they are showing stale data. To update the data in the dashboard widgets, click Collect Results.
Address Testing Failures and Gaps
In the model db_DriverSwRequest, the Model Test Status section indicates that one test failed and one test was disabled during the latest test run. Open the tests and fix these issues.
To view the disabled test, in the dashboard, click the Disabled widget. The table shows the disabled test cases for the model.
Open the disabled test in the Test Manager. In the table, click the test Decrement button hold.
Enable the test. In the Test Browser, right-click the test case and click Enabled. Save the test suite file.
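Enabling a test case can also be done through the Test Manager API. A sketch, assuming the test case name from this example and that the test case lives in the first test suite of the file:

```matlab
% Load the test file and locate the disabled test case by name.
tf = sltest.testmanager.load('db_DriverSwRequest_Tests.mldatx');
suites = getTestSuites(tf);
testCases = getTestCases(suites(1));  % assumes the first suite holds it

for tc = testCases
    if strcmp(tc.Name, 'Decrement button hold')
        tc.Enabled = true;  % re-enable the test case
    end
end

% Persist the change to the .mldatx file.
saveToFile(tf);
```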
To view the failed test, in the dashboard, click the Failed widget.
Open the failed test in the Test Manager. In the table, click the test.
Examine the test failure in the Test Manager. You can determine if you need to update the test or the model by using the test results and links to the model. For this example, instead of fixing the failure, continue on to examine test coverage.
Check whether the tests that you ran fully exercised the model design by using the coverage metrics. For this example, the Model Coverage section of the dashboard indicates that some decisions in the model were not covered. Point to the Decision bar in the widget to see the percentage of decision coverage achieved. For this example, the tests covered 86.4% of the decisions, and another 4.55% of the decisions were justified in a coverage filter, leaving about 9% of the decisions uncovered.
View the decision coverage details. Click the Decision bar.
In the table, expand the model artifact. The table shows the test case results for the model and the results file that contains them. Open the results file Results1.mldatx in the Test Manager.
To see detailed coverage results, open the model in the Coverage perspective. In the Test Manager, in the Aggregated Coverage Results section, in the Analyzed Model column, click the name of the model.
Coverage highlighting on the model shows the points that were not covered by the test cases. For a point that is not covered, add a test that covers it. Find the requirement that is implemented by the model element or, if there is none, add a requirement for it. Link the new test case to the requirement. If the point should not be covered, justify the missing coverage by using a filter. For this example, do not fix the missing coverage.
Once you have updated the unit tests to address failures and gaps, run the tests and save the results. Then examine the results by collecting the metrics in the dashboard.
In a project with many artifacts and traceability connections, you can monitor the status of the design and testing artifacts whenever there is a change to a file in the project. After you change an artifact, check if there are downstream testing impacts by updating the tracing data and metric results in the dashboard. Use the tables to find and fix the affected artifacts. Track your progress by updating the dashboard widgets until they show that the model testing quality meets the standards for the project.