
Test Sections

To view or edit the test sections, select a test file, suite, or case in the Test Browser pane. For information on the types of test cases, see Introduction to the Test Manager.

Select Releases for Testing

You can select MATLAB® releases installed on your system to create and run tests in. Use this preference to specify the MATLAB installations that you want to make available for testing with Test Manager. You can use releases from R2011b forward. The releases you add become available to select from the Select releases for simulation list when you design the test.

You can add releases to the list and delete them. You cannot delete the release you started MATLAB in.

To add a release, click Add, navigate to the location of the MATLAB installation you want to add, and click OK.

For more information, see Run Tests in Multiple Releases.

Set Preferences to Display Test Sections

To simplify the Test Manager layout, you can select the sections of the test case, test suite, or test file that appear in the Test Manager. Test case sections that were modified appear in the Test Manager, regardless of the preference setting.

  1. In the toolstrip, click Preferences.

  2. Select the Test File, Test Suite, or Test Case tab.

  3. Select the sections to show, or clear the sections to hide. To show only the sections that have settings specified, clear all selections in the Preferences dialog box.

  4. Click OK.

Also see sltest.testmanager.getpref and sltest.testmanager.setpref.

Select releases for simulation

Select the releases that you want available for running test cases. Build the list of releases using the Release pane in the Test Manager Preferences dialog box. For more information, see Run Tests in Multiple Releases.

Tags

Tag your tests with useful categorizations, such as safety, logged-data, or burn-in. Filter tests using these tags when executing tests or viewing results. See Filter Test Execution and Results.

Description

In this section, add descriptive text to your test case, test suite, or test file.

Requirements

If you have a Simulink® Requirements™ license, you can create, edit, and delete requirements traceability links for a test case, test suite, or test file. To add requirements links:

  1. Click Add.

  2. In the Link Editor dialog box, click New to add a requirement link to the list.

  3. Type the name of the requirement link in the Description box.

  4. Click Browse and locate the requirement file. Click Open. For more information on supported requirements document types, see Supported Requirements Document Types (Simulink Requirements).

  5. Click OK. The requirement link appears in the Requirements list if a document is specified in the Link Editor.

If you have a section of a document open and ready to add as a requirement, then you can add it quickly. Highlight the section, click the Add arrow, and select the section type.

For more information about the Link Editor, see Requirements Traceability Link Editor (Simulink Requirements).

System Under Test

Specify the model you want to test in the System Under Test section. To use an open model in the currently active Simulink window, click the Use current model button.

Note

The model must be available on the path to run the test case. You can set the path programmatically using the preload callback. See Callbacks.
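
For example, a Pre-Load callback can add the folder that contains the model to the path before the test runs. This is a minimal sketch; the folder name models/fuelsys is hypothetical.

% Pre-Load callback sketch: make the system under test reachable on the path.
% Replace the hypothetical folder with the one that contains your model.
addpath(fullfile(pwd,'models','fuelsys'));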

Specifying a new model in the System Under Test section can cause the model information to be out of date. To update the model test harnesses, Signal Builder groups, and available configuration sets, click the Refresh button.

Test Harness

If you have a test harness in your system under test, then you can select the test harness to use for the test case. If you have added or removed test harnesses in the model, click the Refresh button to view the updated test harness list.

For more information about using test harnesses, see Refine, Test, and Debug a Subsystem.

Simulation Settings

You can override the System Under Test simulation settings such as the simulation mode, start time, stop time, and initial state.

Parameter Overrides

In this section, you can specify parameter values in the test case to override the parameter values in the model workspace, data dictionary, or base workspace. Parameters are grouped into sets. You can turn parameter sets and individual parameter overrides on or off by using the check box next to the set or parameter.

To add a parameter override:

  1. Click Add.

    A dialog box opens with a list of parameters. If the list of parameters is not current, click the Refresh button in the dialog box.

  2. Select the parameter you want to override.

  3. To add the parameter to the parameter set, click OK.

  4. Enter the override value in the parameter Override Value column.

To restore the default value of a parameter, clear the value in the Override Value column and press Enter.

You can also add a set of parameter overrides from a MAT-file. Click the Add arrow and select Add File to create a parameter set from a MAT-file.
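
For example, you can build such a MAT-file at the command line. The parameter names Kp and Ki below are hypothetical; use the variables your model actually references.

% Save hypothetical controller gains to a MAT-file, then click the Add arrow
% and select Add File in the Parameter Overrides section to import them.
Kp = 2.5;
Ki = 0.1;
save('controller_overrides.mat','Kp','Ki');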

For an example that uses parameter overrides, see Overriding Model Parameters in a Test Case.

Callbacks

Test-File Level Callbacks

Two callback scripts are available in each test file that execute at different times during a test:

  • Setup runs before the test file executes.

  • Cleanup runs after the test file executes.

Test-Suite Level Callbacks

Two callback scripts are available in each test suite that execute at different times during a test:

  • Setup runs before the test suite executes.

  • Cleanup runs after the test suite executes.

Test-Case Level Callbacks

Three callback scripts are available in each test case that execute at different times during a test:

  • Pre-load runs before the model loads and before the model callbacks.

  • Post-load runs after the model loads and the PostLoadFcn model callback.

  • Cleanup runs after simulations and model callbacks.

To run a single callback script, click the Run button above the corresponding script.

See Test Manager Limitations for the limitations of callback scripts in test cases. For information on Simulink model callbacks, see Model Callbacks (Simulink).

You can use these predefined variables in the test case callbacks (a usage sketch follows this list):

  • sltest_bdroot available in Post-Load: The model simulated by the test case. The model can be a harness model.

  • sltest_sut available in Post-Load: The system under test. For a harness, it is the component under test.

  • sltest_isharness available in Post-Load: Returns true if sltest_bdroot is a harness model.

  • sltest_simout available in Cleanup: Simulation output produced by simulation.

  • sltest_iterationName available in Pre-Load, Post-Load, and Cleanup: Name of the currently executing test iteration.
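
For example, a Cleanup callback can report which iteration ran and inspect the simulation output. This is a sketch; the dataset and signal names Signals_Req1_3 and PhiRef reuse the custom criteria example later on this page and stand in for whatever your model logs.

% Cleanup callback sketch: report the iteration and check the logged output.
disp(['Finished iteration: ' sltest_iterationName]);
phiRef = sltest_simout.get('Signals_Req1_3').get('PhiRef').Values;
fprintf('Final PhiRef value: %g\n', phiRef.Data(end));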

Inputs

For test inputs, you can use inputs from Signal Builder groups in the model, or you can use external inputs from MAT-files or Microsoft® Excel® files.

  • To use inputs from a Signal Builder block group in the model or test harness, select the Signal Builder Group check box, and then select the group from the list.

  • To use external inputs, click Add in the External Inputs table, and then select a MAT-file or Microsoft Excel file to import as inputs to your test case. In the table, select the file you want to run when the test case executes (see the sketch after this list).

    For more information on using external files as inputs, see Use External Inputs in Test Cases. For information about the file format for Microsoft Excel files in Test Manager, see Specify Microsoft Excel File Format for Signal Data.
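
As a sketch of creating a MAT-file input set at the command line, the following saves a timeseries. The variable and file names are hypothetical, and the exact format your model expects depends on its root inports; see Use External Inputs in Test Cases for the supported formats.

% Create a hypothetical input timeseries and save it to a MAT-file.
% In the Inputs section, click Add and select this file as an external input set.
t = (0:0.01:10)';
u = timeseries(sin(t),t);
u.Name = 'u1';
save('test_inputs.mat','u');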

Edit Input Data Files in Test Manager

From the Test Manager, you can edit your input data files.

To edit a file, select the file and click Edit. You can then edit the data in the signal editor for MAT-files or Microsoft Excel for Excel files.

To learn about the syntax for Excel files, see Specify Microsoft Excel File Format for Signal Data.

Simulation Outputs

Use the Simulation Outputs section to add signal outputs to your test results. Signals logged in your model or test harness appear in the results. You can add individual signals to log or add a signal set.

Adding signals to log to the test case does not alter the model or test harness.

  1. Under Simulation Outputs, click Add.

  2. In the system under test, select the signals you want to log.

  3. In the Signal Selection dialog box, select the check box next to the signals whose output you want to capture, and click Add.

To add a signal set, click the Add arrow and select Signal Set.

Configuration Setting Overrides

In the test case, you can specify configuration settings that differ from the settings in the model. Setting the configuration settings in the test case enables you to try different configurations without modifying your model.

Simulation 1 and Simulation 2

These sections appear in equivalence test cases. Use them to specify the details of the two simulations that you want to compare. Enter the system under test, the test harness if applicable, and simulation setting overrides under Simulation 1. You can then click Copy settings from Simulation 1 under Simulation 2 to use the Simulation 1 settings as a starting point for your second set of simulation settings.

For the test to pass, Simulation 1 and Simulation 2 must log the same signals.

Use these sections with the Equivalence Criteria section to define the premise of your test case. For an example of an equivalence test, see Test Two Simulations for Equivalence.

Equivalence Criteria

This section appears in equivalence test cases. The equivalence criteria are a set of signal data to compare between Simulation 1 and Simulation 2. Specify tolerances to determine the pass-fail criteria of the test. You can specify absolute, relative, leading, and lagging tolerances for the signals.

To specify tolerances, first click Capture to run the system under test in Simulation 1 and add signals marked for logging to the table. Specify the tolerances in the table.

After you capture the signals, you can select signals from the table to narrow your results. If you do not select signals under Equivalence Criteria, running the test case compares all the logged signals in Simulation 1 and Simulation 2.

For an example of an equivalence test case, see Test Two Simulations for Equivalence.

Baseline Criteria

The Baseline Criteria section appears in baseline test cases. When a baseline test case executes, the Test Manager captures data from the signals marked for logging in the model and compares it to the baseline criteria.

Capture Baseline Criteria

To capture logged signal data from the system under test to use as the baseline criteria, click Capture. Then follow the prompts in the Capture Baseline dialog box. Capturing the data compiles and simulates the system under test and stores the output from the logged signals to the baseline. For a baseline test example, see Test Model Output Against a Baseline.

You can save the signal data to a MAT-file.

You can capture the baseline criteria using the current release or another release installed on your system. Add the releases you want to use in the Test Manager preferences, and then select the releases you want available in your test case using the Select releases for simulation option. When you run the test, you can compare the baseline against the release you created it in or against another release. For more information, see Run Tests in Multiple Releases.

Specify Tolerances

You can specify tolerances to determine the pass-fail criteria of the test case. You can specify absolute, relative, leading, and lagging tolerances for individual signals or the entire baseline criteria set.

After you capture the baseline, the baseline file and its signals appear in the table. In the table, you can set the tolerances for the signals. To see tolerances used in an example for baseline testing, see Test Model Output Against a Baseline.

Import File as Baseline

By clicking Add, you can select an existing file as a baseline, either from another test case or from an external file. You can import MAT-files and Microsoft Excel files as the baseline. Format Microsoft Excel files as described in Specify Microsoft Excel File Format for Signal Data.

To learn how to import MAT-files and Microsoft Excel files as the baseline, see Use External Inputs in Test Cases.

Update Signal Data in Baseline

You can edit the signal data in your baseline, for example, if your model changed and you expect different values. To open the signal editor or the Microsoft Excel file for editing, select the baseline file from the list and click Edit. See Manually Update Signal Data in a Baseline.

You can also update your baseline when you examine test failures in the data inspector view. See Examine Test Failures and Modify Baselines.

Custom Criteria

This section includes an embedded MATLAB editor to define custom pass/fail criteria for your test. Select the function customCriteria(test) check box to enable the criteria script in the editor. Custom criteria operate outside of model run time; the script evaluates after model simulation.

Common uses of custom criteria include verifying signal characteristics or verifying test conditions. MATLAB Unit Test qualifications provide a framework for verification criteria. For example, this custom criteria script gets the last value of the signal PhiRef and verifies that it equals 0:

% Get the last value of PhiRef from the dataset Signals_Req1_3
lastValue = test.sltest_simout.get('Signals_Req1_3').get('PhiRef').Values.Data(end);

% Verify that the last value equals 0
test.verifyEqual(lastValue,0);

See Apply Custom Criteria to Test Cases. For a list of MATLAB Unit Test qualifications, see Types of Qualifications (MATLAB).
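
As another sketch, you can apply other qualifications, such as a bounds check. The dataset and signal names again come from the example above, and the bound of 10 is an arbitrary assumption.

% Verify that the magnitude of PhiRef stays within an assumed bound of 10
phi = test.sltest_simout.get('Signals_Req1_3').get('PhiRef').Values.Data;
test.verifyLessThanOrEqual(max(abs(phi)),10,'PhiRef magnitude exceeded the assumed bound');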

You can also define plots in the Custom Criteria section. See Create, Store, and Open MATLAB Figures.

Iterations

Use this test case section to generate test iterations for multiple combinations of test settings. Iterations are helpful for Monte Carlo or parameter sweep tests. For more information about test iterations, see Run Combinations of Tests Using Iterations.
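
For instance, a scripted iteration can sweep over Signal Builder groups. This sketch follows the scripted-iteration pattern described in Run Combinations of Tests Using Iterations; the group names are hypothetical, and sltestiteration, setTestParam, addIteration, and sltest_testCase are assumed from that workflow, so treat that topic as the authoritative reference for the syntax.

% Scripted iteration sketch: one iteration per (hypothetical) Signal Builder group
groups = {'Nominal','HighLoad'};
for k = 1:numel(groups)
    testItr = sltestiteration;                             % create an iteration object
    setTestParam(testItr,'SignalBuilderGroup',groups{k});  % assign the input group
    addIteration(sltest_testCase,testItr);                 % add it to this test case
end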

Coverage Settings

Use this test section to configure coverage collection for test files, test suites, and test cases. For more information about collecting coverage in your test, see Collect Coverage in Tests.

Test File Options

Close open figures at the end of execution

When your tests generate figures, select this option to clear the working environment of figures after the test execution completes.

Store MATLAB figures

Select this option to store figures generated during the test with the test file. You can enter MATLAB code that creates figures and plots as a callback or in the test case Custom Criteria section. See Create, Store, and Open MATLAB Figures.
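
For example, a Cleanup callback can create a figure from logged data so that it is stored with the results. As before, the dataset and signal names are placeholders for whatever your model logs.

% Create a figure from the simulation output so it is stored with the test results.
figure('Name','PhiRef trace');
plot(sltest_simout.get('Signals_Req1_3').get('PhiRef').Values);
title('PhiRef over time');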

Generate report after execution

Select Generate report after execution to create a report after the test executes. Selecting this option displays report options that you can set. The settings are saved with the test file.

For detailed reporting information, see Export Test Results and Generate Reports and Customize Test Reports.
