
Run Tests in Multiple Releases

If you have more than one release of MATLAB® installed, you can run tests in multiple releases. This option lets you run tests in releases that do not have Simulink® Test™, starting with R2011b.

While you can run test cases on models in previous releases, the release you run the test in must support the features of the test. If, for example, your test involves test harnesses or test sequences, the release must support those features for the test to run.

Before you can create tests that use additional releases, add them to your list of available releases using Test Manager preferences. See Add Releases Using Test Manager Preferences.

Considerations for Testing in Multiple Releases

Testing Models in Previous or Later Releases

Your model or test harness must be compatible with the MATLAB version running your test.

  • To test a model created in a newer version of MATLAB in a previous version, export the model to the previous version and simulate the exported model with the previous MATLAB version. For more information, see the information on exporting a model in Save a Model (Simulink).

  • To test a model in a more recent version of MATLAB, consider using the Upgrade Advisor to upgrade your model for the more recent release. For more information, see Consult the Upgrade Advisor (Simulink).
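
If you work with models from the command line, the export and upgrade steps above can be sketched as follows. This is a minimal sketch, not part of the documented procedure: `vdp` and the target release string are example values, and the exact release strings that `save_system` accepts can vary by release.

```matlab
% Export a model so that an earlier MATLAB release can open and simulate it.
% 'vdp' is an example model; substitute your own model name.
load_system('vdp')
save_system('vdp','vdp_previous.slx','ExportToVersion','R2016b')

% To prepare a model for a newer release instead, open the Upgrade Advisor:
upgradeadvisor('vdp')
```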

Test Case Compatibility with Previous Releases

When testing in multiple releases, each MATLAB version must support the features your test case uses. A previous release cannot run test case features that were introduced after it. For example:

  • Test harnesses are supported for R2015a and later.

  • The Test Sequence block is supported for R2015a and later.

  • verify() statements are supported for R2016b and later.

Test Case Limitations with Multiple Release Testing

Certain features are not supported for multiple release testing:

  • Parallel test execution

  • Running test cases with the MATLAB Unit Test framework

  • Real-time tests

  • Input data defined in an external Excel® document

  • Coverage collection in the Test Manager

  • Generating additional tests using Simulink Design Verifier™ to increase coverage

  • Including custom figures from test case callbacks

Add Releases Using Test Manager Preferences

Use Test Manager preferences to add to the list of releases in which you can run tests. You can delete a release that you added to the list, but you cannot delete the release from which you are running Test Manager.

  1. In the Test Manager toolstrip, click Preferences.

  2. In the Preferences dialog box, click Release. The Release pane lists the release you are running Test Manager from.

  3. In the Release pane, click Add.

  4. Browse to the location of the release you want to add and click OK.

Run Baseline Tests in Multiple Releases

When you run a baseline test with Test Manager set up for multiple releases, you can:

  • Create the baseline in the release you want to see the results in, for example, to try different parameters and apply tolerances.

  • Create the baseline in one release and run it in another release. Using this approach you can, for example, determine whether a newer release produces the same simulation outputs as an earlier release.

To create the baseline:

  1. Make sure that the release has been added to your Test Manager preferences.

  2. Create a test file, if necessary, and add a baseline test case to it.

  3. In the test case, from the Select releases for simulation list, select the releases in which you want to run the test case.

  4. Under System Under Test, enter the name of the model you want to test.

  5. Set up the rest of the test.

  6. Capture the baseline. Under Baseline Criteria, click Capture.

  7. Select the release you want to use for the baseline simulation. Specify the file format, then name and save the baseline.

For more information about capturing baselines, see Capture Baseline Criteria.

After you create the baseline, you can run the test in a release available in the Test Manager. Each release you select generates a set of results.

  1. In the test case, set Select releases for simulation to the releases you want to use to compare against your baseline. For example, select only the release for which you created the baseline to perform a baseline comparison against the same release.

  2. Specify the test options.

  3. From the toolstrip, click Run.

    For each release that you select when you run the test case, pass-fail results appear in the Results and Artifacts pane. For results from a release other than the one you are running Test Manager from, the release number appears in the name.
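
The baseline workflow above can also be sketched with the `sltest.testmanager` programmatic API. This is a hedged sketch rather than the documented multiple-release procedure: the file, test case, and model names are examples, and selecting a non-default release for the simulation may still require the Test Manager preference and UI steps described above.

```matlab
% Sketch: create a baseline test case, capture criteria, and run it.
% File, test case, and model names are examples.
tf = sltest.testmanager.TestFile('myBaselineTests.mldatx');
ts = getTestSuites(tf);
tc = createTestCase(ts,'baseline','Multiple Release Baseline');
setProperty(tc,'Model','myModel');

% Capture baseline data into a MAT-file; the true flag adds the
% captured criteria to the test case.
captureBaselineCriteria(tc,'baseline1.mat',true);

% Run the test case; pass/fail results appear in Results and Artifacts.
resultObj = run(tc);
```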

Run Equivalence Tests in Multiple Releases

When you run an equivalence test, you compare two simulations from the same release to see if differences in the simulations are within the specified tolerance.

  1. Make sure that the release has been added to your Test Manager preferences.

  2. Create a test file, if necessary, and add an equivalence test case to it.

  3. In the test case, from the Select releases for simulation list, select the releases in which you want to run the test case.

  4. Under System Under Test, enter the model you want to test.

  5. Set the values under Simulation 1 and Simulation 2 to use as the basis for testing.

  6. To set tolerances for the logged signals, under Equivalence Criteria, click Capture. Select the release you want to use for capturing the signals, and click OK. Clicking Capture copies the list of the signals being logged in Simulation 1. Then set the tolerances as desired.

  7. In the toolstrip, click Run.

    The test runs for each release you selected, running the two simulations in the same release and comparing the results for equivalence. For each release that you selected when you ran the test case, pass-fail results appear in the Results and Artifacts pane. For results from a release other than the one you are running Test Manager from, the release number appears in the name.
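
A comparable sketch for an equivalence test case, again using example names and assuming the `sltest.testmanager` API available in your release:

```matlab
% Sketch: equivalence test case comparing Simulation 1 and Simulation 2.
tf = sltest.testmanager.TestFile('myEquivalenceTests.mldatx');
ts = getTestSuites(tf);
tc = createTestCase(ts,'equivalence','Multiple Release Equivalence');

% Set the system under test for each of the two simulations.
setProperty(tc,'Model','myModel','SimulationIndex',1);
setProperty(tc,'Model','myModel','SimulationIndex',2);

% Copy the signals logged in Simulation 1 as equivalence criteria.
captureEquivalenceCriteria(tc);

resultObj = run(tc);
```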

Run Simulation Tests in Multiple Releases

Running a simulation test simulates the model in each release you select using the criteria you specify in the test case.

  1. Make sure that the release has been added to your Test Manager preferences.

  2. Create a test file, if necessary, and add a simulation test case template to it.

  3. In the test case, from the Select releases for simulation list, select the releases in which you want to run the test case.

  4. Under System Under Test, enter the model you want to test.

  5. Under Simulation Outputs, select the signals to log.

  6. In the toolstrip, click Run.

    The test runs, simulating for each release you selected. For each release, pass-fail results appear in the Results and Artifacts pane. For results from a release other than the one you are running Test Manager from, the release number appears in the name.
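
The simulation test can be sketched the same way. As before, this is an illustrative sketch with example names; the report call at the end is optional.

```matlab
% Sketch: simulation test case that simulates the model under the
% criteria specified in the test case.
tf = sltest.testmanager.TestFile('mySimulationTests.mldatx');
ts = getTestSuites(tf);
tc = createTestCase(ts,'simulation','Multiple Release Simulation');
setProperty(tc,'Model','myModel');

% Run the test case, then generate a PDF report of the results.
resultObj = run(tc);
sltest.testmanager.report(resultObj,'simResults.pdf');
```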
