The performance test interface leverages the script, function, and class-based unit testing interfaces. You can perform qualifications within your performance tests to ensure correct functional behavior while measuring code performance. Also, you can run your performance tests as standard regression tests to ensure that code changes do not break them.
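As a minimal sketch of a qualification inside a performance test, a section of a script-based test can call assert; the file name, section title, and computation here are illustrative:

```matlab
% In a file such as preallocationTest.m (hypothetical name); each section is measured.
%% Fill a preallocated vector
vec = zeros(1, 1e5);
for n = 1:1e5
    vec(n) = n;
end
assert(vec(end) == 1e5)  % qualification: the test fails if the code misbehaves
```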
This table indicates what code is measured for the different types of tests.
| Type of Test | What Is Measured | What Is Excluded |
| --- | --- | --- |
| Script-based | Code in each section of the script | Code in the shared variables section |
| Function-based | Code in each test function | Code in setup and teardown functions |
| Class-based | Code in each method tagged with the Test attribute | Code in methods with the TestMethodSetup, TestMethodTeardown, TestClassSetup, or TestClassTeardown attribute |
| Class-based deriving from matlab.perftest.TestCase | Code between calls to startMeasuring and stopMeasuring | Setup and teardown code, and code outside the measurement boundary |
| Class-based deriving from matlab.perftest.TestCase | Code inside each keepMeasuring-while loop | Setup and teardown code, and code outside the loop |
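As a sketch of the function-based row, setup code runs outside the measured test functions; the function names and computation are illustrative:

```matlab
function tests = zerosTest
% Function-based performance test; collect all local test functions.
tests = functiontests(localfunctions);
end

function setupOnce(testCase)
% Shared setup goes here; this code is excluded from the measurement.
end

function testMatrixCreation(testCase)
% Measured: everything inside this test function.
A = zeros(1000);
assert(isequal(size(A), [1000 1000]))
end
```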
You can create two types of time experiments.
A frequentist time experiment collects a variable number of measurements to achieve a specified margin of error and confidence level. Use a frequentist time experiment to define statistical objectives for your measurement samples. Generate this experiment using the runperf function or the limitingSamplingError static method of the TimeExperiment class.
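For instance, a frequentist experiment with explicit objectives can be constructed as follows; the numeric values are illustrative, not the defaults:

```matlab
% Construct a frequentist time experiment; all name-value settings are optional.
experiment = matlab.perftest.TimeExperiment.limitingSamplingError( ...
    'NumWarmups', 2, ...                % warm-up runs before measuring
    'MinSamples', 4, ...                % lower bound on sample count
    'MaxSamples', 32, ...               % upper bound on sample count
    'RelativeMarginOfError', 0.10, ...  % target 10% relative margin of error
    'ConfidenceLevel', 0.95);           % at a 95% confidence level
```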
A fixed time experiment collects a fixed number of measurements. Use a fixed time experiment to measure first-time costs of your code or to take explicit control of your sample size. Generate this experiment using the withFixedSampleSize static method of the TimeExperiment class.
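For instance, with illustrative values:

```matlab
% Collect exactly 10 measurement samples after 4 warm-up runs.
experiment = matlab.perftest.TimeExperiment.withFixedSampleSize( ...
    10, 'NumWarmups', 4);
```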
This table summarizes the differences between the frequentist and fixed time experiments.
| | Frequentist time experiment | Fixed time experiment |
| --- | --- | --- |
| Warm-up measurements | 4 by default, but configurable through limitingSamplingError | 0 by default, but configurable through withFixedSampleSize |
| Number of samples | Between 4 and 256 by default, but configurable through limitingSamplingError | Defined during experiment construction |
| Relative margin of error | 5% by default, but configurable through limitingSamplingError | Not applicable |
| Confidence level | 95% by default, but configurable through limitingSamplingError | Not applicable |
| Framework behavior for invalid test result | Stops measuring a test and moves to the next one | Collects the specified number of samples |
If your class-based tests derive from matlab.perftest.TestCase instead of matlab.unittest.TestCase, then you can use the startMeasuring and stopMeasuring methods or the keepMeasuring method multiple times to define boundaries for performance test measurements. If a test method has multiple calls to startMeasuring and stopMeasuring, or multiple keepMeasuring-while loops, then the performance framework accumulates and sums the measurements. The performance framework does not support nested measurement boundaries. If you use these methods incorrectly in a Test method and run the test as a TimeExperiment, then the framework marks the measurement as invalid. Also, you can still run these performance tests as unit tests. For more information, see Test Performance Using Classes.
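A minimal sketch of both kinds of measurement boundary, assuming a hypothetical SortTest class:

```matlab
classdef SortTest < matlab.perftest.TestCase
    methods (Test)
        function testExplicitBoundary(testCase)
            x = rand(1, 1e6);                 % setup: excluded from the measurement
            testCase.startMeasuring();
            s = sort(x);                      % only this code is timed
            testCase.stopMeasuring();
            testCase.verifyTrue(issorted(s))  % qualification: excluded
        end
        function testKeepMeasuring(testCase)
            x = rand(1, 1e6);
            while testCase.keepMeasuring      % framework controls iteration count
                s = sort(x);                  % code inside the loop is timed
            end
            testCase.verifyTrue(issorted(s))
        end
    end
end
```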
There are two ways to run performance tests (see the sketch after this list):

- Use the runperf function to run the tests. This function uses a variable number of measurements to reach a sample mean with a 0.05 relative margin of error within a 0.95 confidence level. It runs the tests four times to warm up the code and between 4 and 256 times to collect measurements that meet the statistical objectives.
- Generate an explicit test suite using the testsuite function or the methods in the TestSuite class, and then create and run a time experiment:
  - Use the withFixedSampleSize method of the TimeExperiment class to construct a time experiment with a fixed number of measurements. You can specify a fixed number of warm-up measurements and a fixed number of samples.
  - Use the limitingSamplingError method of the TimeExperiment class to construct a time experiment with specified statistical objectives, such as margin of error and confidence level. Also, you can specify the number of warm-up measurements and the minimum and maximum number of samples.
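Both workflows, sketched with the hypothetical SortTest class from above:

```matlab
% Way 1: run with the default statistical objectives.
results = runperf('SortTest');

% Way 2: build a suite explicitly and run it under a time experiment.
suite = testsuite('SortTest');
experiment = matlab.perftest.TimeExperiment.withFixedSampleSize(10);
results = run(experiment, suite);
```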
You can run your performance tests as regression tests. For more information, see Run Tests for Various Workflows.
In some situations, the MeasurementResult for a test result is marked invalid. A test result is marked invalid when the performance testing framework sets the Valid property of the MeasurementResult to false. This invalidation occurs if your test fails or is filtered. Also, if your test incorrectly uses the startMeasuring and stopMeasuring methods of matlab.perftest.TestCase, then the MeasurementResult for that test is marked invalid.
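You can inspect the Valid property on the results an experiment returns; a short sketch, assuming results from one of the runs above:

```matlab
% Identify invalid measurements in an array of MeasurementResult objects.
validMask = [results.Valid];          % logical row vector, one entry per result
invalidResults = results(~validMask)  % results the framework marked invalid
```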
When the performance testing framework encounters an invalid test result, it behaves differently depending on the type of time experiment:

- If you create a frequentist time experiment, then the framework stops measuring for that test and moves to the next test.
- If you create a fixed time experiment, then the framework continues collecting the specified number of samples.