Compare expected results and actual results in Simulink

How can I use generated test cases to compare expected and actual results in Simulink?
I want to test custom C code using Simulink. I used the Simulink Test Manager to import the external C code, then used Simulink Design Verifier to generate test cases. From here, how can I compare the expected and actual results for each unit?
help me out :)
  3 Comments
Udara Darshana on 4 Jan 2023
Thanks for the fast response.
This is the first time I am using the Simulink Test and Design Verifier tools, so I do not yet have deep knowledge of them.
I think what you were saying in the first line of your comment is the thing I am looking for. But I got some additional questions related to that.
1. Can we get a pass/fail result in the report if we set the expected results for the generated test cases?
2. Is there any way to export generated test cases to the "Logical and Temporal Assessments" section of the Test Manager?
3. If we set the expected value as explained in the link you provided, is that the same thing the "Logical and Temporal Assessments" section of the Test Manager does?
Thank you
Pat Canny on 4 Jan 2023
Just to clarify, the optional Expected Output is simply the values of the outputs produced by the automatically generated inputs. The generated inputs are meant to achieve given coverage objectives, and often need to be modified before they can be considered requirements-based tests. You would still need to create "Expected Results" (for instance, via an assertion) which would use the Expected Output as its input. For instance, in a simple model where the output is simply a Gain of 2 applied to the input, the Expected Output for Outport 1 would be 2. That might be correct, but I would need to create an assertion to be sure.
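As a rough sketch of what such an assertion-style check could look like in MATLAB (the model name 'myGainModel', the logged signal name 'u', and the tolerance are all my own placeholder assumptions, not part of any generated artifact):

```matlab
% Hypothetical sketch: compare the actual simulation output against the
% expected result (2*u for a Gain of 2). All names here are assumptions.
simOut = sim('myGainModel', 'SaveOutput', 'on', 'SignalLogging', 'on');
u = simOut.logsout.get('u').Values.Data;   % logged input signal
y = simOut.yout{1}.Values.Data;            % actual output at Outport 1
assert(max(abs(y - 2*u)) <= 1e-6, ...
    'Actual output does not match the expected result 2*u');
```

In a real workflow this comparison would usually live in a Test Manager test case (e.g. as baseline criteria or an assessment) rather than a standalone script, but the idea is the same: the expected value is computed independently and checked against the simulated output.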
Note: this workflow is uncommon and quite cumbersome.
In almost all cases, users export the generated tests (often to Simulink Test) and then create the Expected Results there. Again, the tests are generated with respect to coverage, so they may need to be modified to determine how they relate to requirements. Users often then simulate the model and determine whether the test passed.
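For reference, that export step can be done programmatically; a minimal sketch, assuming the Design Verifier data file and test-file names below (check the documentation for your release, as the exact options vary):

```matlab
% Hypothetical sketch: import Design Verifier results into a Simulink
% Test file. 'myModel_sldvdata.mat' and 'GeneratedTests.mldatx' are
% placeholder names for this example.
sltest.import.sldvData('myModel_sldvdata.mat', ...
    'TestFileName', 'GeneratedTests.mldatx');
```

The resulting test file can then be opened in the Test Manager, where expected results and assessments are added to the imported test cases.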
Hopefully that answers Question #1.
Regarding Question #2 - this is not supported, because the generated tests do not have a notion of pass/fail (as it is only the inputs which are actually generated).
Regarding Question #3 - we do not currently support automatically generating Logical and Temporal Assessments. The more common approach is to either define the assessments manually or programmatically using the APIs.
Do you have a point of contact with our sales team, by chance? It would be good to understand your workflow to help answer any questions you may not know to ask ;-)
If not, would you mind reaching out to MathWorks Technical Support?


Answers (0)

Release

R2022b
