In MATLAB unit testing, how to modify a property after the test suite is created

I would like to modify a property (not a TestParameter) after the test suite is created, but I am getting the error "No public field myparam exists for class matlab.unittest.Test". Here is what I am doing in terms of code. First I create a class unitclass.m:
classdef unitclass < matlab.unittest.TestCase
    properties (SetAccess = public)
        myparam = 2;
    end
    properties (TestParameter)
        test_param1 = struct('Low', 0.1, 'Medium', 0.5);
        test_param2 = struct('Cold', 10, 'Hot', 200);
    end
    methods (Test)
        function testError(testcase, test_param1, test_param2)
            output = test_param1 * test_param2;
            testcase.myparam % display the property value
            testcase.verifyLessThan(output, 20);
        end
    end
end
Then I create a custom unit test class, SDLTest.m, to modify my parameter:
classdef SDLTest < matlab.unittest.Test
    methods (Static = true)
        function this = set_myparam(this, myparam)
            for i = 1:length(this)
                this(i).myparam = myparam;
            end
        end
    end
end
When I try to execute the code
mySuite = matlab.unittest.TestSuite.fromClass(?unitclass)
results = SDLTest.set_myparam(mySuite,1)
I get the error "No public field myparam exists for class matlab.unittest.Test".
Help with this is greatly appreciated.
Thanks.
  4 Comments
Adam on 10 Dec 2014
In terms of why it doesn't work, I imagine it is because you have an array of matlab.unittest.Test instances, and MATLAB takes the base class as the type for the array. So when you try to call a function or set a property on the array that comes out of the TestSuite creation, it must be a function or property of the matlab.unittest.Test base class.
Since Andy works for MathWorks with a special interest in the unit testing aspect, I'll leave any possible solutions to him, as I am not fully clear about your setup!
Nadjib on 10 Dec 2014
Edited: Nadjib on 18 Dec 2014
Thanks for the comment, Adam. As I am new to unit testing, it took me some time yesterday to realise that once the test suite is created there is no way to modify the property I need. I would still like to know if there is a workaround to control things such as plotting when running the tests. I look forward to Andy's reply.


Accepted Answer

Andy Campbell on 10 Dec 2014
Edited: Andy Campbell on 11 Dec 2014
Hello Nadjib,
The reason you can't change the property the way you were trying to is because matlab.unittest.Test and matlab.unittest.TestCase are not in the same class hierarchy. The Test arrays are what give the TestRunner the knowledge of how to run the tests defined in the TestCase classes, but the runner constructs the TestCase instances during the test run.
It sounds to me like you may be interested in the log method of TestCase, which was added to the framework in R2014b.
Using this feature, if you write a "PlotDiagnostic" similar to the one shown in this post, you could generate the plots only when you run the test above a certain verbosity threshold. This would look like:
classdef unitclass < matlab.unittest.TestCase
    properties (TestParameter)
        test_param1 = struct('Low', 0.1, 'Medium', 0.5);
        test_param2 = struct('Cold', 10, 'Hot', 200);
    end
    methods (Test)
        function testError(testcase, test_param1, test_param2)
            output = test_param1 * test_param2;
            testcase.log(3, PlotDiagnostic(output)); % can be another verbosity
            testcase.verifyLessThan(output, 20);
        end
    end
end
Then you could run it as follows:
>> % run it normally (& easily)
>> runtests('unitclass')
>>
>> % create the suite and run it at a higher verbosity level to show the plots
>> import matlab.unittest.TestSuite
>> import matlab.unittest.TestRunner
>>
>> suite = TestSuite.fromClass(?unitclass)
>> runner = TestRunner.withTextOutput('Verbosity', 3)
>> runner.run(suite);
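For reference, here is a minimal sketch of what such a PlotDiagnostic might look like; the constructor signature and plotting details are illustrative assumptions, and the post linked above has a fuller version:
classdef PlotDiagnostic < matlab.unittest.diagnostics.Diagnostic
    properties (SetAccess = immutable)
        Output % value captured at the log call
    end
    methods
        function diag = PlotDiagnostic(output)
            diag.Output = output;
        end
        function diagnose(diag)
            % Produce the plot as the diagnostic action and record a
            % short message.
            figure;
            plot(diag.Output, 'o');
            title(sprintf('Logged output: %g', diag.Output));
            % DiagnosticResult was the property name in R2014b-era
            % releases; newer releases use DiagnosticText.
            diag.DiagnosticResult = sprintf('Plotted output %g', diag.Output);
        end
    end
end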
Does that help?
  5 Comments
Andy Campbell on 12 Dec 2014
I'll see if I can present a few options. However, some of these (especially without the log method from R2014b!) are advanced workflows. If you don't mind learning about them, then that is great; it can only help. However, you may also choose to get well acquainted with the framework and unit testing in general before extending the framework in advanced ways.
Nadjib on 12 Dec 2014
I would love to learn, especially from the best :) I will probably have access to R2014b in the coming weeks, but I don't mind learning a few new tricks when it comes to unit testing. I am sure you are busy, so it doesn't have to be detailed; just some pointers will be appreciated. Thank you.


More Answers (1)

Andy Campbell on 12 Dec 2014
Edited: Andy Campbell on 12 Dec 2014
If you do not yet have R2014b, there are a few things you can do to at least get something close to this behavior.
One thing you could do, for example, is write a PlotDiagnostic similar to the one shown in the Stack Overflow answer. Note you could make it behave like Hugenoet's answer, where the plots are not automatically created by the diagnostic; instead it creates hyperlinks that generate the plots when clicked. An additional feature you could then add is a plugin that adds passing and failing listeners to different qualification events (like VerificationFailed, VerificationPassed, AssertionPassed, etc.). The result is that the plugin will be notified whenever a verification/assertion/etc. is encountered, and it will receive the PlotDiagnostic subclass instance. The plugin, after confirming it is a PlotDiagnostic, could then use the data on the PlotDiagnostic instance to produce plots whenever the plugin is installed. Running it when you want to see the plots would then look like:
>> suite = TestSuite.fromClass(?unitclass)
>> runner = TestRunner.withTextOutput;
>> runner.addPlugin(NadjibsPlottingPlugin);
>> runner.run(suite);
What would happen here is that the links would be printed by one of the standard framework plugins whenever there was a failure, so you could click the links to see individual plots. However, if you add your own plugin, you could then also unconditionally produce the plots during the test run. You could even mimic a "log" call by sending an always-passing qualification, like so:
testCase.verifyTrue(true, PlotDiagnostic(actual, expected));
More information on how to create a plugin can be found here and here. In your plugin you would need to implement one or more of the "create*" plugin methods so you could add both passed and failed listeners to verification/assertion/fatal assertion calls. Then in the listener callback you would want to check the TestDiagnostic property of the QualificationEventData and if it is one of your custom PlotDiagnostics produce the plot at that time.
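Putting those pieces together, a minimal sketch of such a plugin might look like the following; it assumes the PlotDiagnostic exposes a separate plot-producing method, called producePlot here purely for illustration:
classdef NadjibsPlottingPlugin < matlab.unittest.plugins.TestRunnerPlugin
    methods (Access = protected)
        function testCase = createTestMethodInstance(plugin, pluginData)
            % Create the TestCase as the framework normally would, then
            % attach listeners to both passing and failing verifications.
            testCase = createTestMethodInstance@matlab.unittest.plugins.TestRunnerPlugin(plugin, pluginData);
            testCase.addlistener('VerificationPassed', @(~, evd) plugin.plotIfPlotDiagnostic(evd));
            testCase.addlistener('VerificationFailed', @(~, evd) plugin.plotIfPlotDiagnostic(evd));
        end
    end
    methods (Access = private)
        function plotIfPlotDiagnostic(~, eventData)
            % eventData is a QualificationEventData; its TestDiagnostic
            % can be a non-scalar array, so check every element.
            diags = eventData.TestDiagnostic;
            for idx = 1:numel(diags)
                if isa(diags(idx), 'PlotDiagnostic')
                    diags(idx).producePlot(); % hypothetical plot method
                end
            end
        end
    end
end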
Hope that helps and/or is instructive!
  3 Comments
Nadjib on 18 Dec 2014
I couldn't work on this for some time, but I am back on it and I have a couple of issues. I wrote my PlotDiagnostic and it works fine for failed cases, or for all cases when used with
runner.addPlugin(matlab.unittest.plugins.DiagnosticsValidationPlugin);
I also created a plugin, but I am still unsure how I would control the display of the plots when running the test suite, given the code below:
import matlab.unittest.*;
mySuite = TestSuite.fromClass(?unitclass);
runner = TestRunner.withTextOutput;
runner.addPlugin(set_plots_Plugin);
runner.run(mySuite);
How would I add a logical variable, say DoPlot, that controls whether a plot is displayed based on its value? And how would DoPlot interact with the listener callback to display plots for all tests or disable plotting altogether?
Thank you.
Andy Campbell on 24 Dec 2014
Hi Nadjib,
I think you don't need the logical variable. Basically, you can write the plugin to always produce the plots, and then control whether the plots are produced by whether or not the plugin is installed. In other words, if you don't want to see the plots, you just run with the default set of plugins. However, if you do want to see the plots, you can install the plugin and the plots show up every time.
If you want to separate the decision to produce the plot from whether the test passes or fails, then you could implement your PlotDiagnostic to produce a diagnostic message that is empty or some other value, but does not produce a plot when the diagnose method is called. Then set_plots_Plugin could look at all diagnostics from verification calls and call some other method on the Diagnostic to actually produce the plots. That way, plots are not produced in failing cases (or passing cases using DiagnosticsValidationPlugin), but are produced whenever you have your special plugin installed, which knows how to call the special plot method on your PlotDiagnostic. You can get to the actual diagnostic object supplied by looking at the QualificationEventData that the listener receives (note that it can be a non-scalar array).
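A sketch of the deferred variant described above, again with producePlot as an illustrative name and a constructor matching the verifyTrue(true, PlotDiagnostic(actual, expected)) call shown earlier:
classdef PlotDiagnostic < matlab.unittest.diagnostics.Diagnostic
    properties (SetAccess = immutable)
        Actual
        Expected
    end
    methods
        function diag = PlotDiagnostic(actual, expected)
            diag.Actual = actual;
            diag.Expected = expected;
        end
        function diagnose(diag)
            % Deliberately cheap: no figure here, just an empty message,
            % so default failure diagnostics do not trigger plotting.
            % (DiagnosticResult is the R2014b-era property name; newer
            % releases use DiagnosticText.)
            diag.DiagnosticResult = '';
        end
        function producePlot(diag)
            % Called explicitly by set_plots_Plugin when it is installed.
            figure;
            plot([diag.Actual, diag.Expected], 'o');
            legend('Actual', 'Expected');
        end
    end
end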
All of this being said, the log method will be much easier once you have the right version of MATLAB.
Does that help?

