This function calculates the performance of a clinical test, based on Bayes' theorem.
X is the 2x2 confusion matrix of test outcomes cross-tabulated against true disease status.
ALPHA - significance level for confidence intervals (default = 0.05).
The function computes:
- Sensitivity and Specificity
- False positive and negative proportions
- False discovery and omission rates
- Youden's Index and Number Needed to Diagnose (NND)
- Positive and Negative predictive values
- Positive and Negative Likelihood Ratio
- Predictive Summary Index (PSI) and Number Needed to Screen (NNS)
- Test Accuracy
- Misclassification Rate
- Test bias
- Error odds ratio
- Diagnostic odds ratio
- Discriminant Power
Moreover, the function draws two plots (as you can see in the screenshot)
Noam, the message you get comes from a previous version of the roc.m routine, not from partest.m. When you have a test, ROC curve analysis lets you choose a cut-off point (the value above/below which the test is called positive) to maximize sensitivity, specificity, cost-effectiveness, efficiency, positive predictive value, and so on; the choice depends on your specific problem. I have since removed the call to partest from roc to simplify the code.
Anyway, partest asks whether you want to input the true prevalence of the disease. partest uses Bayes' theorem, and the true prevalence is needed to apply it correctly.
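The Bayes'-theorem step can be sketched as follows (a generic Python illustration of the standard formulas, not the partest.m source): given a true prevalence p, the predictive values are recomputed from sensitivity and specificity, which do not depend on prevalence.

```python
# Sketch of the Bayes'-theorem adjustment: recompute predictive values
# for a user-supplied true prevalence p, given the test's sensitivity
# (se) and specificity (sp).
def predictive_values(se, sp, p):
    # PPV = P(disease | test+), NPV = P(no disease | test-)
    ppv = se * p / (se * p + (1 - sp) * (1 - p))
    npv = sp * (1 - p) / (sp * (1 - p) + (1 - se) * p)
    return ppv, npv

# Even a very sensitive and specific test has a low PPV when the
# disease is rare: se = sp = 0.99 with a true prevalence of 1/1000
# gives a PPV of roughly 0.09.
ppv, npv = predictive_values(0.99, 0.99, 0.001)
```

This is why using the sample prevalence instead of the true population prevalence can badly distort the predictive values.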
This is not very clear... what exactly do you mean? If you open the .m file you always have a text version. To see the plots (did you mean "plot" when you wrote "diagram"?) you must run partest in MATLAB.
Isn't 1 - specificity equal to the false positive rate? I think this is backwards on the partest output and roseplot figures (red should be false positives, and yellow should be false negatives). In the code these values (fp1 and fp2) seem to be used correctly throughout and are just mislabelled on the output graphs. Would you mind confirming this? As always, thanks for the excellent code. The comments in this one are really great and I learned a ton going through it.
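The commenter's premise is arithmetically right: with the conventional definitions, the false positive rate FP/(FP + TN) is exactly 1 - specificity. A quick numeric check (generic Python, not the partest.m code; the counts are arbitrary):

```python
# Check that false positive rate = 1 - specificity for a 2x2 table.
fp, tn = 3, 20                      # arbitrary FP / TN counts
specificity = tn / (tn + fp)        # TN / (TN + FP)
fpr = fp / (fp + tn)                # false positive rate
assert abs(fpr - (1 - specificity)) < 1e-12
```

Whether the figure labels are actually swapped is a separate question that only inspecting the plotting code can settle.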