Description |
Consider a binary classification task and a real-valued predictor, where higher values denote more confidence that an instance is positive. By setting a fixed threshold on the output, we can trade off recall (= true positive rate) against the false positive rate (for an ROC curve) or against precision (for a P/R curve).
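As a minimal sketch of the thresholding described above, the following computes one (FPR, recall, precision) operating point per candidate threshold; the function and variable names are illustrative, not from the text:

```python
def operating_points(scores, labels):
    """For each candidate threshold t, classify score >= t as positive
    and report the resulting (fpr, recall, precision) operating point."""
    pos = sum(labels)            # number of actual positives
    neg = len(labels) - pos      # number of actual negatives
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        recall = tp / pos              # true positive rate
        fpr = fp / neg                 # false positive rate
        precision = tp / (tp + fp)     # correct fraction of predicted positives
        points.append((fpr, recall, precision))
    return points

# Sweeping the threshold from high to low traces the curve from the
# conservative corner (few predicted positives) to the liberal one (all positive).
pts = operating_points([0.9, 0.7, 0.4, 0.2], [1, 0, 1, 0])
```

Plotting recall against FPR over these points gives the ROC curve; plotting precision against recall gives the P/R curve.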
Depending on the relative class frequencies, ROC and P/R curves can highlight different properties; for details, see e.g., Davis & Goadrich, 'The Relationship Between Precision-Recall and ROC Curves', ICML 2006. |