Precision-Recall and ROC Curves

version 1.2 (4.14 KB) by Stefan Schroedl

Calculate and plot P/R and ROC curves for binary classification tasks.


Consider a binary classification task and a real-valued predictor, where higher values denote more confidence that an instance is positive. By setting a fixed threshold on the output, we can trade off recall (= true positive rate) against the false positive rate (or, respectively, against precision).

Depending on the relative class frequencies, ROC and P/R curves can highlight different properties; for details, see e.g., Davis & Goadrich, 'The Relationship Between Precision-Recall and ROC Curves', ICML 2006.
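The thresholding procedure described above can be sketched directly in MATLAB. The scores and labels below are made-up illustrative data, and the loop is a conceptual sketch, not the submission's actual implementation:

```matlab
% Sweep every candidate threshold and record precision, true-positive
% rate (recall), and false-positive rate at each operating point.
scores = [0.9 0.8 0.7 0.6 0.55 0.5 0.4 0.3 0.2 0.1];  % predictor outputs
labels = [1   1   0   1   1    0   0   1   0   0  ];  % 1 = positive class
thresh = sort(unique(scores), 'descend');
nPos = sum(labels == 1);
nNeg = sum(labels == 0);
prec = zeros(size(thresh)); tpr = prec; fpr = prec;
for i = 1:numel(thresh)
    predPos = scores >= thresh(i);         % classify by this threshold
    tp = sum(predPos & labels == 1);
    fp = sum(predPos & labels == 0);
    prec(i) = tp / max(tp + fp, 1);
    tpr(i)  = tp / nPos;                   % recall
    fpr(i)  = fp / nNeg;
end
plot(fpr, tpr); xlabel('FPR'); ylabel('TPR');                    % ROC curve
figure; plot(tpr, prec); xlabel('Recall'); ylabel('Precision');  % P/R curve
```

Lowering the threshold admits more instances as positive, so the curve traces out the recall/FPR (or precision/recall) trade-off point by point.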

Comments and Ratings (28)





Shijia

Hi Stefan,
  Thanks a lot for sharing the code. Can I ask one simple question?
  As the ICML paper mentions, in a PR curve the recall values do not necessarily change linearly with precision. Hence, the ROC curve is constructed first, and the PR curve is then inferred from it.

  Could you please confirm whether the provided code does something similar?

  Thanks a lot

kiran paul

Can you help me? I am using FCM-based CBIR and am stuck computing the precision and recall values using a threshold. Could you please help me with this?


Sbj

How do I use this code to produce a PR curve or ROC curve comparing several segmentation algorithms to the ground truth?

Yawar Rehman

Hello Stefan,
Can you help me out? I am using an SVM quadratic classifier; it returns class labels for test samples (i.e., 1 (pos) or -1 (neg)). How can I obtain score values for the test samples to plot PR and ROC curves? Thanks a lot for the upload!



Ahmed

This code is intended only for binary classification tasks. What about multiclass classification?


Salha

Very helpful. Thanks!


Chris

Hi, I am interested in computing the F1-score for a precision-recall curve. The equation for this is (2*precision*recall)/(precision+recall).

The outputs "prec" (precision) and "tpr" (recall), however, are vectors, so applying this formula produces a vector as well.

Shouldn't the F1-score be a scalar ranging from 0 to 1? Thanks for your help.

Warm Regards
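For reference: the F1 score is defined per operating point, so applying the formula elementwise over the returned vectors yields one F1 value per threshold; the maximum over all thresholds is a common scalar summary. A minimal sketch, with made-up prec/tpr values standing in for the function's outputs:

```matlab
% One F1 value per threshold (elementwise), then the best over thresholds.
% prec and tpr are assumed to be equal-length vectors as returned by the
% function; the values here are illustrative only.
prec = [1.0 0.5 0.67 0.75 0.8];
tpr  = [0.2 0.2 0.4  0.6  0.8];
f1 = 2 * (prec .* tpr) ./ (prec + tpr + eps);  % eps guards against 0/0
[bestF1, bestIdx] = max(f1);                   % scalar summary + its threshold index
```

With these sample values, bestF1 is 0.8 at the last operating point.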


krishna singh

I have a 32x32 confusion matrix for classifying 32 classes; the diagonal elements show correct classifications, while the off-diagonal elements show misclassifications. How can I use this file to plot precision and recall?

Segun Oshin


I am trying to obtain the area under the Precision-Recall curve. In a previous answer, you stated that your separately submitted aucroc.m would be able to estimate this, but it appears to only measure the area under ROC curves. Since Precision-Recall curves are different, how can I determine the area under them from an AUROC? Or are you aware of any other methods of measuring the area under P-R curves?

Kind regards
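One approach is to approximate the area under the P/R curve with trapz over the returned vectors, after sorting by recall. Note this is only an approximation: as Davis & Goadrich discuss, interpolation in PR space is nonlinear, so the trapezoidal rule can slightly overestimate the true area. The prec/tpr values below are made-up stand-ins for the function's outputs:

```matlab
% Trapezoidal approximation of the area under the precision-recall curve.
% prec and tpr (recall) are assumed to come from the curve function;
% sorting by recall ensures trapz integrates left to right.
prec = [1.0 0.5 0.67 0.75 0.8];
tpr  = [0.2 0.2 0.4  0.6  0.8];
[recallSorted, order] = sort(tpr);
aucpr = trapz(recallSorted, prec(order));   % approximate AUC-PR
```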


It seems that your function requires the Statistics Toolbox: it calls "quantile", which is found only there. It would be nice to use an alternative or a free equivalent. Thanks for the function!
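A toolbox-free substitute can be sketched with sort and interp1; the 0.5 offset mirrors MATLAB's default quantile definition (linear interpolation between sample points at (k-0.5)/n). simple_quantile is a hypothetical helper name, not part of the submission:

```matlab
% A stand-in for quantile() that needs no Statistics Toolbox.
% Linearly interpolates between order statistics, clamping at the ends.
function q = simple_quantile(x, p)
    x = sort(x(:));
    n = numel(x);
    pos = p(:) .* n + 0.5;          % fractional position of each quantile
    pos = min(max(pos, 1), n);      % clamp to the valid index range
    q = interp1((1:n)', x, pos);    % linear interpolation
end
```

For example, simple_quantile(1:10, 0.5) returns 5.5, matching the toolbox median.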

Stefan Schroedl

Hi Zeehasham,

precision-recall curves are useful for classifiers that output a score (e.g., the higher, the more likely to be in the positive class) - if the classifier only gives you a class label, you won't get a graph, only a single precision/recall point.

Given such a classifier, for any threshold ("thresh"), you can assign examples as positive if the score exceeds it, otherwise as negative. So, for each threshold, the return values of the procedure give the corresponding precision ("prec"), true-positive rate ("tpr"), and false-positive ("fpr") rate, in that order.

Hope that helps, good luck!

Hey Stefan,
I am using a binary classifier and want to ask you a few questions.

Can you tell me what is inside the "prec" variable, as it displays two rows? Is the first row precision and the second row recall?

What is the "thresh" variable?

Also, can you briefly explain the precision/recall graph?

Your code is really helpful. Good work!

Stefan Schroedl

Hi Ashwin,
the link is the same as the old one.
You can use any classifier to produce the scores; the script is independent of that.

Ashwin Kumar

Hi Stefan,
Can you provide the link for the new version that you have uploaded?

Can I use a Bayesian classifier with this code?

Please reply so that I can complete my project.

Thanks in advance

Stefan Schroedl

I just uploaded a new version with better option descriptions, hope that makes it more usable.
- 'count' (now called 'instanceCount') can be used if there are multiple instances with the same score. This would denote the number of instances, and 'target' the number of positive class members among those.
- area under the curve can be computed more efficiently with my 'auroc' submission.


Skynet

It seems that perfcurve does this now.
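Indeed, perfcurve from the Statistics Toolbox computes both curve types. A minimal sketch with made-up scores and labels (the 'XCrit'/'YCrit' criterion names come from the perfcurve documentation):

```matlab
% ROC and precision-recall curves via the Statistics Toolbox.
labels = [1 1 0 1 1 0 0 1 0 0];                        % ground truth
scores = [0.9 0.8 0.7 0.6 0.55 0.5 0.4 0.3 0.2 0.1];   % classifier scores
[fpr, tpr, thr, aucroc] = perfcurve(labels, scores, 1);            % ROC + AUC
[rec, prec] = perfcurve(labels, scores, 1, ...
                        'XCrit', 'reca', 'YCrit', 'prec');         % P/R curve
plot(fpr, tpr); xlabel('FPR'); ylabel('TPR');
figure; plot(rec, prec); xlabel('Recall'); ylabel('Precision');
```

perfcurve was introduced after this submission (R2009a), which may explain why the original file predates it.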

Farooq Azam

Very useful code, thanks.
But I have two questions/comments:
   - What is the role of 'count'? An example would help.
   - How would one calculate the area under the curve?


Stefan Schroedl

The prediction itself is not part of this function; it only evaluates the output taken from an external predictor.

Xian Chen

Good, thanks a lot.
One question: which algorithm have you used for the predictions? Bayes? SVM?

David Chiang

Thanks a lot!



Updated function arguments, added options


Update for better user interface, added options

MATLAB Release
MATLAB 7.4 (R2007a)

Inspired: Lynx MATLAB Toolbox
