Description
Cohen's kappa coefficient is a statistical measure of inter-rater reliability. It is generally considered more robust than a simple percent agreement calculation, since kappa takes into account the agreement that would occur by chance.
Kappa measures the degree to which two judges, A and B, concur in their respective sortings of N items into k mutually exclusive categories. A 'judge' in this context can be an individual human being, a set of individuals who sort the N items collectively, or some non-human agency, such as a computer program or a diagnostic test, that performs a sorting on the basis of specified criteria. The original and simplest version is the unweighted kappa coefficient introduced by J. Cohen in 1960. When the categories are merely nominal, Cohen's unweighted coefficient is the only form of kappa that can meaningfully be used. If the categories are ordinal, so that category 2 represents more of something than category 1, category 3 more of that same something than category 2, and so on, then it is potentially meaningful to take this into account by weighting each cell of the matrix according to how near it is to the cell in its row that contains the absolutely concordant items. This function can compute kappa with either linear or quadratic weights.
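For reference, the unweighted coefficient is computed as kappa = (Po - Pe) / (1 - Pe), where Po is the observed agreement proportion and Pe is the agreement proportion expected by chance from the marginal totals. For ordinal categories, the standard weighting schemes (stated here for illustration; check the function help for the exact scheme it implements) are w(i,j) = 1 - |i-j|/(k-1) for linear weights and w(i,j) = 1 - ((i-j)/(k-1))^2 for quadratic weights.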
The function returns:
- Observed agreement percentage
- Random (chance) agreement percentage
- Agreement percentage due to true concordance
- Residual non-chance agreement percentage
- Cohen's kappa
- Kappa standard error
- Kappa confidence interval
- Maximum possible kappa, given the observed marginal frequencies
- Observed kappa as a proportion of the maximum possible
- Kappa benchmark according to Landis and Koch
- z-test results
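As a minimal sketch of the unweighted computation (not this function itself; the example matrix and the approximate standard-error formula are illustrative assumptions):

X = [88 14 18; 10 40 10; 2 6 12];        % hypothetical k-by-k confusion matrix:
                                          % X(i,j) = items sorted into category i by A, j by B
n  = sum(X(:));                           % total number of items
po = sum(diag(X)) / n;                    % observed agreement proportion
pe = (sum(X,2)' * sum(X,1)') / n^2;       % chance agreement proportion from the marginals
k  = (po - pe) / (1 - pe);                % Cohen's unweighted kappa
se = sqrt(po*(1-po) / (n*(1-pe)^2));      % approximate standard error (Cohen, 1960)
ci = k + [-1 1]*1.96*se;                  % approximate 95% confidence interval
fprintf('kappa = %0.4f (95%% CI %0.4f to %0.4f)\n', k, ci(1), ci(2))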
You can visit my homepage: http://home.tele2.it/cardillo
My profile on XING: http://www.xing.com/go/invita/13675097
My profile on LinkedIn: http://it.linkedin.com/in/giuseppecardillo