Reliability Assessment of a Measure: The Kappa Statistic

Authors

  • Francesco Franco, Regione Lazio, Roma
  • Anteo Di Napoli, Comitato Tecnico-Scientifico RIDT, Roma

DOI:

https://doi.org/10.33393/gcnd.2016.738

Keywords:

Cohen's Kappa, Inter-rater agreement, Intra-rater agreement, Reliability, Weighted Kappa

Abstract

The Kappa coefficient is a measure of inter-rater agreement. The statistic quantifies the agreement observed beyond that expected by chance and can range from -1 to 1: a value of 0 indicates statistical independence between the raters, and a value of 1 indicates perfect agreement. The value of Kappa is influenced by the prevalence of the condition being evaluated: two observers can show high observed agreement yet a low Kappa when the prevalence is very high or very low (the paradox of the Kappa statistic).
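
As a minimal sketch of the idea summarized in the abstract, the following Python snippet (not part of the article) computes Cohen's Kappa as (Po - Pe)/(1 - Pe) from a 2x2 contingency table of counts and illustrates the prevalence paradox; the function name and the example tables are illustrative assumptions.

```python
# Cohen's Kappa for two raters from a square contingency table of counts
# (rows = rater A, columns = rater B). Example values are illustrative only.

def cohens_kappa(table):
    """Return Cohen's Kappa for a square contingency table of counts."""
    n = sum(sum(row) for row in table)                      # total number of rated items
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n            # observed agreement
    # Expected agreement by chance, from the raters' marginal proportions
    p_e = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(k)
    )
    return (p_o - p_e) / (1 - p_e)

# Same observed agreement (0.90), very different Kappa once prevalence is extreme:
balanced = [[45, 5], [5, 45]]   # prevalence ~50%: Kappa = 0.80
skewed   = [[90, 5], [5, 0]]    # prevalence ~95%: Kappa is about -0.05
print(cohens_kappa(balanced))
print(cohens_kappa(skewed))
```

In the skewed table both raters classify almost every case as positive, so chance agreement is already about 0.90; the small residual margin drives Kappa near zero even though the raters agree on 90% of cases, which is the paradox described above.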

Published

2016-11-24

How to Cite

Franco, F., & Di Napoli, A. (2016). Reliability Assessment of a Measure: The Kappa Statistic. Giornale Di Clinica Nefrologica E Dialisi, 28(4), 289–292. https://doi.org/10.33393/gcnd.2016.738

Section

Epidemiology and statistics
