- Author: Chang Un PARK; Hyun Jung KIM
- Publication Type: Review
- Keywords: Agreement; Inter-Rater; Kappa; Rater; Reliability
- From: Hanyang Medical Reviews 2015;35(1):44-49
- Country: Republic of Korea
- Language: Korean
- Abstract: Inter-rater reliability refers to the degree of agreement when a measurement is repeated under identical conditions by different raters. In systematic reviews, it can be used to evaluate agreement between authors during data extraction. Although a variety of methods exist for measuring inter-rater reliability, percent agreement and Cohen's kappa are commonly used for categorical data. Percent agreement is the proportion of agreement actually observed. Although simple to calculate, it does not account for agreement that occurs by chance. Cohen's kappa is more robust than percent agreement because it adjusts the observed agreement for the agreement expected by chance. However, the interpretation of kappa can be misleading because the statistic is sensitive to the distribution of the data. It is therefore desirable to report both percent agreement and kappa in a review. If kappa is low despite high observed agreement, alternative statistics can be considered.
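As an illustration of the two statistics described in the abstract, the sketch below computes percent agreement and Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), for two raters classifying items into categories. The ratings and the study-selection scenario are hypothetical, and the implementation is a minimal self-contained version rather than the authors' own procedure.

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of items on which the two raters assign the same category."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement adjusted for chance agreement.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each rater's
    marginal category frequencies.
    """
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two review authors screening 10 studies
a = ["include", "include", "exclude", "include", "exclude",
     "include", "exclude", "exclude", "include", "include"]
b = ["include", "exclude", "exclude", "include", "exclude",
     "include", "include", "exclude", "include", "include"]

print(percent_agreement(a, b))  # 0.8
print(cohens_kappa(a, b))       # ~0.58
```

In this example the raters agree on 8 of 10 studies (percent agreement 0.80), but after subtracting the agreement expected by chance (p_e = 0.52 from the marginal frequencies), kappa drops to about 0.58, which illustrates why the abstract recommends reporting both values.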