Original Article

Guideline for Selecting Types of Reliability and Suitable Intra-class Correlation Coefficients in Clinical Research

Abstract

Introduction: Reliability is an integral part of assessing the reproducibility of research measurements. The intra-class correlation coefficient (ICC) is one of the key indices for reporting reliability, but its many forms can be a source of confusion. The main purpose of this study was to introduce the types of reliability and the ICC indices appropriate to each.
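Conceptually, every ICC form estimates the proportion of total measurement variance that is due to true differences between the measured subjects. A generic single-measurement expression (a textbook simplification added here for orientation, not a formula quoted from the article) is

$$\mathrm{ICC} = \frac{\sigma^{2}_{\text{subjects}}}{\sigma^{2}_{\text{subjects}} + \sigma^{2}_{\text{error}}}$$

where $\sigma^{2}_{\text{subjects}}$ is the between-subject variance and $\sigma^{2}_{\text{error}}$ is the measurement-error variance; the individual ICC forms differ in how raters, occasions, and error enter this ratio.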

Methods: In this tutorial article, useful information about the types of reliability and the indices needed to report their results, as well as the types of ICC and their applications, was explained in plain, accessible language.

Results: Three general types of reliability, namely inter-rater reliability, test-retest reliability, and intra-rater reliability, were presented. Ten different forms of the ICC were also introduced and explained.
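To make the distinction among ICC forms concrete, the short sketch below (an illustration added here, not code from the article) computes three classical single-rater forms defined by Shrout and Fleiss (1979) from the two-way ANOVA mean squares of a small hypothetical ratings matrix; the remaining forms follow the same logic with averaged measurements or different model assumptions.

```python
# Illustrative sketch (not from the article): single-rater ICC forms of
# Shrout & Fleiss (1979) computed from two-way ANOVA mean squares, NumPy only.
import numpy as np

# Hypothetical ratings: rows = subjects, columns = raters
ratings = np.array([
    [9, 2, 5, 8],
    [6, 1, 3, 2],
    [8, 4, 6, 8],
    [7, 1, 2, 6],
    [10, 5, 6, 9],
    [6, 2, 4, 7],
], dtype=float)

n, k = ratings.shape
grand_mean = ratings.mean()
subject_means = ratings.mean(axis=1)
rater_means = ratings.mean(axis=0)

# Sums of squares for the two-way (subjects x raters) layout
ss_total = ((ratings - grand_mean) ** 2).sum()
ss_subjects = k * ((subject_means - grand_mean) ** 2).sum()
ss_raters = n * ((rater_means - grand_mean) ** 2).sum()
ss_error = ss_total - ss_subjects - ss_raters

# Mean squares
bms = ss_subjects / (n - 1)                     # between-subjects MS
jms = ss_raters / (k - 1)                       # between-raters MS
ems = ss_error / ((n - 1) * (k - 1))            # residual MS
wms = (ss_total - ss_subjects) / (n * (k - 1))  # within-subjects MS

# Single-rater ICC forms in Shrout & Fleiss notation
icc_1_1 = (bms - wms) / (bms + (k - 1) * wms)                        # one-way random effects
icc_2_1 = (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)  # two-way random, absolute agreement
icc_3_1 = (bms - ems) / (bms + (k - 1) * ems)                        # two-way mixed, consistency

print(f"ICC(1,1)={icc_1_1:.2f}  ICC(2,1)={icc_2_1:.2f}  ICC(3,1)={icc_3_1:.2f}")
```

On the same data these forms generally yield different values, which is precisely why the measurement design (one-way versus two-way model, absolute agreement versus consistency, single versus average measurement) must be fixed before an ICC is chosen and reported.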

Conclusion: The research results may be misleading if the type of reliability or the corresponding calculation criterion is chosen incorrectly. Therefore, to make study results more accurate and valuable, medical researchers should consult relevant guidelines, such as this study, before conducting reliability analysis.
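As a small illustration of the kind of guidance such guidelines provide, the helper below (an added sketch, not part of the article) labels an ICC point estimate using the widely cited benchmarks of Koo and Li (2016); note that Koo and Li recommend basing interpretation on the 95% confidence interval of the ICC rather than on the point estimate alone.

```python
# Added illustration (not from the article): qualitative labels for an ICC
# point estimate, following the benchmarks proposed by Koo and Li (2016).
def interpret_icc(icc: float) -> str:
    """Map an ICC estimate to the Koo and Li (2016) qualitative categories."""
    if icc < 0.50:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc < 0.90:
        return "good"
    return "excellent"

print(interpret_icc(0.82))  # -> "good"
```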

DOI: https://doi.org/10.18502/jbe.v7i3.7301
Keywords: Reliability; Inter-rater reliability; Test-retest reliability; Intra-rater reliability; Intra-class correlation coefficient

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
How to Cite
Taherzadeh Chenani K, Madadizadeh F. Guideline for Selecting Types of Reliability and Suitable Intra-class Correlation Coefficients in Clinical Research. JBE. 2021;7(3):305-309.