
Determining agreement using rater characteristics

Pages 619-630
Received 15 Oct 2014
Accepted 07 Apr 2015
Accepted author version posted online: 22 Jun 2015
Published online: 08 Jan 2016
 

ABSTRACT

When evaluating the usefulness of clinical information for the diagnosis of disease, multiple raters often provide a diagnosis for the same set of data. These ratings offer important insights into diagnostic performance: the accuracy of each rater’s diagnosis relative to a truth standard and the level of agreement among the raters. We demonstrate that the intraclass correlation coefficient (ICC) depends on the sensitivities and specificities of the raters involved in the study. Given the sensitivity and specificity of any number of raters, along with the prevalence of disease, the expected ICC can be determined.
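The paper's closed-form result is not reproduced on this page, but the relationship the abstract describes can be illustrated numerically. The sketch below (not the authors' method; the function name and simulation approach are assumptions for illustration) simulates binary diagnoses from raters with given sensitivities and specificities under a given disease prevalence, then estimates the one-way ICC via the usual ANOVA mean squares:

```python
import numpy as np

def simulate_icc(sens, spec, prevalence, n_subjects=10_000, seed=0):
    """Monte Carlo estimate of the one-way ICC for binary ratings.

    Hypothetical illustration: each rater i diagnoses every subject with
    sensitivity sens[i] and specificity spec[i]; true disease status is
    Bernoulli(prevalence).
    """
    rng = np.random.default_rng(seed)
    sens = np.asarray(sens, dtype=float)
    spec = np.asarray(spec, dtype=float)

    disease = rng.random(n_subjects) < prevalence  # true status per subject
    # One uniform draw per (subject, rater); a positive diagnosis occurs
    # with probability sens if diseased, and 1 - spec if disease-free.
    u = rng.random((n_subjects, len(sens)))
    ratings = np.where(disease[:, None], u < sens, u < 1 - spec).astype(float)

    # One-way ANOVA mean squares with subjects as random effects.
    k = ratings.shape[1]
    subj_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    msb = k * np.sum((subj_means - grand_mean) ** 2) / (n_subjects - 1)
    msw = np.sum((ratings - subj_means[:, None]) ** 2) / (n_subjects * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)  # ICC(1)
```

For example, two perfect raters (sensitivity = specificity = 1) yield an ICC of 1, while raters no better than chance (sensitivity = specificity = 0.5) yield an ICC near 0, consistent with the abstract's claim that the expected ICC is driven by rater sensitivity, specificity, and prevalence.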

