Whiting PF, Rutjes AWS, Westwood ME, Mallett S. A systematic review classifies sources of bias and variation in diagnostic test accuracy studies. J Clin Epidemiol 2013;66:1093-104. [PMID: 23958378] [DOI: 10.1016/j.jclinepi.2013.05.014]
[Received: 07/18/2012] [Revised: 05/08/2013] [Accepted: 05/15/2013]
Abstract
OBJECTIVE
To classify the sources of bias and variation in diagnostic test accuracy studies and to provide an updated summary of the evidence on the effects of each source.
STUDY DESIGN AND SETTING
We conducted a systematic review of studies of any design whose main objective was to address bias or variation in the results of diagnostic accuracy studies. We searched MEDLINE, EMBASE, BIOSIS, the Cochrane Methodology Register, and the Database of Abstracts of Reviews of Effects (DARE) from 2001 to October 2011. Citation searches based on three key papers were conducted, and studies from our previous review (search to 2001) were also eligible. One reviewer extracted data on the study design, objective, sources of bias and/or variation, and results; a second reviewer checked the extraction.
RESULTS
We summarized the number of studies providing evidence of an effect arising from each source of bias and variation on the estimates of sensitivity, specificity, and overall accuracy.
CONCLUSIONS
We found consistent evidence for the effects of case-control design, observer variability, availability of clinical information, reference standard, partial and differential verification bias, demographic features, and disease prevalence and severity. Effects were generally stronger for sensitivity than for specificity. Evidence for other sources of bias and variation was limited.