Margo CE. A pilot study in ophthalmology of inter-rater reliability in classifying diagnostic errors: an underinvestigated area of medical error. Qual Saf Health Care 2004;12:416-20. [PMID: 14645756; PMCID: PMC1758028; DOI: 10.1136/qhc.12.6.416]
Abstract
BACKGROUND
Misdiagnosis is the least studied form of medical error. Before effective strategies to reduce misdiagnosis can be developed, there needs to be a better understanding of the factors that lead to these errors.
AIM
To evaluate the applicability and reliability of three classification systems for misdiagnosis.
DESIGN
Retrospective independent analysis of five cases by clinical experts.
PARTICIPANTS
Three ophthalmologists trained in ocular oncology who devote at least 75% of their practice to ocular oncology.
MAIN OUTCOME MEASURES
Percentage agreement in determining cause of misdiagnosis.
RESULTS
Participants agreed that a misdiagnosis had occurred in all cases, and the error was graded as serious in 14 of 15 assessments (93%). Inter-rater agreement on the root cause varied among the three classification systems from 47% to 0%.
CONCLUSIONS
Although there was excellent agreement among clinical experts about what constitutes a serious misdiagnosis under idealized conditions, no reliable method exists for categorizing the primary or root cause of these errors. The origins of misdiagnosis are complex, often multifactorial, and more difficult to categorize than other types of medical error. Misdiagnosis is a professional and public healthcare challenge that will require novel strategies before it can be successfully studied.