Wang HE, Landers M, Adams R, Subbaswamy A, Kharrazi H, Gaskin DJ, Saria S. OUP accepted manuscript. J Am Med Inform Assoc 2022;29:1323-1333. [PMID: 35579328; PMCID: PMC9277650; DOI: 10.1093/jamia/ocac065]
[Received: 02/03/2022] [Revised: 03/23/2022] [Accepted: 04/26/2022]
Abstract
Objective
Health care providers increasingly rely upon predictive algorithms when making
important treatment decisions; however, evidence indicates that these tools can lead to
inequitable outcomes across racial and socioeconomic groups. In this study, we
introduce a bias evaluation checklist that gives model developers and health care
providers a means to systematically appraise a model’s potential to introduce bias.
Materials and Methods
Our methods included developing a bias evaluation checklist, conducting a scoping
literature review to identify 30-day hospital readmission prediction models, and
assessing the selected models against the checklist.
Results
We selected 4 models for evaluation: LACE, HOSPITAL, Johns Hopkins ACG, and HATRIX. Our
assessment identified critical ways in which these algorithms can perpetuate health care
inequalities. We found that LACE and HOSPITAL have the greatest potential for
introducing bias, Johns Hopkins ACG has the most areas of uncertainty, and HATRIX has
the fewest causes for concern.
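For context on the kind of model being evaluated: the LACE index named above is, in its originally published form (van Walraven et al., 2010), a simple additive score over four inputs. The sketch below follows that published formulation, not any variant assessed in this study; the function name and thresholds are taken from the original LACE definition.

```python
def lace_score(los_days: int, acute_admission: bool,
               charlson: int, ed_visits: int) -> int:
    """LACE 30-day readmission-risk index (van Walraven et al., 2010).

    L: length of stay, A: acuity of admission,
    C: Charlson comorbidity index, E: ED visits in the prior 6 months.
    """
    # L: length-of-stay points (<1, 1, 2, 3, 4-6, 7-13, >=14 days)
    if los_days < 1:
        l_pts = 0
    elif los_days <= 3:
        l_pts = los_days
    elif los_days <= 6:
        l_pts = 4
    elif los_days <= 13:
        l_pts = 5
    else:
        l_pts = 7
    # A: 3 points for an urgent/emergent (acute) admission
    a_pts = 3 if acute_admission else 0
    # C: Charlson comorbidity index, capped at 5 points for scores >= 4
    c_pts = charlson if charlson <= 3 else 5
    # E: emergency-department visits in the previous 6 months, capped at 4
    e_pts = min(ed_visits, 4)
    # Total ranges 0-19; scores >= 10 are conventionally treated as high risk
    return l_pts + a_pts + c_pts + e_pts
```

Note that two of the four inputs (acute admission and prior ED visits) are utilization measures rather than direct clinical measures, which illustrates the feature-level questions a bias checklist is designed to surface.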
Discussion
Our approach gives model developers and health care providers a practical and
systematic method for evaluating bias in predictive models. Traditional bias
identification methods do not elucidate sources of bias and are thus insufficient for
mitigation efforts. With our checklist, bias can be addressed and eliminated before a
model is fully developed or deployed.
Conclusion
The potential for algorithms to perpetuate biased outcomes is not isolated to
readmission prediction models; rather, we believe our results have implications for
predictive models across health care. We offer a systematic method for evaluating
potential bias with sufficient flexibility to be utilized across models and
applications.