1
Trinkley KE, An R, Maw AM, Glasgow RE, Brownson RC. Leveraging artificial intelligence to advance implementation science: potential opportunities and cautions. Implement Sci 2024; 19:17. [PMID: 38383393] [PMCID: PMC10880216] [DOI: 10.1186/s13012-024-01346-y] [Received: 10/30/2023] [Accepted: 01/25/2024] Open Access
Abstract
BACKGROUND The field of implementation science was developed to address the significant time delay between establishing an evidence-based practice and its widespread use. Although implementation science has contributed much toward bridging this gap, the evidence-to-practice chasm remains a challenge. Advances are needed in several key aspects of implementation science, including speed and the assessment of causality and mechanisms. The increasing availability of artificial intelligence applications offers opportunities to address specific issues faced by the field of implementation science and to expand its methods. MAIN TEXT This paper discusses the many ways artificial intelligence can address key challenges in applying implementation science methods while also considering potential pitfalls of its use. We answer the questions of "why" the field of implementation science should consider artificial intelligence, "what for" (the purpose and methods), and "so what" (the consequences and challenges). We describe specific ways artificial intelligence can address implementation science challenges related to (1) speed, (2) sustainability, (3) equity, (4) generalizability, (5) assessing context and context-outcome relationships, and (6) assessing causality and mechanisms. Examples from global health systems, public health, and precision health illustrate both the potential advantages and the hazards of integrating artificial intelligence applications into implementation science methods. We conclude with recommendations and resources to help implementation researchers and practitioners leverage artificial intelligence in their work responsibly. CONCLUSIONS Artificial intelligence holds promise to advance implementation science methods ("why") and to accelerate its goal of closing the evidence-to-practice gap ("purpose"). However, artificial intelligence's potential unintended consequences must be evaluated and proactively monitored. Given the technical nature of artificial intelligence applications, as well as their potential impact on the field, transdisciplinary collaboration is needed, and a subset of implementation scientists may need to be cross-trained in both fields to ensure artificial intelligence is used optimally and ethically.
Affiliation(s)
- Katy E Trinkley
- Department of Family Medicine, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Adult and Child Center for Outcomes Research and Delivery Science Center, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Department of Biomedical Informatics, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Colorado Center for Personalized Medicine, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Ruopeng An
- Brown School and Division of Computational and Data Sciences at Washington University in St. Louis, St. Louis, MO, USA
- Anna M Maw
- Adult and Child Center for Outcomes Research and Delivery Science Center, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- School of Medicine, Division of Hospital Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Russell E Glasgow
- Department of Family Medicine, School of Medicine, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Adult and Child Center for Outcomes Research and Delivery Science Center, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Ross C Brownson
- Prevention Research Center, Brown School at Washington University in St. Louis, St. Louis, MO, USA
- Department of Surgery, Division of Public Health Sciences, and Alvin J. Siteman Cancer Center, Washington University School of Medicine, Washington University in St. Louis, St. Louis, MO, USA
2
Macrae C. Managing risk and resilience in autonomous and intelligent systems: Exploring safety in the development, deployment, and use of artificial intelligence in healthcare. Risk Anal 2024. [PMID: 38246857] [DOI: 10.1111/risa.14273] [Received: 12/12/2022] [Revised: 01/07/2024] [Accepted: 01/08/2024]
Abstract
Autonomous and intelligent systems (AIS) are being developed and deployed across a wide range of sectors and encompass a variety of technologies designed to engage in different forms of independent reasoning and self-directed behavior. These technologies may bring considerable benefits to society but also pose a range of risk management challenges, particularly when deployed in safety-critical sectors where complex interactions between human, social, and technical processes underpin safety and resilience. Healthcare is one safety-critical sector at the forefront of efforts to develop and deploy intelligent technologies, such as artificial intelligence (AI) systems intended to automate key healthcare tasks such as reading medical images to identify signs of pathology. This article develops a qualitative analysis of the sociotechnical sources of risk and resilience associated with the development, deployment, and use of AI in healthcare, drawing on 40 in-depth interviews with participants involved in the development, management, and regulation of AI. Qualitative template analysis is used to examine sociotechnical sources of risk and resilience, drawing on and elaborating Macrae's (2022, Risk Analysis, 42(9), 1999-2025) SOTEC framework, which integrates structural, organizational, technological, epistemic, and cultural sources of risk in AIS. This analysis explores an array of sociotechnical sources of risk associated with the development, deployment, and use of AI in healthcare and identifies a corresponding set of sociotechnical patterns of resilience that may counter those risks. In doing so, the SOTEC framework is elaborated and translated to define key sources of both risk and resilience in AIS.
Affiliation(s)
- Carl Macrae
- Nottingham University Business School, University of Nottingham, Nottingham, UK
- School of Health and Welfare, Halmstad University, Halmstad, Sweden