1
Michalowski M, Wilk S, Michalowski W, Rao M, Carrier M. Provision and evaluation of explanations within an automated planning-based approach to solving the multimorbidity problem. J Biomed Inform 2024;156:104681. PMID: 38960273. DOI: 10.1016/j.jbi.2024.104681.
Abstract
The multimorbidity problem involves the identification and mitigation of adverse interactions that occur when multiple computer interpretable guidelines are applied concurrently to develop a treatment plan for a patient diagnosed with multiple diseases. Solving this problem requires decision support approaches that are difficult for physicians to comprehend. As such, the rationale for treatment plans generated by these approaches needs to be provided. OBJECTIVE To develop an explainability component for an automated planning-based approach to the multimorbidity problem, and to assess the fidelity and interpretability of generated explanations using a clinical case study. METHODS The explainability component leverages the task-network model for representing computer interpretable guidelines. It generates post-hoc explanations composed of three aspects that answer why specific clinical actions are in a treatment plan, why specific revisions were applied, and how factors such as medication cost and patient adherence influence the selection of specific actions. The explainability component is implemented as part of MitPlan, where we revised our planning-based approach to support explainability. We developed an evaluation instrument based on the System Causability Scale and other vetted surveys to evaluate the fidelity and interpretability of its explanations using a two-dimensional comparison study design. RESULTS The explainability component was implemented for MitPlan and tested in the context of a clinical case study. The fidelity and interpretability of the generated explanations were assessed in a physician-focused evaluation study involving 21 participants from two different specialties and two levels of experience. Results show that explanations provided by the explainability component in MitPlan are of acceptable fidelity and interpretability, and that the clinical justification of the actions in a treatment plan is important to physicians. CONCLUSION We created an explainability component that enriches an automated planning-based approach to solving the multimorbidity problem with meaningful explanations for actions in a treatment plan. This component relies on the task-network model to represent computer interpretable guidelines and as such can be ported to other approaches that use the same representation. Our evaluation study demonstrated that explanations supporting a physician's understanding of the clinical reasons for the actions in a treatment plan are useful and important.
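For illustration, a minimal sketch of how a task-network representation of a computer interpretable guideline could be annotated so that a post-hoc "why is this action in the plan?" explanation can be traced. The class and field names are hypothetical and are not taken from MitPlan's published implementation; the sketch covers only the action-tracing aspect, not plan revisions.

```python
# Hypothetical sketch: task-network nodes annotated with a clinical rationale and
# selection factors, so an action can be explained by tracing its decomposition path.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskNode:
    name: str                                          # task or leaf clinical action
    parent: Optional["TaskNode"] = None                # decomposition link in the task network
    rationale: str = ""                                # clinical justification from the guideline
    factors: dict = field(default_factory=dict)        # e.g. medication cost, adherence

def explain_action(action: TaskNode) -> str:
    """Assemble a textual explanation by walking from the action up to the root task."""
    chain, node = [], action
    while node is not None:
        chain.append(node)
        node = node.parent
    lines = [f"Action '{action.name}' is in the plan because:"]
    for ancestor in reversed(chain[1:]):
        lines.append(f"  - it realizes task '{ancestor.name}' ({ancestor.rationale})")
    if action.factors:
        detail = ", ".join(f"{k}={v}" for k, v in action.factors.items())
        lines.append(f"  - it was selected over alternatives given: {detail}")
    return "\n".join(lines)

# Example usage with made-up guideline content.
root = TaskNode("treat-multimorbid-patient", rationale="top-level treatment goal")
task = TaskNode("manage-hypertension", parent=root, rationale="hypertension CIG")
action = TaskNode("prescribe-ACE-inhibitor", parent=task, rationale="first-line agent",
                  factors={"medication_cost": "low", "adherence": "once daily"})
print(explain_action(action))
```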
Affiliation(s)
- Szymon Wilk
- Institute of Computing Science, Poznan University of Technology, Piotrowo 2, 60-965 Poznan, Poland
- Wojtek Michalowski
- Telfer School of Management, University of Ottawa, 55 Laurier Ave East, Ottawa, ON K1N 6N5, Canada
- Malvika Rao
- Telfer School of Management, University of Ottawa, 55 Laurier Ave East, Ottawa, ON K1N 6N5, Canada
- Marc Carrier
- The Ottawa Hospital, 725 Parkdale Ave, Ottawa, ON K1Y 4E9, Canada
2
Evans RP, Bryant LD, Russell G, Absolom K. Trust and acceptability of data-driven clinical recommendations in everyday practice: A scoping review. Int J Med Inform 2024;183:105342. PMID: 38266426. DOI: 10.1016/j.ijmedinf.2024.105342.
Abstract
BACKGROUND Increasing attention is being given to the analysis of large health datasets to derive new clinical decision support systems (CDSS). However, few data-driven CDSS are being adopted into clinical practice. Trust in these tools is believed to be fundamental for acceptance and uptake, but to date little attention has been given to defining or evaluating trust in clinical settings. OBJECTIVES A scoping review was conducted to explore how and where the acceptability and trustworthiness of data-driven CDSS have been assessed from the health professional's perspective. METHODS Medline, Embase, PsycInfo, Web of Science, Scopus, ACM Digital, IEEE Xplore and Google Scholar were searched in March 2022 using terms expanded from: "data-driven" AND "clinical decision support" AND "acceptability". Included studies focused on healthcare practitioner-facing data-driven CDSS relating directly to clinical care, and reported trust, or a proxy for it, as an outcome or discussed it. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) was followed in reporting this review. RESULTS 3291 papers were screened, with 85 primary research studies eligible for inclusion. Studies covered a diverse range of clinical specialisms and intended contexts, but hypothetical systems (24) outnumbered those in clinical use (18). Twenty-five studies measured trust via a wide variety of quantitative, qualitative and mixed methods. A further 24 discussed themes of trust without evaluating it explicitly, and from these, transparency, explainability, and supporting evidence were identified as factors influencing healthcare practitioner trust in data-driven CDSS. CONCLUSION There is a growing body of research on data-driven CDSS, but few studies have explored stakeholder perceptions in depth, with limited focused research on trustworthiness. Further research on healthcare practitioner acceptance, including requirements for transparency and explainability, should inform clinical implementation.
Affiliation(s)
- Ruth P Evans
- University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK.
- Gregor Russell
- Bradford District Care Trust, Bradford, New Mill, Victoria Rd, BD18 3LD, UK.
- Kate Absolom
- University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK.
3
Subramanian HV, Canfield C, Shank DB. Designing explainable AI to improve human-AI team performance: A medical stakeholder-driven scoping review. Artif Intell Med 2024;149:102780. PMID: 38462282. DOI: 10.1016/j.artmed.2024.102780.
Abstract
The rise of complex AI systems in healthcare and other sectors has led to a growing area of research called Explainable AI (XAI), designed to increase transparency. In this area, quantitative and qualitative studies focus on improving user trust and task performance by providing system- and prediction-level XAI features. We analyze stakeholder engagement events (interviews and workshops) on the use of AI for kidney transplantation. From these, we identify themes that we use to frame a scoping literature review of current XAI features. The stakeholder engagement process lasted over nine months and covered three stakeholder groups' workflows, determining where AI could intervene and assessing a mock XAI decision support system. Based on the stakeholder engagement, we identify four major themes relevant to designing XAI systems: 1) use of AI predictions, 2) information included in AI predictions, 3) personalization of AI predictions for individual differences, and 4) customization of AI predictions for specific cases. Using these themes, our scoping literature review finds that providing AI predictions before, during, or after decision-making could be beneficial depending on the complexity of the stakeholder's task. Additionally, for easy use cases, expert stakeholders such as surgeons prefer only the AI prediction and uncertainty estimates, with minimal to no additional XAI features. However, almost all stakeholders prefer to have optional XAI features to review when needed, especially in hard-to-predict cases. The literature also suggests that providing both system- and prediction-level information is necessary to build the user's mental model of the system appropriately. Although XAI features improve users' trust in the system, human-AI team performance is not always enhanced. Overall, stakeholders prefer to have agency over the XAI interface to control the level of information based on their needs and task complexity. We conclude with suggestions for future research, especially on customizing XAI features based on preferences and tasks.
Affiliation(s)
- Harishankar V Subramanian
- Engineering Management & Systems Engineering, Missouri University of Science and Technology, 600 W 14th Street, Rolla, MO 65409, United States of America
- Casey Canfield
- Engineering Management & Systems Engineering, Missouri University of Science and Technology, 600 W 14th Street, Rolla, MO 65409, United States of America
- Daniel B Shank
- Psychological Science, Missouri University of Science and Technology, 500 W 14th Street, Rolla, MO 65409, United States of America
4
Shevtsova D, Ahmed A, Boot IWA, Sanges C, Hudecek M, Jacobs JJL, Hort S, Vrijhoef HJM. Trust in and Acceptance of Artificial Intelligence Applications in Medicine: Mixed Methods Study. JMIR Hum Factors 2024;11:e47031. PMID: 38231544. PMCID: PMC10831593. DOI: 10.2196/47031.
Abstract
BACKGROUND Artificial intelligence (AI)-powered technologies are being used increasingly in almost all fields, including medicine. However, ensuring trust in and acceptance of such technologies is crucial for their successful implementation and timely adoption worldwide. Although AI applications in medicine provide advantages to the current health care system, there are also associated challenges regarding, for instance, data privacy, accountability, and equity and fairness, which could hinder their implementation. OBJECTIVE The aim of this study was to identify factors related to trust in and acceptance of novel AI-powered medical technologies and to assess the relevance of those factors among relevant stakeholders. METHODS This study used a mixed methods design. First, a rapid review of the existing literature was conducted to identify factors related to trust in and acceptance of novel AI applications in medicine. Next, an electronic survey including the rapid review-derived factors was disseminated among key stakeholder groups. Participants (N=22) were asked to assess on a 5-point Likert scale (1=irrelevant to 5=relevant) to what extent they thought the various factors (N=19) were relevant to trust in and acceptance of novel AI applications in medicine. RESULTS The rapid review (N=32 papers) yielded 110 factors related to trust and 77 factors related to acceptance of AI technology in medicine. Closely related factors were assigned to 1 of 19 overarching umbrella factors, which were further grouped into 4 categories: human-related (ie, the type of institution AI professionals originate from), technology-related (ie, the explainability and transparency of AI application processes and outcomes), ethical and legal (ie, data use transparency), and additional factors (ie, AI applications being environment friendly). The 19 umbrella factors were presented as survey statements, which were evaluated by relevant stakeholders. Survey participants (N=22) represented researchers (n=18, 82%), technology providers (n=5, 23%), hospital staff (n=3, 14%), and policy makers (n=3, 14%). Of the 19 factors, 16 (84%), spanning the human-related, technology-related, ethical and legal, and additional categories, were considered to be of high relevance to trust in and acceptance of novel AI applications in medicine. The patient's gender, age, and education level were found to be of low relevance (3/19, 16%). CONCLUSIONS The results of this study could help implementers of medical AI applications understand what drives trust in and acceptance of AI-powered technologies among key stakeholders in medicine. Consequently, this would allow implementers to identify strategies that facilitate trust in and acceptance of medical AI applications among key stakeholders and potential users.
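As an illustration of how such 5-point Likert ratings could be summarized, a short sketch follows. The paper does not report its aggregation procedure, so the factor names, values, and relevance cutoff below are assumptions.

```python
# Hypothetical sketch: summarize Likert ratings (1=irrelevant .. 5=relevant) given by
# stakeholders for a handful of umbrella factors and flag high-relevance factors.
import pandas as pd

# One row per participant, one column per umbrella factor (truncated, made-up data).
ratings = pd.DataFrame({
    "explainability_and_transparency": [5, 4, 5, 4],
    "data_use_transparency":           [5, 5, 4, 4],
    "patient_gender_age_education":    [2, 3, 2, 1],
})

summary = ratings.agg(["mean", "median"]).T
summary["high_relevance"] = summary["median"] >= 4  # assumed cutoff, not from the paper
print(summary)
```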
Affiliation(s)
- Daria Shevtsova
- Panaxea bv, Den Bosch, Netherlands
- Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Simon Hort
- Fraunhofer Institute for Production Technology, Aachen, Germany
5
Hoyos W, Aguilar J, Raciny M, Toro M. Case studies of clinical decision-making through prescriptive models based on machine learning. Comput Methods Programs Biomed 2023;242:107829. PMID: 37837889. DOI: 10.1016/j.cmpb.2023.107829.
Abstract
BACKGROUND The development of computational methodologies to support clinical decision-making is of vital importance to reduce morbidity and mortality rates. Specifically, prescriptive analytics is a promising area for supporting decision-making in the monitoring, treatment and prevention of diseases. These aspects remain a challenge for medical professionals and health authorities. MATERIALS AND METHODS In this study, we propose a methodology for the development of prescriptive models to support decision-making in clinical settings. The prescriptive model requires a predictive model to build the prescriptions. The predictive model is developed using fuzzy cognitive maps and the particle swarm optimization algorithm, while the prescriptive model is developed with an extension of fuzzy cognitive maps that combines them with genetic algorithms. We evaluated the proposed approach in three case studies related to monitoring (warfarin dose estimation), treatment (severe dengue) and prevention (geohelminthiasis) of diseases. RESULTS The developed prescriptive models demonstrated the ability to estimate warfarin doses in anticoagulated patients, prescribe treatment for severe dengue and generate actions aimed at the prevention of geohelminthiasis. Additionally, the predictive models can predict coagulation indices, severe dengue mortality and soil-transmitted helminth infections. CONCLUSIONS The developed models performed well in prescribing actions aimed at monitoring, treating and preventing diseases. This type of strategy supports decision-making in clinical settings. However, validation in health institutions is required before implementation.
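For illustration, a minimal sketch of the fuzzy cognitive map (FCM) inference step that such predictive models build on. The concepts, weights, and steepness parameter are invented here; in the paper's approach the weights would be learned with particle swarm optimization, and the prescriptive extension would search over controllable concepts with a genetic algorithm.

```python
# Generic FCM inference: concept activations are repeatedly propagated through a signed
# weight matrix and squashed into [0, 1] with a sigmoid. Weights below are made up.
import numpy as np

def fcm_step(state: np.ndarray, weights: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """One FCM update: new_i = sigmoid(old_i + sum_j w_ji * old_j)."""
    net = state + weights.T @ state
    return 1.0 / (1.0 + np.exp(-lam * net))

def fcm_infer(state: np.ndarray, weights: np.ndarray, steps: int = 20) -> np.ndarray:
    for _ in range(steps):
        state = fcm_step(state, weights)
    return state

# Toy example: three concepts (dose, INR, bleeding risk) with invented causal weights.
W = np.array([[0.0,  0.7,  0.4],
              [0.0,  0.0,  0.3],
              [0.0, -0.2,  0.0]])
x0 = np.array([0.6, 0.5, 0.2])
print(fcm_infer(x0, W))
```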
Affiliation(s)
- William Hoyos
- Grupo de Investigaciones Microbiológicas y Biomédicas de Córdoba, Universidad de Córdoba, Montería, Colombia; Grupo de Investigación en I+D+i en TIC, Universidad EAFIT, Medellín, Colombia
- Jose Aguilar
- Grupo de Investigación en I+D+i en TIC, Universidad EAFIT, Medellín, Colombia; Centro de Estudios en Microelectrónica y Sistemas Distribuidos, Universidad de Los Andes, Merida, Venezuela; IMDEA Networks Institute, Madrid, Spain
- Mayra Raciny
- Grupo de Investigaciones Microbiológicas y Biomédicas de Córdoba, Universidad de Córdoba, Montería, Colombia
- Mauricio Toro
- Grupo de Investigación en I+D+i en TIC, Universidad EAFIT, Medellín, Colombia
6
Kopanitsa G, Metsker O, Kovalchuk S. Machine Learning Methods for Pregnancy and Childbirth Risk Management. J Pers Med 2023;13:975. PMID: 37373964. DOI: 10.3390/jpm13060975.
Abstract
Machine learning methods enable medical systems to automatically generate data-driven decision support models from real-world data inputs, eliminating the need for explicit rule design. In this research, we investigated the application of machine learning methods in healthcare, specifically focusing on pregnancy and childbirth risks. The timely identification of risk factors during early pregnancy, along with risk management, mitigation, prevention, and adherence management, can significantly reduce adverse perinatal outcomes and complications for both mother and child. Given the existing burden on medical professionals, clinical decision support systems (CDSSs) can play a role in risk management. However, these systems require high-quality decision support models that are built on validated medical data and are clinically interpretable. To develop models for predicting childbirth risks and due dates, we conducted a retrospective analysis of electronic health records from the Perinatal Center of the Almazov Specialized Medical Center in Saint-Petersburg, Russia. The dataset, exported from the medical information system, consisted of structured and semi-structured data encompassing a total of 73,115 records for 12,989 female patients. Our proposed approach, which includes a detailed analysis of predictive model performance and interpretability, offers numerous opportunities for decision support in perinatal care provision. The high predictive performance achieved by our models ensures precise support for both individual patient care and overall health organization management.
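For illustration, a minimal sketch of an interpretable risk-prediction setup on tabular EHR-style data. The feature names, model family, metric, and synthetic data below are assumptions for demonstration, not details taken from the study.

```python
# Hypothetical sketch: fit a logistic regression risk model on synthetic perinatal-style
# features and inspect signed coefficients, which clinicians can review for plausibility.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # stand-ins for age, BMI, parity, systolic BP
y = (X @ np.array([0.8, 0.5, -0.3, 1.1]) + rng.normal(size=1000) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
for name, coef in zip(["age", "bmi", "parity", "systolic_bp"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")  # signed effect of each feature on predicted risk
```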
Affiliation(s)
- Georgy Kopanitsa
- Faculty of Digital Transformations, ITMO University, 4 Birzhevaya Liniya, 199034 Saint-Petersburg, Russia
- Almazov National Medical Research Centre, Ulitsa Akkuratova, 2, 197341 Saint-Petersburg, Russia
- Oleg Metsker
- Almazov National Medical Research Centre, Ulitsa Akkuratova, 2, 197341 Saint-Petersburg, Russia
- Sergey Kovalchuk
- Faculty of Digital Transformations, ITMO University, 4 Birzhevaya Liniya, 199034 Saint-Petersburg, Russia
7
Talias MA, Lamnisos D, Heraclides A. Editorial: Data science and health economics in precision public health. Front Public Health 2022;10:960282. PMID: 36561876. PMCID: PMC9765307. DOI: 10.3389/fpubh.2022.960282.
Affiliation(s)
- Michael A. Talias
- Healthcare Management Postgraduate Program, School of Economics and Management, Open University of Cyprus, Latsia, Cyprus
- Demetris Lamnisos
- Department of Health Sciences, European University Cyprus, Engomi, Cyprus
8
Di Martino F, Delmastro F. Explainable AI for clinical and remote health applications: a survey on tabular and time series data. Artif Intell Rev 2022;56:5261-5315. PMID: 36320613. PMCID: PMC9607788. DOI: 10.1007/s10462-022-10304-3.
Abstract
Nowadays, Artificial Intelligence (AI) has become a fundamental component of healthcare applications, both clinical and remote, but the best performing AI systems are often too complex to be self-explaining. Explainable AI (XAI) techniques are designed to unveil the reasoning behind a system's predictions and decisions, and they become even more critical when dealing with sensitive and personal health data. It is worth noting that XAI has not garnered the same attention across different research areas and data types, especially in healthcare. In particular, many clinical and remote health applications are based on tabular and time series data, respectively, and XAI is not commonly analysed for these data types, while computer vision and Natural Language Processing (NLP) are the reference applications. To provide an overview of XAI methods that are most suitable for tabular and time series data in the healthcare domain, this paper reviews the literature from the last five years, illustrating the types of explanations generated and the efforts made to evaluate their relevance and quality. Specifically, we identify clinical validation, consistency assessment, objective and standardised quality evaluation, and human-centered quality assessment as key features for ensuring effective explanations for end users. Finally, we highlight the main research challenges in the field as well as the limitations of existing XAI methods.
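For illustration, a minimal post-hoc explanation sketch on tabular data using permutation feature importance, one of the model-agnostic techniques typically covered by surveys of this kind. The synthetic dataset and feature names are invented here.

```python
# Generic model-agnostic explanation: measure how much shuffling each feature degrades
# held-out performance of a fitted classifier (permutation feature importance).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(800, 3))  # stand-ins for heart_rate, spo2, activity_level
y = (X[:, 0] * 1.2 - X[:, 2] * 0.7 + rng.normal(scale=0.5, size=800) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["heart_rate", "spo2", "activity_level"], result.importances_mean):
    print(f"{name}: {imp:.3f}")  # larger drop in score = more important feature
```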