1
Raina R, Nada A, Shah R, Aly H, Kadatane S, Abitbol C, Aggarwal M, Koyner J, Neyra J, Sethi SK. Artificial intelligence in early detection and prediction of pediatric/neonatal acute kidney injury: current status and future directions. Pediatr Nephrol 2024; 39:2309-2324. [PMID: 37889281; DOI: 10.1007/s00467-023-06191-7]
Abstract
Acute kidney injury (AKI) has a significant impact on the short-term and long-term clinical outcomes of pediatric and neonatal patients, and in these populations it is imperative to mitigate the pathways leading to AKI and to be prepared for early diagnosis and treatment of established AKI. Recently, artificial intelligence (AI) has provided increasingly capable predictive models for early detection/prediction of AKI using machine learning (ML). Drawing on evidence from risk scores and electronic alerts, this review offers a comprehensive overview of the current state of AI in AKI in pediatric/neonatal patients. In the pediatric population, AI models including XGBoost, logistic regression, support vector machines, decision trees, naïve Bayes, and risk stratification scores (Renal Angina Index (RAI), Nephrotoxic Injury Negated by Just-in-time Action (NINJA)) have shown success in predicting AKI using variables such as serum creatinine, urine output, and electronic health record (EHR) alerts. Similarly, in the neonatal population, the "Baby NINJA" model decreased nephrotoxic medication exposure by 42%, the rate of AKI by 78%, and the number of days with AKI by 68%. Furthermore, the "STARZ" risk stratification AI model predicted AKI within 7 days of NICU admission with an AUC of 0.93 and 0.96 in the validation and derivation cohorts, respectively. Many studies have also reported the value of biomarkers for predicting AKI in pediatric patients and neonates. Future directions include the application of AI along with biomarkers (NGAL, CysC, OPN, IL-18, B2M, etc.) in a Labelbox configuration to create a more robust and accurate model for predicting and detecting pediatric/neonatal AKI.
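The pediatric models surveyed above share a common shape: a few physiological predictors feeding a supervised classifier scored by AUC. A minimal sketch of that pattern, using entirely synthetic data and a logistic regression stand-in (the feature distributions and coefficients below are invented, not drawn from any cited study):

```python
# Illustrative sketch only: an AKI-risk classifier of the kind the review
# surveys. All data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: serum creatinine (mg/dL) and urine output (mL/kg/h)
creatinine = rng.normal(0.5, 0.2, n).clip(0.1)
urine_output = rng.normal(2.0, 0.7, n).clip(0.1)
# Synthetic label: higher creatinine and lower urine output raise AKI risk
logit = 4.0 * (creatinine - 0.5) - 1.5 * (urine_output - 2.0) - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([creatinine, urine_output])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC on held-out synthetic data: {auc:.2f}")
```

Real studies cited here used richer EHR feature sets and models such as XGBoost; the shape of the pipeline, not the numbers, is the point.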
Affiliation(s)
- Rupesh Raina
- Akron Nephrology Associates/Cleveland Clinic Akron General Medical Center, Akron, OH, USA.
- Department of Nephrology, Akron Children's Hospital, Akron, OH, USA.
- Department of Medicine, Northeast Ohio Medical University, Rootstown, OH, USA.
- Arwa Nada
- Le Bonheur Children's Hospital & St. Jude Research Hospital, The University of Tennessee Health Science Center, Memphis, TN, USA
- Raghav Shah
- Akron Nephrology Associates/Cleveland Clinic Akron General Medical Center, Akron, OH, USA
- Department of Medicine, Northeast Ohio Medical University, Rootstown, OH, USA
- Hany Aly
- Department of Neonatology, Cleveland Clinic Children's, Cleveland, OH, USA
- Saurav Kadatane
- Department of Medicine, Northeast Ohio Medical University, Rootstown, OH, USA
- Carolyn Abitbol
- Department of Pediatrics, Division of Pediatric Nephrology, University of Miami Miller School of Medicine/Holtz Children's Hospital, Miami, FL, USA
- Mihika Aggarwal
- Paediatric Nephrology & Paediatric Kidney Transplantation, Kidney and Urology Institute, Medanta, The Medicity Hospital, Gurgaon, India
- Jay Koyner
- Section of Nephrology, Department of Medicine, University of Chicago, Pritzker School of Medicine, Chicago, IL, USA
- Javier Neyra
- Department of Medicine, Division of Nephrology, University of Alabama at Birmingham, Birmingham, AL, USA
- Sidharth Kumar Sethi
- Paediatric Nephrology & Paediatric Kidney Transplantation, Kidney and Urology Institute, Medanta, The Medicity Hospital, Gurgaon, India
2
De Barros A, Abel F, Kolisnyk S, Geraci GC, Hill F, Engrav M, Samavedi S, Suldina O, Kim J, Rusakov A, Lebl DR, Mourad R. Determining Prior Authorization Approval for Lumbar Stenosis Surgery With Machine Learning. Global Spine J 2024; 14:1753-1759. [PMID: 36752058; DOI: 10.1177/21925682231155844]
Abstract
STUDY DESIGN Medical vignettes. OBJECTIVES Lumbar spinal stenosis (LSS) is a degenerative condition with a high prevalence in the elderly population that carries a significant economic burden and often requires spinal surgery. Prior authorization of surgical candidates is required before patients can be covered by a health plan and must be approved by medical directors (MDs), a process that is often subjective and clinician specific. In this study, we hypothesized that the prediction accuracy of machine learning (ML) methods regarding surgical candidates is comparable to that of a panel of MDs. METHODS Based on patient demographic factors, previous therapeutic history, symptoms, physical examinations, and imaging findings, we propose an ML model that computes the probability of a spinal surgery recommendation for LSS. The model implements a random forest trained on medical vignette data reviewed by MDs. Sets of 400 and 100 medical vignettes reviewed by MDs were used for training and testing, respectively. RESULTS The ML model achieved a root mean square error (RMSE) of 0.1123 between model predictions and ground truth, while the average RMSE between individual MDs' recommendations and ground truth was 0.2661. For binary classification, the AUROC and Cohen's kappa were 0.959 and 0.801, while the corresponding average metrics based on individual MDs' recommendations were 0.844 and 0.564, respectively. CONCLUSIONS Our results suggest that ML can be used to automate prior authorization approval of surgery for LSS with performance comparable to a panel of MDs.
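The three metrics reported above (RMSE against the panel's ground truth, AUROC, and Cohen's kappa) can all be computed with standard scikit-learn calls. A small sketch on made-up prediction vectors, purely to show how the metrics fit together:

```python
# Sketch of the evaluation metrics reported above, on invented vectors.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score, mean_squared_error

# Hypothetical ground-truth approval probabilities and model outputs
y_true_prob = np.array([0.9, 0.1, 0.8, 0.3, 0.7, 0.2])
y_pred_prob = np.array([0.85, 0.2, 0.75, 0.35, 0.6, 0.25])

# Regression-style agreement with the panel's ground truth
rmse = np.sqrt(mean_squared_error(y_true_prob, y_pred_prob))

# Binarize at 0.5 for the classification metrics
y_true = (y_true_prob >= 0.5).astype(int)
auroc = roc_auc_score(y_true, y_pred_prob)
kappa = cohen_kappa_score(y_true, (y_pred_prob >= 0.5).astype(int))
print(rmse, auroc, kappa)
```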
Affiliation(s)
- Amaury De Barros
- Toulouse NeuroImaging Center (ToNIC), University of Toulouse Paul Sabatier-INSERM, Toulouse, France
- Neuroscience (Neurosurgery) Center, Toulouse University Hospital, Toulouse, France
- Raphael Mourad
- Remedy Logic, New York, NY, USA
- University of Toulouse, Toulouse, France
3
Makarov V, Chabbert C, Koletou E, Psomopoulos F, Kurbatova N, Ramirez S, Nelson C, Natarajan P, Neupane B. Good machine learning practices: Learnings from the modern pharmaceutical discovery enterprise. Comput Biol Med 2024; 177:108632. [PMID: 38788373; DOI: 10.1016/j.compbiomed.2024.108632]
Abstract
Machine learning (ML) and artificial intelligence (AI) have become an integral part of the drug discovery and development value chain. Many teams in the pharmaceutical industry nevertheless report challenges with the timely, cost-effective, and meaningful delivery of ML- and AI-powered solutions for their scientists. We sought to better understand these challenges and how to overcome them by performing an industry-wide assessment of AI and ML practices. Here we report results of a systematic business analysis of the personas in the modern pharmaceutical discovery enterprise in relation to their work with AI and ML technologies. We identify 23 common business problems that individuals in these roles face when they encounter AI and ML technologies at work, and describe best practices (Good Machine Learning Practices) that address these issues.
Affiliation(s)
- Vladimir Makarov
- The Pistoia Alliance, 401 Edgewater Place, Suite 600, Wakefield, MA, 01880, USA.
4
Templeton K. Sex and Gender in Orthopaedic Research: How Do We Continue to Move the Needle? J Bone Joint Surg Am 2024:00004623-990000000-01142. [PMID: 38905354; DOI: 10.2106/jbjs.24.00605]
Affiliation(s)
- Kimberly Templeton
- Department of Orthopaedic Surgery, University of Kansas Medical Center, Kansas City, Kansas
5
Westbrook JI, Wabe N, Raban MZ. Using AI to improve medication safety. Nat Med 2024; 30:1531-1532. [PMID: 38720001; DOI: 10.1038/s41591-024-02980-1]
Affiliation(s)
- Johanna I Westbrook
- Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia.
- Nasir Wabe
- Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia
- Magdalena Z Raban
- Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia
6
Zondag AGM, Rozestraten R, Grimmelikhuijsen SG, Jongsma KR, van Solinge WW, Bots ML, Vernooij RWM, Haitjema S. The Effect of Artificial Intelligence on Patient-Physician Trust: Cross-Sectional Vignette Study. J Med Internet Res 2024; 26:e50853. [PMID: 38805702; PMCID: PMC11167322; DOI: 10.2196/50853]
Abstract
BACKGROUND Clinical decision support systems (CDSSs) based on routine care data, using artificial intelligence (AI), are increasingly being developed. Previous studies focused largely on the technical aspects of using AI, but the acceptability of these technologies to patients remains unclear. OBJECTIVE We aimed to investigate whether patient-physician trust is affected when medical decision-making is supported by a CDSS. METHODS We conducted a vignette study among the patient panel (N=860) of the University Medical Center Utrecht, the Netherlands. Patients were randomly assigned to 4 groups: the intervention or control group of a high-risk or a low-risk case. In both the high-risk and low-risk case groups, a physician made a treatment decision with (intervention groups) or without (control groups) the support of a CDSS. Using a questionnaire with a 7-point Likert scale, with 1 indicating "strongly disagree" and 7 indicating "strongly agree," we collected data on patient-physician trust in 3 dimensions: competence, integrity, and benevolence. We assessed differences in patient-physician trust between the control and intervention groups per case using Mann-Whitney U tests, and potential effect modification by the participant's sex, age, education level, general trust in health care, and general trust in technology using multivariate analyses of (co)variance. RESULTS In total, 398 patients participated. In the high-risk case, median perceived competence and integrity were lower in the intervention group than in the control group, but the differences were not statistically significant (5.8 vs 5.6; P=.16 and 6.3 vs 6.0; P=.06, respectively). However, the effect of the CDSS on the perceived competence of the physician depended on the participant's sex (P=.03). Although no between-group differences were found in men, in women the perceived competence and integrity of the physician were significantly lower in the intervention group than in the control group (P=.009 and P=.01, respectively). In the low-risk case, no differences in trust between the groups were found. However, greater trust in technology positively influenced perceived benevolence and integrity in the low-risk case (P=.009 and P=.04, respectively). CONCLUSIONS We found that, in general, patient-physician trust was high. However, our findings indicate a potentially negative effect of AI applications on the patient-physician relationship, especially among women and in high-risk situations. Trust in technology in general might increase the likelihood that patients embrace the use of CDSSs by treating professionals.
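The per-case group comparison described above is a Mann-Whitney U test on ordinal Likert ratings. A minimal sketch with invented trust scores (not the study's data):

```python
# Sketch of a Mann-Whitney U test on 7-point Likert trust ratings.
# Both samples below are invented for illustration.
from scipy.stats import mannwhitneyu

control = [6, 7, 6, 5, 7, 6, 6, 5, 7, 6]       # physician alone
intervention = [5, 6, 5, 5, 6, 4, 5, 6, 5, 5]  # physician + CDSS

stat, p = mannwhitneyu(control, intervention, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

The Mann-Whitney U test is the natural choice here because Likert ratings are ordinal and typically non-normal, so comparing rank distributions is safer than comparing means.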
Affiliation(s)
- Anna G M Zondag
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Raoul Rozestraten
- Utrecht University School of Governance, Utrecht University, Utrecht, Netherlands
- Karin R Jongsma
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Wouter W van Solinge
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Michiel L Bots
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Robin W M Vernooij
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
- Department of Nephrology and Hypertension, University Medical Center Utrecht, Utrecht, Netherlands
- Saskia Haitjema
- Central Diagnostic Laboratory, University Medical Center Utrecht, Utrecht University, Utrecht, Netherlands
7
Alhuwaydi AM. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions - A Narrative Review for a Comprehensive Insight. Risk Manag Healthc Policy 2024; 17:1339-1348. [PMID: 38799612; PMCID: PMC11127648; DOI: 10.2147/rmhp.s461562]
Abstract
Mental health is an essential component of the health and well-being of a person and community, and it is critical for the individual, societal, and socio-economic development of any country. Mental healthcare is in a transformative era, with emerging technologies such as artificial intelligence (AI) reshaping the screening, diagnosis, and treatment of psychiatric illness. The present narrative review discusses the current landscape and the role of AI in mental healthcare, including screening, diagnosis, and treatment. Furthermore, it highlights the key challenges, limitations, and prospects of AI in providing mental healthcare based on the existing literature. The literature search for this narrative review covered PubMed, Saudi Digital Library (SDL), Google Scholar, Web of Science, and IEEE Xplore, and we included only English-language articles published in the last five years. Keywords used in combination with Boolean operators ("AND" and "OR") were the following: "Artificial intelligence", "Machine learning", "Deep learning", "Early diagnosis", "Treatment", "interventions", "ethical consideration", and "mental Healthcare". Our review revealed that, equipped with predictive analytics capabilities, AI can improve treatment planning by predicting an individual's response to various interventions. Predictive analytics, which uses historical data to formulate preventative interventions, aligns with the move toward individualized and preventive mental healthcare. In the screening and diagnostic domains, subsets of AI such as machine learning and deep learning have been shown to analyze various mental health data sets and predict the patterns associated with various mental health problems. However, few studies have evaluated collaboration between healthcare professionals and AI in delivering mental healthcare, even though these sensitive problems require empathy, human connection, and holistic, personalized, and multidisciplinary approaches. Ethical issues, cybersecurity, a lack of diversity in data analytics, cultural sensitivity, and language barriers remain concerns for implementing this approach in mental healthcare. Therefore, future comparative trials with larger sample sizes and data sets are warranted to evaluate different AI models used in mental healthcare across regions and to fill the existing knowledge gaps.
Affiliation(s)
- Ahmed M Alhuwaydi
- Department of Internal Medicine, Division of Psychiatry, College of Medicine, Jouf University, Sakaka, Saudi Arabia
8
Franklin G, Stephens R, Piracha M, Tiosano S, Lehouillier F, Koppel R, Elkin PL. The Sociodemographic Biases in Machine Learning Algorithms: A Biomedical Informatics Perspective. Life (Basel) 2024; 14:652. [PMID: 38929638; PMCID: PMC11204917; DOI: 10.3390/life14060652]
Abstract
Artificial intelligence models represented in machine learning algorithms are promising tools for risk assessment used to guide clinical and other health care decisions. Machine learning algorithms, however, may house biases that propagate stereotypes, inequities, and discrimination, contributing to socioeconomic health care disparities. These include biases related to sociodemographic characteristics such as race, ethnicity, gender, age, insurance, and socioeconomic status arising from the use of erroneous electronic health record data. Additionally, there is concern that training data and algorithmic biases in large language models pose potential drawbacks. These biases affect the lives and livelihoods of a significant percentage of the population in the United States and globally, and the social and economic consequences of the associated backlash should not be underestimated. Here, we outline some of the sociodemographic, training data, and algorithmic biases that undermine sound health care risk assessment and medical decision-making and that should be addressed in the health care system. We present a perspective and overview of these biases by gender, race, ethnicity, age, and historically marginalized communities, covering algorithmic bias, biased evaluations, implicit bias, selection/sampling bias, socioeconomic status bias, biased data distributions, cultural bias, insurance status bias, confirmation bias, information bias, and anchoring bias. We also make recommendations to improve large language model training data, including de-biasing techniques such as counterfactual role-reversed sentences during knowledge distillation, fine-tuning, prefix attachment at training time, the use of toxicity classifiers, retrieval-augmented generation, and algorithmic modification to mitigate these biases moving forward.
Affiliation(s)
- Gillian Franklin
- Department of Biomedical Informatics, University at Buffalo, Buffalo, NY 14203, USA; (G.F.); (R.S.); (M.P.); (F.L.); (R.K.)
- Department of Veterans Affairs, Knowledge Based Systems and Western New York, Veterans Affairs, Buffalo, NY 14215, USA
- Rachel Stephens
- Department of Biomedical Informatics, University at Buffalo, Buffalo, NY 14203, USA; (G.F.); (R.S.); (M.P.); (F.L.); (R.K.)
- Muhammad Piracha
- Department of Biomedical Informatics, University at Buffalo, Buffalo, NY 14203, USA; (G.F.); (R.S.); (M.P.); (F.L.); (R.K.)
- Shmuel Tiosano
- Department of Biomedical Informatics, University at Buffalo, Buffalo, NY 14203, USA; (G.F.); (R.S.); (M.P.); (F.L.); (R.K.)
- Frank Lehouillier
- Department of Biomedical Informatics, University at Buffalo, Buffalo, NY 14203, USA; (G.F.); (R.S.); (M.P.); (F.L.); (R.K.)
- Department of Veterans Affairs, Knowledge Based Systems and Western New York, Veterans Affairs, Buffalo, NY 14215, USA
- Ross Koppel
- Department of Biomedical Informatics, University at Buffalo, Buffalo, NY 14203, USA; (G.F.); (R.S.); (M.P.); (F.L.); (R.K.)
- Institute for Biomedical Informatics, Perelman School of Medicine, and Sociology Department, University of Pennsylvania, Philadelphia, PA 19104, USA
- Peter L. Elkin
- Department of Biomedical Informatics, University at Buffalo, Buffalo, NY 14203, USA; (G.F.); (R.S.); (M.P.); (F.L.); (R.K.)
- Department of Veterans Affairs, Knowledge Based Systems and Western New York, Veterans Affairs, Buffalo, NY 14215, USA
9
Haupt S, Carcel C, Norton R. Neglecting sex and gender in research is a public-health risk. Nature 2024; 629:527-530. [PMID: 38750229; DOI: 10.1038/d41586-024-01372-2]
10
Rokhshad R, Zhang P, Mohammad-Rahimi H, Pitchika V, Entezari N, Schwendicke F. Accuracy and consistency of chatbots versus clinicians for answering pediatric dentistry questions: A pilot study. J Dent 2024; 144:104938. [PMID: 38499280; DOI: 10.1016/j.jdent.2024.104938]
Abstract
OBJECTIVES Artificial intelligence has applications such as large language models (LLMs), which simulate human-like conversations, but the potential of LLMs in healthcare has not been fully evaluated. This pilot study assessed the accuracy and consistency of chatbots and clinicians in answering common questions in pediatric dentistry. METHODS Two expert pediatric dentists developed thirty true-or-false questions covering different aspects of pediatric dentistry. Publicly accessible chatbots (Google Bard, ChatGPT 4, ChatGPT 3.5, Llama, Sage, Claude 2 100k, Claude-instant, Claude-instant-100k, and Google Palm) were employed to answer the questions (3 independent new conversations each). Three groups of clinicians (general dentists, pediatric specialists, and students; n = 20/group) also answered. Responses were graded by two pediatric dentistry faculty members, along with a third independent pediatric dentist. The resulting accuracies (percentage of correct responses) were compared using analysis of variance (ANOVA), and post hoc pairwise group comparisons were corrected using Tukey's HSD method. Cronbach's alpha was calculated to determine consistency. RESULTS Pediatric dentists were significantly more accurate (mean ± SD 96.67% ± 4.3%) than other clinicians and chatbots (p < 0.001). General dentists (88.0% ± 6.1%) also demonstrated significantly higher accuracy than chatbots (p < 0.001), followed by students (80.8% ± 6.9%). ChatGPT showed the highest accuracy (78% ± 3%) among chatbots. All chatbots except ChatGPT 3.5 showed acceptable consistency (Cronbach's alpha > 0.7). CLINICAL SIGNIFICANCE Based on this pilot study, chatbots may be valuable adjuncts for educational purposes and for distributing information to patients. However, they are not yet ready to serve as substitutes for human clinicians in diagnostic decision-making. CONCLUSION In this pilot study, chatbots showed lower accuracy than dentists, and they cannot yet be recommended for clinical pediatric dentistry.
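Consistency across the three repeated conversations was summarized with Cronbach's alpha, which is short to compute by hand. A sketch on an invented questions-by-runs 0/1 correctness matrix (not the study's data):

```python
# Sketch of Cronbach's alpha as a consistency measure across repeated runs.
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha; rows = questions, columns = repeated runs."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of repeated runs
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-run variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of row totals
    return k / (k - 1) * (1 - item_var / total_var)

# Invented 0/1 correctness for 10 questions over 3 repeated conversations
runs = np.array([
    [1, 1, 1], [1, 1, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0],
    [1, 1, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1],
])
alpha = cronbach_alpha(runs)
print(f"Cronbach's alpha = {alpha:.2f}")
```

By the study's threshold, a value above 0.7 would count as acceptable consistency.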
Affiliation(s)
- Rata Rokhshad
- Department of Pediatric Dentistry, University of Alabama at Birmingham, Birmingham, AL, USA.
- Ping Zhang
- Department of Pediatric Dentistry, University of Alabama at Birmingham, Birmingham, AL, USA
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
- Vinay Pitchika
- Department of Conservative Dentistry and Periodontology, LMU Klinikum Munich, Germany
- Niloufar Entezari
- Department of Pediatric Dentistry, School of Dentistry, Qom University of Medical Sciences, Qom, Iran
- Falk Schwendicke
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany; Department of Conservative Dentistry and Periodontology, LMU Klinikum Munich, Germany
11
Suzuki LA, Caso TJ, Yucel A, Asad A, Kokaze H. Contextualizing Positionality, Intersectionality, and Intelligence in the Anthropocene. J Intell 2024; 12:45. [PMID: 38667712; PMCID: PMC11050987; DOI: 10.3390/jintelligence12040045]
Abstract
The geological epoch of the Anthropocene has challenged traditional definitions of what intellectual abilities are necessary to creatively problem-solve, understand, and address contemporary societal and environmental crises. If we hope to make meaningful changes to how our society addresses these complex issues and pave the way for a better future for generations to come, we must advance traditional theories and measures of higher-order abilities to reflect equity and inclusion. To this end, we must address global issues by integrating the complexities of intersectional identities as they impact our understanding of what constitutes intelligence in individuals, groups, and diverse communities. This re-envisioning of intelligence presents new complexities for understanding and challenges for our field beyond the boundaries of what has been previously touted by many disciplines, including psychology. It is an opportunity to re-envision what it means to be intelligent in a diverse global context while also honoring and recognizing the value of difference, positionality, and other ways of knowing.
Affiliation(s)
- Lisa A. Suzuki
- Department of Applied Psychology, New York University, New York, NY 10003, USA;
- Taymy J. Caso
- Educational Psychology, University of Alberta, Edmonton, AB T6G 1H9, Canada; (T.J.C.); (A.A.)
- Aysegul Yucel
- Department of Counseling and Clinical Psychology, John Jay College of Criminal Justice, New York, NY 10019, USA;
- Ahad Asad
- Educational Psychology, University of Alberta, Edmonton, AB T6G 1H9, Canada; (T.J.C.); (A.A.)
- Haruka Kokaze
- Department of Applied Psychology, New York University, New York, NY 10003, USA;
12
Juwara L, El-Hussuna A, El Emam K. An evaluation of synthetic data augmentation for mitigating covariate bias in health data. Patterns (N Y) 2024; 5:100946. [PMID: 38645766; PMCID: PMC11026977; DOI: 10.1016/j.patter.2024.100946]
Abstract
Data bias is a major concern in biomedical research, especially when evaluating large-scale observational datasets. It leads to imprecise predictions and inconsistent estimates in standard regression models. We compare the performance of commonly used bias-mitigating approaches (resampling, algorithmic, and post hoc approaches) against a synthetic data-augmentation method that utilizes sequential boosted decision trees to synthesize under-represented groups. The approach is called synthetic minority augmentation (SMA). Through simulations and analysis of real health datasets on a logistic regression workload, the approaches are evaluated across various bias scenarios (types and severity levels). Performance was assessed based on area under the curve, calibration (Brier score), precision of parameter estimates, confidence interval overlap, and fairness. Overall, SMA produces the closest results to the ground truth in low to medium bias (50% or less missing proportion). In high bias (80% or more missing proportion), the advantage of SMA is not obvious, with no specific method consistently outperforming others.
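The workflow evaluated above (augment the under-represented group, refit, then score discrimination and calibration) can be sketched with scikit-learn. Plain random upsampling stands in here for the paper's boosted-tree synthesis (SMA), and all data are synthetic:

```python
# Sketch: augment an under-represented group before fitting, then score
# discrimination (AUC) and calibration (Brier score). Synthetic data only;
# random upsampling is a stand-in for the paper's SMA method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.utils import resample

rng = np.random.default_rng(1)
n_major, n_minor = 900, 60   # biased cohort: one group heavily under-sampled
X_major = rng.normal(0.0, 1.0, (n_major, 2))
X_minor = rng.normal(1.0, 1.0, (n_minor, 2))
y_major = (X_major[:, 0] + rng.normal(0, 1, n_major) > 0).astype(int)
y_minor = (X_minor[:, 0] + rng.normal(0, 1, n_minor) > 1.0).astype(int)

# Augment the minority group up to the majority's size
X_minor_up, y_minor_up = resample(
    X_minor, y_minor, n_samples=n_major, random_state=1)
X = np.vstack([X_major, X_minor_up])
y = np.concatenate([y_major, y_minor_up])

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, p)
brier = brier_score_loss(y, p)
print("AUC:", auc, "Brier:", brier)
```

The paper's point is the evaluation harness as much as the method: each mitigation strategy is judged jointly on AUC, Brier score, parameter precision, and fairness, not on a single metric.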
Affiliation(s)
- Lamin Juwara
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Research Institute, Children’s Hospital of Eastern Ontario, Ottawa, ON, Canada
- Khaled El Emam
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Research Institute, Children’s Hospital of Eastern Ontario, Ottawa, ON, Canada
- Data Science, Replica Analytics Ltd., Ottawa, ON, Canada
13
Wiener RC, Patel JS. Oral and oropharyngeal cancer screening and tobacco cessation discussions, NHANES 2011-2018. Community Dent Oral Epidemiol 2024; 52:248-254. [PMID: 37853992; DOI: 10.1111/cdoe.12921]
Abstract
OBJECTIVE Oral cavity and oropharyngeal cancer (OOPC) is a devastating disease often caught in late stages. People who use tobacco are at higher risk of OOPC. Tobacco cessation discussions and OOPC screenings are important factors in decreasing the risk of OOPC or its late-stage diagnosis. As research on sex differences has been increasing, from biomedical to psychological and sociological determinants, there is a potential difference by sex in who is more likely to have a tobacco cessation discussion and an OOPC screening. The objective of this study was to determine whether sex is associated with tobacco cessation discussions and OOPC screenings conducted by dental healthcare professionals among participants who currently use tobacco. METHOD Data from 8 years of the National Health and Nutrition Examination Survey (2011-2018) were merged. Data from participants aged 30 years and above who self-reported current tobacco use and a dental visit within the previous year, and who responded to questions about oral cancer screening, were analysed using frequency determination and logistic regression. Having neither an OOPC screening nor a discussion about the benefits of not using tobacco was the outcome in the analysis. RESULTS Overall, 22.1% of participants had an OOPC screening by a dental professional within the previous year. Of the 41% who reported having had a conversation with a dental professional within the previous year about the benefits of tobacco cessation, 9.8% reported having both the conversation and an OOPC screening. Males were less likely than females to have had neither an OOPC screening nor advice about tobacco cessation (adjusted odds ratio: 0.74; 95% CI: 0.57, 0.96). CONCLUSION There is an increased need for OOPC screening and discussion of tobacco use by dental professionals among their patients who use tobacco, particularly for female patients.
Affiliation(s)
- R Constance Wiener
- Department of Dental Public Health and Professional Practice, School of Dentistry, West Virginia University, Morgantown, West Virginia, USA
| | - Jay S Patel
- Department of Oral Health Sciences Kornberg School of Dentistry, Temple University, Philadelphia, Pennsylvania, USA
| |
Collapse
|
14
|
Grossmann K, Risch M, Markovic A, Aeschbacher S, Weideli OC, Velez L, Kovac M, Pereira F, Wohlwend N, Risch C, Hillmann D, Lung T, Renz H, Twerenbold R, Rothenbühler M, Leibovitz D, Kovacevic V, Klaver P, Brakenhoff TB, Franks B, Mitratza M, Downward GS, Dowling A, Montes S, Veen D, Grobbee DE, Cronin M, Conen D, Goodale BM, Risch L. Sex-specific differences in physiological parameters related to SARS-CoV-2 infections among a national cohort (COVI-GAPP study). PLoS One 2024; 19:e0292203. [PMID: 38446766 PMCID: PMC10917257 DOI: 10.1371/journal.pone.0292203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Accepted: 01/04/2024] [Indexed: 03/08/2024] Open
Abstract
Considering sex as a biological variable in modern digital health solutions, we investigated sex-specific differences in the trajectory of four physiological parameters across a COVID-19 infection. A wearable medical device measured breathing rate, heart rate, heart rate variability, and wrist skin temperature in 1163 participants (mean age = 44.1 years, standard deviation [SD] = 5.6; 667 [57%] females). Participants reported daily symptoms and confounders in a complementary app. A machine learning algorithm retrospectively ingested daily biophysical parameters to detect COVID-19 infections. COVID-19 serology samples were collected from all participants at baseline and follow-up. We analysed potential sex-specific differences in physiology and antibody titres using multilevel modelling and t-tests. Over 1.5 million hours of physiological data were recorded. During the symptomatic period of infection, men demonstrated larger increases in skin temperature, breathing rate, and heart rate, as well as larger decreases in heart rate variability, than women. The COVID-19 infection detection algorithm performed similarly well for men and women. Our study is among the first to provide evidence of differential physiological responses to COVID-19 between females and males, highlighting the potential of wearable technology to inform future precision medicine approaches.
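The sex comparisons above rely on multilevel models and t-tests. A minimal sketch of the simplest such comparison, Welch's two-sample t statistic for groups with unequal variances, on invented numbers (not COVI-GAPP data):

```python
from statistics import mean, variance
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)          # sample variances (n-1 denominator)
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical peak increases in wrist skin temperature (deg C) during the
# symptomatic period, by sex; values are invented for illustration.
men = [0.8, 1.0, 1.2, 1.0]
women = [0.4, 0.6, 0.5, 0.5]
print(round(welch_t(men, women), 2))  # positive value: larger increase in men
```

The study's repeated daily measurements per participant are why it uses multilevel models rather than a plain t-test; the statistic above treats each value as independent.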
Affiliation(s)
- Kirsten Grossmann
- Private University in the Principality of Liechtenstein (UFL), Triesen, Principality of Liechtenstein
- Dr Risch Medical Laboratory, Vaduz, Liechtenstein
- Martin Risch
- Dr Risch Medical Laboratory, Vaduz, Liechtenstein
- Central Laboratory, Kantonsspital Graubünden, Chur, Switzerland
- Dr Risch Medical Laboratory, Buchs, Switzerland
- Andjela Markovic
- Ava AG, Zürich, Switzerland
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Department of Pulmonology, University Hospital Zurich, Zurich, Switzerland
- Stefanie Aeschbacher
- Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland
- Ornella C. Weideli
- Dr Risch Medical Laboratory, Vaduz, Liechtenstein
- Soneva Fushi, Boduthakurufaanu Magu, Male, Maldives
- Laura Velez
- Dr Risch Medical Laboratory, Vaduz, Liechtenstein
- Marc Kovac
- Dr Risch Medical Laboratory, Buchs, Switzerland
- Fiona Pereira
- Department of Metabolism, Digestive Diseases and Reproduction, Imperial College London, South Kensington Campus, London, United Kingdom
- Thomas Lung
- Dr Risch Medical Laboratory, Buchs, Switzerland
- Harald Renz
- Institute of Laboratory Medicine and Pathobiochemistry, Molecular Diagnostics, Philipps University Marburg, Marburg, Germany
- Raphael Twerenbold
- Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland
- Department of Cardiology and University Center of Cardiovascular Science, University Heart and Vascular Center Hamburg, Hamburg, Germany
- Marianna Mitratza
- UMC Utrecht, Utrecht, The Netherlands
- Julius Global Health, the Julius Center for Health Sciences and Primary Care, University Medical Center, Utrecht, The Netherlands
- George S. Downward
- UMC Utrecht, Utrecht, The Netherlands
- Julius Global Health, the Julius Center for Health Sciences and Primary Care, University Medical Center, Utrecht, The Netherlands
- Ariel Dowling
- Takeda Pharmaceuticals, Digital Clinical Devices, Cambridge, Massachusetts, United States of America
- Duco Veen
- Department of Methodology and Statistics, Utrecht University, Utrecht, The Netherlands
- Optentia Research Programme, North-West University, Potchefstroom, South Africa
- Diederick E. Grobbee
- UMC Utrecht, Utrecht, The Netherlands
- Julius Global Health, the Julius Center for Health Sciences and Primary Care, University Medical Center, Utrecht, The Netherlands
- David Conen
- Population Health Research Institute, McMaster University, Hamilton, Canada
- Lorenz Risch
- Private University in the Principality of Liechtenstein (UFL), Triesen, Principality of Liechtenstein
- Dr Risch Medical Laboratory, Vaduz, Liechtenstein
- Dr Risch Medical Laboratory, Buchs, Switzerland
- Center of Laboratory Medicine, University Institute of Clinical Chemistry, University of Bern, Inselspital, Bern, Switzerland
15
Lv C, Guo W, Yin X, Liu L, Huang X, Li S, Zhang L. Innovative applications of artificial intelligence during the COVID-19 pandemic. INFECTIOUS MEDICINE 2024; 3:100095. [PMID: 38586543 PMCID: PMC10998276 DOI: 10.1016/j.imj.2024.100095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Revised: 12/16/2023] [Accepted: 02/18/2024] [Indexed: 04/09/2024]
Abstract
The COVID-19 pandemic has created unprecedented challenges worldwide. Artificial intelligence (AI) technologies hold tremendous potential for tackling key aspects of pandemic management and response. In the present review, we discuss the possibilities of AI technology in addressing the global challenges posed by the COVID-19 pandemic. First, we outline the multiple impacts of the current pandemic on public health, the economy, and society. Next, we focus on the innovative applications of advanced AI technologies in key areas such as COVID-19 prediction, detection, control, and drug discovery for treatment. Specifically, AI-based predictive analytics models can use clinical, epidemiological, and omics data to forecast disease spread and patient outcomes. Additionally, deep neural networks enable rapid diagnosis through medical imaging. Intelligent systems can support risk assessment, decision-making, and social sensing, thereby improving epidemic control and public health policies. Furthermore, high-throughput virtual screening enables AI to accelerate the identification of therapeutic drug candidates and opportunities for drug repurposing. Finally, we discuss future research directions for AI technology in combating COVID-19, emphasizing the importance of interdisciplinary collaboration. Though promising, barriers related to model generalization, data quality, infrastructure readiness, and ethical risks must be addressed to fully translate these innovations into real-world impact. Multidisciplinary collaboration engaging diverse expertise and stakeholders is imperative for developing robust, responsible, and human-centered AI solutions against COVID-19 and future public health emergencies.
Affiliation(s)
- Chenrui Lv
- Huazhong Agricultural University, Wuhan 430070, China
- Wenqiang Guo
- Huazhong Agricultural University, Wuhan 430070, China
- Xinyi Yin
- Huazhong Agricultural University, Wuhan 430070, China
- Liu Liu
- National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention; Chinese Center for Tropical Diseases Research, Shanghai 200001, China
- Xinlei Huang
- Huazhong Agricultural University, Wuhan 430070, China
- Shimin Li
- Huazhong Agricultural University, Wuhan 430070, China
- Li Zhang
- Huazhong Agricultural University, Wuhan 430070, China
16
Weidener L, Fischer M. Proposing a Principle-Based Approach for Teaching AI Ethics in Medical Education. JMIR MEDICAL EDUCATION 2024; 10:e55368. [PMID: 38285931 PMCID: PMC10891487 DOI: 10.2196/55368] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Revised: 01/02/2024] [Accepted: 01/29/2024] [Indexed: 01/31/2024]
Abstract
The use of artificial intelligence (AI) in medicine, potentially leading to substantial advancements such as improved diagnostics, has been of increasing scientific and societal interest in recent years. However, the use of AI raises new ethical challenges, such as an increased risk of bias and potential discrimination against patients, as well as misdiagnoses potentially leading to over- or underdiagnosis, with substantial consequences for patients. Recognizing these challenges, current research underscores the importance of integrating AI ethics into medical education. This viewpoint paper introduces a comprehensive set of ethical principles for teaching AI ethics in medical education. This principle-based approach is designed to be adaptive, addressing not only current but also emerging ethical challenges associated with the use of AI in medicine. The paper presents a theoretical analysis of the current academic discourse on AI ethics in medical education, identifying potential gaps and limitations. The inherent interconnectivity and interdisciplinary nature of the anticipated challenges are illustrated through a focused discussion of "informed consent" in the context of AI in medicine and medical education. The paper proposes a principle-based approach to AI ethics education, building on the 4 principles of medical ethics (autonomy, beneficence, nonmaleficence, and justice) and extending them by integrating 3 public health ethics principles: efficiency, common good orientation, and proportionality. This approach offers a foundational framework for addressing the anticipated ethical challenges of using AI in medicine, as called for in the current academic discourse.
By incorporating the 3 principles of public health ethics, this principle-based approach ensures that medical ethics education remains relevant and responsive to the dynamic landscape of AI integration in medicine. As the advancement of AI technologies in medicine is expected to increase, medical ethics education must adapt and evolve accordingly. The proposed principle-based approach for teaching AI ethics in medical education provides an important foundation to ensure that future medical professionals are not only aware of the ethical dimensions of AI in medicine but also equipped to make informed ethical decisions in their practice. Future research is required to develop problem-based and competency-oriented learning objectives and educational content for the proposed principle-based approach to teaching AI ethics in medical education.
Affiliation(s)
- Lukas Weidener
- UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
- Michael Fischer
- UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
17
Campesi I, Franconi F, Serra PA. The Appropriateness of Medical Devices Is Strongly Influenced by Sex and Gender. Life (Basel) 2024; 14:234. [PMID: 38398743 PMCID: PMC10890141 DOI: 10.3390/life14020234] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Revised: 01/22/2024] [Accepted: 02/05/2024] [Indexed: 02/25/2024] Open
Abstract
Until now, research has been performed mainly in men, with low recruitment of women; consequently, biological, physiological, and pathophysiological mechanisms are less well understood in women. Without data obtained in women, it is impossible to apply the results of research appropriately to women. This issue also applies to medical devices (MDs), and numerous problems linked to scarce pre-market research and clinical trials on MDs were evidenced after their introduction to the market. Globally, some MDs are less efficient in women than in men, and sometimes MDs are less safe for women than for men, although recently there has been a small but significant decrease in the sex and gender gap. As an example, cardiac resynchronization defibrillators seem to produce more beneficial effects in women than in men. It is also important to remember that MDs can impact the health of healthcare providers, and this could occur in a sex- and gender-dependent manner. The complexity of MDs is rising, and to ensure their appropriate use they must be developed with a sex- and gender-sensitive approach. Unfortunately, the majority of physicians, healthcare providers, and developers of MDs still believe that the human population is constituted only of men. Therefore, to overcome the gender gap, a real collaboration between the inventors of MDs, health researchers, and health providers should be established to test MDs in female and male tissues, animals, and women.
Affiliation(s)
- Ilaria Campesi
- Dipartimento di Scienze Biomediche, Università degli Studi di Sassari, 07100 Sassari, Italy
- Laboratorio Nazionale sulla Farmacologia e Medicina di Genere, Istituto Nazionale Biostrutture Biosistemi, 07100 Sassari, Italy
- Flavia Franconi
- Laboratorio Nazionale sulla Farmacologia e Medicina di Genere, Istituto Nazionale Biostrutture Biosistemi, 07100 Sassari, Italy
- Pier Andrea Serra
- Dipartimento di Medicina, Chirurgia e Farmacia, Università degli Studi di Sassari, 07100 Sassari, Italy
18
Lin S, Pandit S, Tritsch T, Levy A, Shoja MM. What Goes In, Must Come Out: Generative Artificial Intelligence Does Not Present Algorithmic Bias Across Race and Gender in Medical Residency Specialties. Cureus 2024; 16:e54448. [PMID: 38510858 PMCID: PMC10951939 DOI: 10.7759/cureus.54448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2023] [Accepted: 02/18/2024] [Indexed: 03/22/2024] Open
Abstract
Objective Artificial Intelligence (AI) has made significant inroads into various domains, including medicine, raising concerns about algorithmic bias. This study investigates the presence of biases in generative AI programs, with a specific focus on gender and racial representations across 19 medical residency specialties. Methodology This comparative study utilized DALL-E2 to generate faces representing 19 distinct residency training specialties, as identified by the Association of American Medical Colleges (AAMC), which were then compared to the AAMC's residency specialty breakdown with respect to race and gender. Results Our findings reveal an alignment between OpenAI's DALL-E2's predictions and the current demographic landscape of medical residents, suggesting an absence of algorithmic bias in this AI model. Conclusion This revelation gives rise to important ethical considerations. While AI excels at pattern recognition, it inherits and mirrors the biases present in its training data. To combat AI bias, addressing real-world disparities is imperative. Initiatives to promote inclusivity and diversity within medicine are commendable and contribute to reshaping medical education. This study underscores the need for ongoing efforts to dismantle barriers and foster inclusivity in historically male-dominated medical fields, particularly for underrepresented populations. Ultimately, our findings underscore the crucial role of real-world data quality in mitigating AI bias. As AI continues to shape healthcare and education, the pursuit of equitable, unbiased AI applications should remain at the forefront of these transformative endeavors.
Affiliation(s)
- Shu Lin
- Department of Medical Education, Nova Southeastern University Dr. Kiran C. Patel College of Allopathic Medicine, Fort Lauderdale, USA
- Saket Pandit
- Department of Medical Education, Nova Southeastern University Dr. Kiran C. Patel College of Allopathic Medicine, Fort Lauderdale, USA
- Tara Tritsch
- Department of Medical Education, Nova Southeastern University Dr. Kiran C. Patel College of Allopathic Medicine, Fort Lauderdale, USA
- Arkene Levy
- Department of Medical Education, Nova Southeastern University Dr. Kiran C. Patel College of Allopathic Medicine, Fort Lauderdale, USA
- Mohammadali M Shoja
- Department of Medical Education, Nova Southeastern University Dr. Kiran C. Patel College of Allopathic Medicine, Fort Lauderdale, USA
19
Draghi B, Wang Z, Myles P, Tucker A. Identifying and handling data bias within primary healthcare data using synthetic data generators. Heliyon 2024; 10:e24164. [PMID: 38288010 PMCID: PMC10823075 DOI: 10.1016/j.heliyon.2024.e24164] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2023] [Revised: 11/28/2023] [Accepted: 01/04/2024] [Indexed: 01/31/2024] Open
Abstract
Advanced synthetic data generators can simulate data samples that closely resemble sensitive personal datasets while significantly reducing the risk of individual identification. The use of these advanced generators holds enormous potential in the medical field, as it allows for the simulation and sharing of sensitive patient data. This enables the development and rigorous validation of novel AI technologies for accurate diagnosis and efficient disease management. Despite the availability of massive ground truth datasets (such as UK-NHS databases that contain millions of patient records), the risk of biases being carried over to data generators still exists. These biases may arise from the under-representation of specific patient cohorts due to cultural sensitivities within certain communities or standardised data collection procedures. Machine learning models can exhibit bias in various forms, including the under-representation of certain groups in the data. This can lead to missing data and inaccurate correlations and distributions, which may also be reflected in synthetic data. Our paper aims to improve synthetic data generators by introducing probabilistic approaches to first detect difficult-to-predict data samples in ground truth data and then boost them when applying the generator. In addition, we explore strategies to generate synthetic data that can reduce bias and, at the same time, improve the performance of predictive models.
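The paper's core idea, detecting difficult-to-predict records and boosting them before training the generator, can be sketched in miniature. The threshold classifier, record layout, and boost factor below are placeholders for illustration, not the authors' implementation.

```python
def boost_hard_samples(records, predict, label_key="label", factor=3):
    """Return a training set in which misclassified records are oversampled."""
    boosted = []
    for rec in records:
        boosted.append(rec)
        if predict(rec) != rec[label_key]:        # difficult to predict
            boosted.extend([rec] * (factor - 1))  # boost its weight in training
    return boosted

# Toy primary-care records: predict a diagnosis from age with a crude threshold.
records = [
    {"age": 70, "label": 1},
    {"age": 65, "label": 1},
    {"age": 30, "label": 0},
    {"age": 35, "label": 1},  # under-represented young case: misclassified below
]
predict = lambda rec: 1 if rec["age"] >= 50 else 0
boosted = boost_hard_samples(records, predict)
print(len(records), len(boosted))  # the hard record now appears factor times
```

A synthetic-data generator fitted on `boosted` rather than `records` would see the under-represented case more often, which is the bias-mitigation mechanism the paper explores with probabilistic models in place of this toy classifier.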
Affiliation(s)
- Barbara Draghi
- Medicines and Healthcare products Regulatory Agency, London, UK
- Brunel University London, London, UK
- Zhenchen Wang
- Medicines and Healthcare products Regulatory Agency, London, UK
- Puja Myles
- Medicines and Healthcare products Regulatory Agency, London, UK
20
Tripathi S, Tabari A, Mansur A, Dabbara H, Bridge CP, Daye D. From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer. Diagnostics (Basel) 2024; 14:174. [PMID: 38248051 PMCID: PMC10814554 DOI: 10.3390/diagnostics14020174] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Revised: 12/28/2023] [Accepted: 12/29/2023] [Indexed: 01/23/2024] Open
Abstract
Pancreatic cancer is a highly aggressive and difficult-to-detect cancer with a poor prognosis. Late diagnosis is common due to a lack of early symptoms, specific markers, and the challenging location of the pancreas. Imaging technologies have improved diagnosis, but there is still room for improvement in standardizing guidelines. Biopsies and histopathological analysis are challenging due to tumor heterogeneity. Artificial Intelligence (AI) revolutionizes healthcare by improving diagnosis, treatment, and patient care. AI algorithms can analyze medical images with precision, aiding in early disease detection. AI also plays a role in personalized medicine by analyzing patient data to tailor treatment plans. It streamlines administrative tasks, such as medical coding and documentation, and provides patient assistance through AI chatbots. However, challenges include data privacy, security, and ethical considerations. This review article focuses on the potential of AI in transforming pancreatic cancer care, offering improved diagnostics, personalized treatments, and operational efficiency, leading to better patient outcomes.
Affiliation(s)
- Satvik Tripathi
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
- Azadeh Tabari
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- Arian Mansur
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- Harika Dabbara
- Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
- Christopher P. Bridge
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
- Dania Daye
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
21
Weidener L, Fischer M. Role of Ethics in Developing AI-Based Applications in Medicine: Insights From Expert Interviews and Discussion of Implications. JMIR AI 2024; 3:e51204. [PMID: 38875585 PMCID: PMC11041491 DOI: 10.2196/51204] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 11/20/2023] [Accepted: 12/09/2023] [Indexed: 06/16/2024]
Abstract
BACKGROUND The integration of artificial intelligence (AI)-based applications in the medical field has increased significantly, offering potential improvements in patient care and diagnostics. However, alongside these advancements, there is growing concern about ethical considerations, such as bias, informed consent, and trust, in the development of these technologies. OBJECTIVE This study aims to assess the role of ethics in the development of AI-based applications in medicine. Furthermore, it focuses on the potential consequences of neglecting ethical considerations in AI development, particularly their impact on patients and physicians. METHODS Qualitative content analysis was used to analyze the responses from expert interviews. Experts were selected based on their involvement in the research or practical development of AI-based applications in medicine for at least 5 years, leading to the inclusion of 7 experts in the study. RESULTS The analysis revealed 3 main categories and 7 subcategories reflecting a wide range of views on the role of ethics in AI development. This variance underscores the subjectivity and complexity of integrating ethics into the development of AI in medicine. Although some experts view ethics as fundamental, others prioritize performance and efficiency, with some perceiving ethics as a potential obstacle to technological progress. This dichotomy of perspectives reflects the inherently multifaceted nature of the issue. CONCLUSIONS Despite methodological limitations that affect the generalizability of the results, this study underscores the critical importance of consistent and integrated ethical considerations in AI development for medical applications.
It advocates further research into effective strategies for ethical AI development, emphasizing the need for transparent and responsible practices, consideration of diverse data sources, physician training, and the establishment of comprehensive ethical and legal frameworks.
Affiliation(s)
- Lukas Weidener
- Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
- Michael Fischer
- Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
22
Burnazovic E, Yee A, Levy J, Gore G, Abbasgholizadeh Rahimi S. Application of Artificial intelligence in COVID-19-related geriatric care: A scoping review. Arch Gerontol Geriatr 2024; 116:105129. [PMID: 37542917 DOI: 10.1016/j.archger.2023.105129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 07/11/2023] [Accepted: 07/13/2023] [Indexed: 08/07/2023]
Abstract
BACKGROUND Older adults have been disproportionately affected by the COVID-19 pandemic. This scoping review aimed to summarize the current evidence on the use of artificial intelligence (AI) in the screening/monitoring, diagnosis, and/or treatment of COVID-19 among older adults. METHOD The review followed the Joanna Briggs Institute and Arksey and O'Malley frameworks. An information specialist performed a comprehensive search from the date of inception until May 2021 in six bibliographic databases. The selected studies considered all populations and all AI interventions that had been used in COVID-19-related geriatric care. We focused on patient, healthcare provider, and healthcare system-related outcomes. The studies were restricted to peer-reviewed English publications. Two authors independently screened the titles and abstracts of the identified records, read the selected full texts, and extracted data from the included studies using a validated data extraction form. Disagreements were resolved by consensus, and if this was not possible, the opinion of a third reviewer was sought. RESULTS The search of the six databases yielded 3,228 articles, of which 10 were included. The majority of articles used a single AI model to assess the association between patients' comorbidities and COVID-19 outcomes. Articles were mainly conducted in high-income countries, with limited representation of females among study participants and insufficient reporting of participants' race and ethnicity. DISCUSSION This review highlighted how the COVID-19 pandemic has accelerated the application of AI to protect older populations, with most interventions in the pilot testing stage. Further work is required to measure the effectiveness of these technologies at a larger scale, use more representative datasets for training AI models, and expand AI applications to low-income countries.
Affiliation(s)
- Emina Burnazovic
- Integrated Biomedical Engineering and Health Sciences, Department of Computing and Software, Faculty of Engineering, McMaster University, Hamilton, ON, Canada
- Amanda Yee
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Joshua Levy
- Department of Pharmacology and Therapeutics, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada
- Genevieve Gore
- Schulich Library of Physical Sciences, Life Sciences and Engineering, McGill University, Montreal, QC, Canada
- Samira Abbasgholizadeh Rahimi
- Department of Family Medicine, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, QC, Canada; Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada; Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, QC, Canada
23
Santana GO, Couto RDM, Loureiro RM, Furriel BCRS, Rother ET, de Paiva JPQ, Correia LR. Economic Evaluations and Equity in the Use of Artificial Intelligence in Imaging Exams for Medical Diagnosis in People With Skin, Neurological, and Pulmonary Diseases: Protocol for a Systematic Review. JMIR Res Protoc 2023; 12:e48544. [PMID: 38153775 PMCID: PMC10784972 DOI: 10.2196/48544] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Revised: 09/23/2023] [Accepted: 10/24/2023] [Indexed: 12/29/2023] Open
Abstract
BACKGROUND Traditional health care systems face long-standing challenges, including patient diversity, geographical disparities, and financial constraints. The emergence of artificial intelligence (AI) in health care offers solutions to these challenges. AI, a multidisciplinary field, enhances clinical decision-making. However, imbalanced AI models may exacerbate health disparities. OBJECTIVE This systematic review aims to investigate the economic performance and equity impact of AI in diagnostic imaging for skin, neurological, and pulmonary diseases. The research question is "To what extent does the use of AI in imaging exams for diagnosing skin, neurological, and pulmonary diseases result in improved economic outcomes, and does it promote equity in health care systems?" METHODS The study is a systematic review of economic and equity evaluations following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and CHEERS (Consolidated Health Economic Evaluation Reporting Standards) guidelines. Eligibility criteria include articles reporting on economic evaluations or equity considerations related to AI-based diagnostic imaging for the specified diseases. Data will be collected from PubMed, Embase, Scopus, Web of Science, and reference lists. Data quality and transferability will be assessed according to the CHEC (Consensus on Health Economic Criteria), EPHPP (Effective Public Health Practice Project), and Welte checklists. RESULTS This systematic review began in March 2023. The literature search identified 9,526 publications and, after full-text screening, 9 publications were included in the study. We plan to submit a manuscript to a peer-reviewed journal once it is finalized, with an expected completion date in January 2024. CONCLUSIONS AI in diagnostic imaging offers potential benefits but also raises concerns about equity and economic impact. Bias in algorithms and disparities in access may hinder equitable outcomes.
Evaluating the economic viability of AI applications is essential for resource allocation and affordability. Policy makers and health care stakeholders can benefit from this review's insights to make informed decisions. Limitations, including study variability and publication bias, will be considered in the analysis. This systematic review will provide valuable insights into the economic and equity implications of AI in diagnostic imaging. It aims to inform evidence-based decision-making and contribute to more efficient and equitable health care systems. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/48544.
Affiliation(s)
- Rodrigo de Macedo Couto
- Imaging Department, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Department of Preventive Medicine, Universidade Federal de São Paulo, São Paulo, Brazil
- Brunna Carolinne Rocha Silva Furriel
- Imaging Department, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Computer Engineering School, Universidade Federal de Goiás, Goiânia, Brazil
- Studies and Research in Science and Technology Group (GCITE), Instituto Federal de Goiás, Goiânia, Brazil
- Edna Terezinha Rother
- Instituto Israelita de Ensino e Pesquisa, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Lucas Reis Correia
- PROADI-SUS, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Department of Preventive Medicine, Universidade de São Paulo, São Paulo, Brazil
24
Graf J, Simoes E, Kranz A, Weinert K, Abele H. The Importance of Gender-Sensitive Health Care in the Context of Pain, Emergency and Vaccination: A Narrative Review. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2023; 21:13. [PMID: 38276801 PMCID: PMC10815689 DOI: 10.3390/ijerph21010013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Revised: 12/11/2023] [Accepted: 12/20/2023] [Indexed: 01/27/2024]
Abstract
So far, health care has been insufficiently organized in a gender-sensitive way, making the promotion of care that meets the needs of women and men equally a relevant public health problem. The aim of this narrative review was to outline the need for more gender-sensitive medical care in the context of pain, emergency care and vaccinations. For this narrative review, a selective search was performed in PubMed, and the databases of the World Health Organization (WHO), the European Institute for Gender Equality and the German Federal Ministry of Health were searched. Study data indicate that there are differences between men and women with regard to pain tolerance; socially constructed role expectations about pain, and how pain is communicated, are also relevant. Studies indicate that women receive adequate pain medication less often than men with a comparable pain score. Furthermore, study results indicate that female gender is associated with an increased risk of inadequate emergency care. In terms of vaccine provision, women are less likely than men to utilize or gain access to vaccination services, and there are gender-specific differences in vaccine efficacy and safety. Sensitization in teaching, research and care is needed to mitigate gender-specific health inequalities.
Affiliation(s)
- Joachim Graf
- Institute for Health Sciences, University Hospital Tuebingen, Midwifery Science, Hoppe-Seyler-Str. 9, 72076 Tuebingen, Germany
- Elisabeth Simoes
- Department for Women’s Health, University Hospital Tuebingen, Calwerstr. 7, 72076 Tuebingen, Germany
- Angela Kranz
- Institute for Health Sciences, University Hospital Tuebingen, Midwifery Science, Hoppe-Seyler-Str. 9, 72076 Tuebingen, Germany
- Konstanze Weinert
- Institute for Health Sciences, University Hospital Tuebingen, Midwifery Science, Hoppe-Seyler-Str. 9, 72076 Tuebingen, Germany
- Harald Abele
- Institute for Health Sciences, University Hospital Tuebingen, Midwifery Science, Hoppe-Seyler-Str. 9, 72076 Tuebingen, Germany
- Department for Women’s Health, University Hospital Tuebingen, Calwerstr. 7, 72076 Tuebingen, Germany
25
Cascalheira CJ, Pugh TH, Hong C, Birkett M, Macapagal K, Holloway IW. Developing technology-based interventions for infectious diseases: ethical considerations for young sexual and gender minority people. FRONTIERS IN REPRODUCTIVE HEALTH 2023; 5:1303218. [PMID: 38169805 PMCID: PMC10759218 DOI: 10.3389/frph.2023.1303218] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Accepted: 10/20/2023] [Indexed: 01/05/2024] Open
Abstract
Compared to their heterosexual and cisgender peers, young sexual and gender minority (YSGM) people are more likely to contract sexually transmitted infections (STIs; e.g., HIV) and to face adverse consequences of emerging infections, such as COVID-19 and mpox. To reduce these sexual health disparities, technology-based interventions (TBIs) for STIs and emerging infections among YSGM adolescents and young adults have been developed. In this Perspective, we discuss ethical issues, ethical principles, and recommendations in the development and implementation of TBIs to address STIs and emerging infections among YSGM. Our discussion covers: (1) confidentiality, privacy, and data security (e.g., if TBI use is revealed, YSGM are at increased risk of discrimination and family rejection); (2) empowerment and autonomy (e.g., designing TBIs that can still function if YSGM users opt-out of multiple features and data collection requests); (3) evidence-based and quality controlled (e.g., going above and beyond minimum FDA effectiveness standards to protect vulnerable YSGM people); (4) cultural sensitivity and tailoring (e.g., using YSGM-specific models of prevention and intervention); (5) balancing inclusivity vs. group specificity (e.g., honoring YSGM heterogeneity); (6) duty to care (e.g., providing avenues to contact affirming healthcare professionals); (7) equitable access (e.g., prioritizing YSGM people living in low-resource, high-stigma areas); and (8) digital temperance (e.g., being careful with gamification because YSGM experience substantially more screen time than their peers). We conclude that a community-engaged, YSGM-centered approach to TBI development and implementation is paramount to ethically preventing and treating STIs and emerging infections with innovative technology.
Affiliation(s)
- Cory J. Cascalheira
- Department of Counseling and Educational Psychology, New Mexico State University, Las Cruces, NM, United States
- Tyler H. Pugh
- Department of Social Policy and Intervention, University of Oxford, Oxford, United Kingdom
- Chenglin Hong
- Luskin School of Public Affairs, University of California, Los Angeles, Los Angeles, CA, United States
- Michelle Birkett
- Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
- Institute for Sexual and Gender Minority Health and Wellbeing, Northwestern University, Chicago, IL, United States
- Kathryn Macapagal
- Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
- Institute for Sexual and Gender Minority Health and Wellbeing, Northwestern University, Chicago, IL, United States
- Ian W. Holloway
- Luskin School of Public Affairs, University of California, Los Angeles, Los Angeles, CA, United States
26
Kaplan A, Boivin M, Bouchard J, Kim J, Hayes S, Licskai C. The emerging role of digital health in the management of asthma. Ther Adv Chronic Dis 2023; 14:20406223231209329. [PMID: 38028951 PMCID: PMC10657529 DOI: 10.1177/20406223231209329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 09/25/2023] [Indexed: 12/01/2023] Open
Abstract
The most common reasons for lack of asthma control include misconceptions about disease control, low controller treatment adherence, poor inhaler technique, and the resulting underuse of controllers and overuse of short-acting beta2 agonists (SABAs). Narrowing these care gaps may be achieved through well-designed patient education that considers the patient's motivation, beliefs, and capabilities regarding their asthma and its management and empowers the patient to become an active participant in treatment decisions. Digital health technologies (DHTs) and digital therapeutic (DT) devices provide new opportunities to monitor treatment behaviors, improve communication between healthcare providers and patients, and generate data that inform educational interactions. DHTs and DT devices have proven effective in enhancing patient self-management in other chronic conditions, particularly diabetes. Their accelerated integration into the management of asthma patients is facilitated by the use of digital inhalers that employ sensor technology ("smart" inhalers). These devices efficiently provide real-time feedback on controller adherence, SABA use, and inhaler technique, feedback that has strong potential to optimize asthma control.
Affiliation(s)
- Alan Kaplan
- Department of Family and Community Medicine, University of Toronto, 14872 Yonge Street, Aurora, Toronto, ON L4G 1N2, Canada
- Family Physician Airways Group of Canada, Markham, ON, Canada
- James Kim
- Faculty of Medicine, University of Calgary, Calgary, AB, Canada
- Christopher Licskai
- Division of Respirology, Department of Medicine, Western University, London, ON, Canada
27
Bures D, Hosters B, Reibel T, Jovy-Klein F, Schramm J, Brendt-Müller J, Sander J, Diehl A. [The transformative effect of artificial intelligence in hospitals: the focus is on the individual]. INNERE MEDIZIN (HEIDELBERG, GERMANY) 2023; 64:1025-1032. [PMID: 37853060 PMCID: PMC10602990 DOI: 10.1007/s00108-023-01597-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 09/14/2023] [Indexed: 10/20/2023]
Abstract
Rapid advances in digital technology and the promising potential of artificial intelligence (AI) are changing our everyday lives and have already impacted hospital procedures. AI applications in particular enable a wide range of possible uses and hold considerable potential for improving medical and nursing care. In radiological diagnostics, for example, there are already many well-researched applications for AI-based image evaluation. This article presents further AI developments that can help relieve medical staff in order to create more time for direct patient care. In addition, essential aspects regarding the development and transfer of AI-based applications are highlighted. It is crucial that the integration of AI into medical practice is carried out with the utmost care and prudence. Data protection and ethical aspects need to be considered and respected at all times. Ensuring the reliability and integrity of AI systems is essential to earn the trust of both patients and healthcare professionals. A comprehensive inspection for possible bias within the underlying data and algorithms is indispensable. In this field of tension between promising possibilities and ethical challenges, the digital transformation in medicine and care can be designed to increase patient safety and relieve staff.
Affiliation(s)
- Dominik Bures
- Stabsstelle Digitale Transformation, Universitätsmedizin Essen, Hufelandstr. 55, 45147 Essen, Germany
- Bernadette Hosters
- Stabsstelle Entwicklung und Forschung Pflege, Universitätsmedizin Essen, Essen, Germany
- Thomas Reibel
- Institut für Technologie- und Innovationsmanagement, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
- Florian Jovy-Klein
- Institut für Technologie- und Innovationsmanagement, Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany
- Johanna Schramm
- Stabsstelle Entwicklung und Forschung Pflege, Universitätsmedizin Essen, Essen, Germany
- Jennifer Brendt-Müller
- Stabsstelle Entwicklung und Forschung Pflege, Universitätsmedizin Essen, Essen, Germany
- Jil Sander
- Stabsstelle Digitale Transformation, Universitätsmedizin Essen, Hufelandstr. 55, 45147 Essen, Germany
- Anke Diehl
- Stabsstelle Digitale Transformation, Universitätsmedizin Essen, Hufelandstr. 55, 45147 Essen, Germany
28
Weidener L, Fischer M. Teaching AI Ethics in Medical Education: A Scoping Review of Current Literature and Practices. PERSPECTIVES ON MEDICAL EDUCATION 2023; 12:399-410. [PMID: 37868075 PMCID: PMC10588522 DOI: 10.5334/pme.954] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Accepted: 10/03/2023] [Indexed: 10/24/2023]
Abstract
Introduction The increasing use of Artificial Intelligence (AI) in medicine has raised ethical concerns, such as patient autonomy, bias, and transparency. Recent studies suggest a need for teaching AI ethics as part of medical curricula. This scoping review aimed to represent and synthesize the literature on teaching AI ethics as part of medical education. Methods The PRISMA-ScR guidelines and JBI methodology guided a literature search in four databases (PubMed, Embase, Scopus, and Web of Science) covering the years 2000-2022. To account for the release of AI-based chat applications, such as ChatGPT, the literature search was updated to include publications until the end of June 2023. Results 1384 publications were originally identified and, after screening titles and abstracts, the full text of 87 publications was assessed. Following this assessment, 10 publications were included for further analysis. The updated literature search identified two additional relevant publications from 2023, which were included in the analysis. All 12 publications recommended teaching AI ethics in medical curricula due to the potential implications of AI in medicine. Anticipated ethical challenges such as bias were identified as the recommended basis for teaching content, in addition to basic principles of medical ethics. Case-based teaching using real-world examples in interactive seminars and small groups was recommended as a teaching modality. Conclusion This scoping review reveals a scarcity of literature on teaching AI ethics in medical education, with most of the available literature being recent and theoretical. These findings emphasize the importance of more empirical studies and foundational definitions of AI ethics to guide the development of teaching content and modalities. Recognizing AI's significant impact on medicine, additional research on the teaching of AI ethics in medical education is needed to best prepare medical students for future ethical challenges.
Affiliation(s)
- Lukas Weidener
- UMIT TIROL – Private University for Health Sciences and Health Technology, Eduard-Wallnöfer-Zentrum 1, 6060 Hall in Tirol, Austria
- Michael Fischer
- Research Unit for Quality and Ethics in Health Care, UMIT TIROL – Private University for Health Sciences and Health Technology, Austria
29
Sarkar S, Gaur M, Chen LK, Garg M, Srivastava B. A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement. Front Artif Intell 2023; 6:1229805. [PMID: 37899961 PMCID: PMC10601652 DOI: 10.3389/frai.2023.1229805] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2023] [Accepted: 08/29/2023] [Indexed: 10/31/2023] Open
Abstract
Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. These systems, developed by clinical psychologists, psychiatrists, and AI researchers, are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and engage in meaningful conversations to support individuals with mental health conditions. However, certain gaps prevent VMHAs from fully delivering on their promise during active communications. One of the gaps is their inability to explain their decisions to patients and MHPs, making conversations less trustworthy. Additionally, VMHAs can be vulnerable to providing unsafe responses to patient queries, further undermining their reliability. In this review, we assess the current state of VMHAs on the grounds of user-level explainability and safety, a set of desired properties for the broader adoption of VMHAs. This includes an examination of ChatGPT, a conversational agent built on the AI-driven models GPT-3.5 and GPT-4 that has been proposed for use in providing mental health services. By harnessing the collaborative and impactful contributions of AI, natural language processing, and the MHP community, the review identifies opportunities for technological progress in VMHAs to ensure their capabilities include explainable and safe behaviors. It also emphasizes the importance of measures to guarantee that these advancements align with the promise of fostering trustworthy conversations.
Affiliation(s)
- Surjodeep Sarkar
- Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, United States
- Manas Gaur
- Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, United States
- Lujie Karen Chen
- Department of Information Systems, University of Maryland, Baltimore County, Baltimore, MD, United States
- Muskan Garg
- Department of AI & Informatics, Mayo Clinic, Rochester, MN, United States
- Biplav Srivastava
- AI Institute, University of South Carolina, Columbia, SC, United States
30
Hernandez-Boussard T, Siddique SM, Bierman AS, Hightower M, Burstin H. Promoting Equity In Clinical Decision Making: Dismantling Race-Based Medicine. Health Aff (Millwood) 2023; 42:1369-1373. [PMID: 37782875 PMCID: PMC10849087 DOI: 10.1377/hlthaff.2023.00545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/04/2023]
Abstract
As the use of artificial intelligence has spread rapidly throughout the US health care system, concerns have been raised about racial and ethnic biases built into the algorithms that often guide clinical decision making. Race-based medicine, which relies on algorithms that use race as a proxy for biological differences, has led to treatment patterns that are inappropriate, unjust, and harmful to minoritized racial and ethnic groups. These patterns have contributed to persistent disparities in health and health care. To reduce these disparities, we recommend a race-aware approach to clinical decision support that considers social and environmental factors such as structural racism and social determinants of health. Recent policy changes in medical specialty societies and innovations in algorithm development represent progress on the path to dismantling race-based medicine. Success will require continued commitment and sustained efforts among stakeholders in the health care, research, and technology sectors. Increasing the diversity of clinical trial populations, broadening the focus of precision medicine, improving education about the complex factors shaping health outcomes, and developing new guidelines and policies to enable culturally responsive care are important next steps.
Affiliation(s)
- Arlene S. Bierman
- Agency for Healthcare Research and Quality, Rockville, Maryland
- Maia Hightower
- University of Chicago, Chicago, Illinois
- Helen Burstin
- Council of Medical Specialty Societies, Washington, D.C.
31
van Breugel M, Fehrmann RSN, Bügel M, Rezwan FI, Holloway JW, Nawijn MC, Fontanella S, Custovic A, Koppelman GH. Current state and prospects of artificial intelligence in allergy. Allergy 2023; 78:2623-2643. [PMID: 37584170 DOI: 10.1111/all.15849] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Revised: 07/08/2023] [Accepted: 07/31/2023] [Indexed: 08/17/2023]
Abstract
The field of medicine is witnessing an exponential growth of interest in artificial intelligence (AI), which enables new research questions and the analysis of larger and new types of data. Nevertheless, applications that go beyond proof of concepts and deliver clinical value remain rare, especially in the field of allergy. This narrative review provides a fundamental understanding of the core concepts of AI and critically discusses its limitations and open challenges, such as data availability and bias, along with potential directions to surmount them. We provide a conceptual framework to structure AI applications within this field and discuss forefront case examples. Most of these applications of AI and machine learning in allergy concern supervised learning and unsupervised clustering, with a strong emphasis on diagnosis and subtyping. A perspective is shared on guidelines for good AI practice to guide readers in applying it effectively and safely, along with prospects of field advancement and initiatives to increase clinical impact. We anticipate that AI can further deepen our knowledge of disease mechanisms and contribute to precision medicine in allergy.
Affiliation(s)
- Merlijn van Breugel
- Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- MIcompany, Amsterdam, the Netherlands
- Rudolf S N Fehrmann
- Department of Medical Oncology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Faisal I Rezwan
- Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
- John W Holloway
- Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK
- National Institute for Health and Care Research Southampton Biomedical Research Centre, University Hospitals Southampton NHS Foundation Trust, Southampton, UK
- Martijn C Nawijn
- Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Department of Pathology and Medical Biology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Sara Fontanella
- National Heart and Lung Institute, Imperial College London, London, UK
- National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
- Adnan Custovic
- National Heart and Lung Institute, Imperial College London, London, UK
- National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
- Gerard H Koppelman
- Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
32
Sahiner B, Chen W, Samala RK, Petrick N. Data drift in medical machine learning: implications and potential remedies. Br J Radiol 2023; 96:20220878. [PMID: 36971405 PMCID: PMC10546450 DOI: 10.1259/bjr.20220878] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Revised: 02/16/2023] [Accepted: 02/20/2023] [Indexed: 03/29/2023] Open
Abstract
Data drift refers to differences between the data used in training a machine learning (ML) model and that applied to the model in real-world operation. Medical ML systems can be exposed to various forms of data drift, including differences between the data sampled for training and used in clinical operation, differences between medical practices or context of use between training and clinical use, and time-related changes in patient populations, disease patterns, and data acquisition, to name a few. In this article, we first review the terminology used in ML literature related to data drift, define distinct types of drift, and discuss in detail potential causes within the context of medical applications with an emphasis on medical imaging. We then review the recent literature regarding the effects of data drift on medical ML systems, which overwhelmingly show that data drift can be a major cause for performance deterioration. We then discuss methods for monitoring data drift and mitigating its effects with an emphasis on pre- and post-deployment techniques. Some of the potential methods for drift detection and issues around model retraining when drift is detected are included. Based on our review, we find that data drift is a major concern in medical ML deployment and that more research is needed so that ML models can identify drift early, incorporate effective mitigation strategies and resist performance decay.
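The abstract describes drift monitoring only at a high level. As a concrete illustration (ours, not taken from the cited article), one widely used pre-/post-deployment check is the Population Stability Index (PSI), which compares a feature's distribution in the training data against the distribution seen in clinical operation; by common convention, PSI below 0.1 suggests stability and PSI above 0.25 suggests major drift. A minimal pure-Python sketch on synthetic data:

```python
import math
import random

def psi(reference, production, n_bins=10):
    """Population Stability Index between two 1-D samples.

    Bin edges come from the reference sample's quantiles. Conventionally,
    PSI < 0.1 is read as stable and PSI > 0.25 as major drift.
    """
    ref = sorted(reference)
    edges = [ref[int(len(ref) * i / n_bins)] for i in range(1, n_bins)]

    def proportions(sample):
        counts = [0] * n_bins
        for x in sample:
            counts[sum(1 for e in edges if x > e)] += 1  # bin index of x
        # floor at 1e-6 so empty bins do not produce log(0)
        return [max(c / len(sample), 1e-6) for c in counts]

    p_ref, p_prod = proportions(reference), proportions(production)
    return sum((a - e) * math.log(a / e) for a, e in zip(p_prod, p_ref))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]    # training sample
same = [random.gauss(0.0, 1.0) for _ in range(5000)]     # deployment, no drift
shifted = [random.gauss(0.8, 1.0) for _ in range(5000)]  # deployment, mean shift

print(f"PSI without drift: {psi(train, same):.3f}")    # typically well below 0.1
print(f"PSI with drift:    {psi(train, shifted):.3f}") # typically well above 0.25
```

In practice such a statistic would be computed per input feature on a rolling window of production data, with an alert (and possible retraining review) triggered when the drift threshold is crossed.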
Affiliation(s)
- Berkman Sahiner
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993-0002
- Weijie Chen
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993-0002
- Ravi K. Samala
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993-0002
- Nicholas Petrick
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993-0002
33
Göring C. [Gender medicine: how trustworthy is artificial intelligence?]. MMW Fortschr Med 2023; 165:20-23. [PMID: 37857955 DOI: 10.1007/s15006-023-2898-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2023]
Affiliation(s)
- Carola Göring
- Freelance medical journalist, Merowingerstr. 12, 82363 Weilheim, Germany
34
Azizi Z, Adedinsewo D, Rodriguez F, Lewey J, Merchant RM, Brewer LC. Leveraging Digital Health to Improve the Cardiovascular Health of Women. CURRENT CARDIOVASCULAR RISK REPORTS 2023; 17:205-214. [PMID: 37868625 PMCID: PMC10587029 DOI: 10.1007/s12170-023-00728-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/29/2023] [Indexed: 10/24/2023]
Abstract
Purpose of Review In this review, we present a comprehensive discussion on the population-level implications of digital health interventions (DHIs) to improve cardiovascular health (CVH) through sex- and gender-specific prevention strategies among women. Recent Findings Over the past 30 years, there have been significant advancements in the diagnosis and treatment of cardiovascular diseases, a leading cause of morbidity and mortality among men and women worldwide. However, women are often underdiagnosed, undertreated, and underrepresented in cardiovascular clinical trials, which all contribute to disparities within this population. One approach to address this is through DHIs, particularly among racial and ethnic minoritized groups. Implementation of telemedicine has shown promise in increasing adherence to healthcare visits, improving blood pressure monitoring, weight control, physical activity, and the adoption of healthy behaviors. Furthermore, the use of mobile health applications facilitated by smart devices, wearables, and other eHealth (defined as electronically delivered health services) modalities has also promoted CVH among women in general, as well as during pregnancy and the postpartum period. Overall, utilizing a digital health approach for healthcare delivery, decentralized clinical trials, and incorporation into daily lifestyle activities has the potential to improve CVH among women by mitigating geographical, structural, and financial barriers to care. Summary Leveraging digital technologies and strategies introduces novel methods to address sex- and gender-specific health and healthcare disparities and improve the quality of care provided to women. However, it is imperative to be mindful of the digital divide in specific populations, which may hinder accessibility to these novel technologies and inadvertently widen preexisting inequities.
Affiliation(s)
- Zahra Azizi
- Center for Digital Health, Stanford University, Stanford, CA, USA
- Department of Cardiovascular Medicine and the Cardiovascular Institute, Stanford University, Stanford, CA, USA
- Fatima Rodriguez
- Department of Cardiovascular Medicine and the Cardiovascular Institute, Stanford University, Stanford, CA, USA
- Jennifer Lewey
- Department of Medicine, Division of Cardiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Raina M. Merchant
- Center for Digital Health, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- LaPrincess C. Brewer
- Department of Cardiovascular Medicine, Mayo Clinic College of Medicine, Rochester, MN, USA
- Center for Health Equity and Community Engagement Research, Mayo Clinic, Rochester, MN, USA
35
Liu M, Ning Y, Teixayavong S, Mertens M, Xu J, Ting DSW, Cheng LTE, Ong JCL, Teo ZL, Tan TF, RaviChandran N, Wang F, Celi LA, Ong MEH, Liu N. A translational perspective towards clinical AI fairness. NPJ Digit Med 2023; 6:172. [PMID: 37709945 PMCID: PMC10502051 DOI: 10.1038/s41746-023-00918-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Accepted: 09/04/2023] [Indexed: 09/16/2023] Open
Abstract
Artificial intelligence (AI) has demonstrated the ability to extract insights from data, but the fairness of such data-driven insights remains a concern in high-stakes fields. Despite extensive developments, issues of AI fairness in clinical contexts have not been adequately addressed. A fair model is normally expected to perform equally across subgroups defined by sensitive variables (e.g., age, gender/sex, race/ethnicity, socio-economic status, etc.). Various fairness measurements have been developed to detect differences between subgroups as evidence of bias, and bias mitigation methods are designed to reduce the differences detected. This perspective of fairness, however, is misaligned with some key considerations in clinical contexts. The set of sensitive variables used in healthcare applications must be carefully examined for relevance and justified by clear clinical motivations. In addition, clinical AI fairness should closely investigate the ethical implications of fairness measurements (e.g., potential conflicts between group- and individual-level fairness) to select suitable and objective metrics. Generally defining AI fairness as "equality" is not necessarily reasonable in clinical settings, as differences may have clinical justifications and do not indicate biases. Instead, "equity" would be an appropriate objective of clinical AI fairness. Moreover, clinical feedback is essential to developing fair and well-performing AI models, and efforts should be made to actively involve clinicians in the process. The adaptation of AI fairness towards healthcare is not self-evident due to misalignments between technical developments and clinical considerations. Multidisciplinary collaboration between AI researchers, clinicians, and ethicists is necessary to bridge the gap and translate AI fairness into real-life benefits.
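The subgroup comparison this perspective describes can be made concrete with a small worked example (ours, not the authors'): computing a model's true-positive rate separately for each sensitive subgroup and reporting the gap, the kind of group-level fairness measurement the abstract alludes to. The labels and group names below are hypothetical toy data:

```python
from collections import defaultdict

def tpr_by_group(y_true, y_pred, groups):
    """True-positive rate per sensitive subgroup (the quantity compared
    in an 'equal opportunity' style fairness check)."""
    tp, pos = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:                # count only actual positives
            pos[g] += 1
            if p == 1:            # correctly flagged positive
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos}

# Hypothetical binary labels/predictions for two subgroups, A and B
y_true = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0]
groups = ["A"] * 6 + ["B"] * 6

rates = tpr_by_group(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                  # {'A': 0.75, 'B': 0.25}
print(f"TPR gap: {gap:.2f}")  # TPR gap: 0.50
```

As the abstract cautions, a nonzero gap is evidence to be examined, not automatically a bias to be erased: whether equality on this metric is the clinically appropriate target depends on whether the subgroup differences have clinical justification.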
Affiliation(s)
- Mingxuan Liu: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore, Singapore
- Yilin Ning: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore, Singapore
- Mayli Mertens: Centre for Ethics, Department of Philosophy, University of Antwerp, Antwerp, Belgium; Antwerp Center on Responsible AI, University of Antwerp, Antwerp, Belgium
- Jie Xu: Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, USA
- Daniel Shu Wei Ting: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; SingHealth AI Office, Singapore Health Services, Singapore, Singapore
- Lionel Tim-Ee Cheng: Department of Diagnostic Radiology, Singapore General Hospital, Singapore, Singapore
- Zhen Ling Teo: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Ting Fang Tan: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Fei Wang: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Leo Anthony Celi: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA; Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Marcus Eng Hock Ong: Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore, Singapore; Department of Emergency Medicine, Singapore General Hospital, Singapore, Singapore
- Nan Liu: Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore, Singapore; SingHealth AI Office, Singapore Health Services, Singapore, Singapore; Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore, Singapore; Institute of Data Science, National University of Singapore, Singapore, Singapore
36
Buslón N, Cortés A, Catuara-Solarz S, Cirillo D, Rementeria MJ. Raising awareness of sex and gender bias in artificial intelligence and health. Front Glob Womens Health 2023; 4:970312. [PMID: 37746321 PMCID: PMC10512182 DOI: 10.3389/fgwh.2023.970312]
Abstract
Historically, biomedical research has been led by and focused on men. The recent introduction of Artificial Intelligence (AI) in this area has further proven this practice to be discriminatory for other sexes and genders, more noticeably for women. To move towards a fair AI development, it is essential to include sex and gender diversity both in research practices and in the workplace. In this context, the Bioinfo4women (B4W) program of the Barcelona Supercomputing Center (i) promotes the participation of women scientists by improving their visibility, (ii) fosters international collaborations between institutions and programs and (iii) advances research on sex and gender bias in AI and health. In this article, we discuss methodology and results of a series of conferences, titled “Sex and Gender Bias in Artificial Intelligence and Health”, organized by B4W and La Caixa Foundation from March to June 2021 in Barcelona, Spain. The series consisted of nine hybrid events, composed of keynote sessions and seminars open to the general audience, and two working groups with invited experts from different professional backgrounds (academic fields such as biology, engineering, and sociology, as well as NGOs, journalists, lawyers, policymakers, industry). Based on this awareness-raising action, we distilled key recommendations to facilitate the inclusion of the sex and gender perspective into public policies, educational programs, industry, and biomedical research, among other sectors, and help overcome sex and gender biases in AI and health.
Affiliation(s)
- Nataly Buslón: Life Sciences Department, Barcelona Supercomputing Center, Barcelona, Spain
- Atia Cortés: Life Sciences Department, Barcelona Supercomputing Center, Barcelona, Spain
- Davide Cirillo: Life Sciences Department, Barcelona Supercomputing Center, Barcelona, Spain
37
Raparelli V, Romiti GF, Di Teodoro G, Seccia R, Tanzilli G, Viceconte N, Marrapodi R, Flego D, Corica B, Cangemi R, Pilote L, Basili S, Proietti M, Palagi L, Stefanini L. A machine-learning based bio-psycho-social model for the prediction of non-obstructive and obstructive coronary artery disease. Clin Res Cardiol 2023; 112:1263-1277. [PMID: 37004526 PMCID: PMC10449670 DOI: 10.1007/s00392-023-02193-5]
Abstract
BACKGROUND Mechanisms of myocardial ischemia in obstructive and non-obstructive coronary artery disease (CAD), and the interplay between clinical, functional, biological and psycho-social features, are still far from being fully elucidated. OBJECTIVES To develop a machine-learning (ML) model for the supervised prediction of obstructive versus non-obstructive CAD. METHODS From the EVA study, we analysed adults hospitalized for IHD undergoing conventional coronary angiography (CCA). Non-obstructive CAD was defined by a stenosis < 50% in one or more vessels. Baseline clinical and psycho-socio-cultural characteristics were used for computing a Rockwood and Mitnitski frailty index, and a gender score according to GENESIS-PRAXY methodology. Serum concentration of inflammatory cytokines was measured with a multiplex flow cytometry assay. Through an XGBoost classifier combined with an explainable artificial intelligence tool (SHAP), we identified the most influential features in discriminating obstructive versus non-obstructive CAD. RESULTS Among the overall EVA cohort (n = 509), 311 individuals (mean age 67 ± 11 years, 38% females; 67% obstructive CAD) with complete data were analysed. The ML-based model (83% accuracy and 87% precision) showed that while obstructive CAD was associated with a higher frailty index, older age and a cytokine signature characterized by IL-1β, IL-12p70 and IL-33, non-obstructive CAD was associated with a higher gender score (i.e., social characteristics traditionally ascribed to women) and with a cytokine signature characterized by IL-18, IL-8, IL-23. CONCLUSIONS Integrating clinical, biological, and psycho-social features, we have optimized a sex- and gender-unbiased model that discriminates obstructive and non-obstructive CAD. Further mechanistic studies will shed light on the biological plausibility of these associations. CLINICAL TRIAL REGISTRATION NCT02737982.
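The study pairs an XGBoost classifier with SHAP attributions. Neither library is reproduced here; instead, this hedged sketch illustrates the additive-attribution property that SHAP guarantees, using a linear model, for which Shapley values have an exact closed form: φ_i = w_i · (x_i − E[x_i]). The feature names and all numbers are illustrative assumptions, not values from the paper.

```python
# Toy SHAP-style attribution for a *linear* model f(x) = sum(w_i * x_i).
# For linear models the Shapley value of feature i is exactly
# w_i * (x_i - mean(x_i)); attributions sum to f(x) - f(baseline).
# Feature names/values are invented for illustration.

def linear_shap(weights, x, background_mean):
    """Exact Shapley attributions for a linear model."""
    return [w * (xi - mu) for w, xi, mu in zip(weights, x, background_mean)]

weights    = [0.8, -0.5, 0.3]   # toy coefficients for [frailty, gender_score, IL-18]
background = [0.4, 0.5, 1.0]    # toy cohort-mean feature values
patient    = [0.9, 0.2, 1.5]    # one toy patient

phi = linear_shap(weights, patient, background)

# Core SHAP property: attributions sum to f(patient) - f(background)
f = lambda x: sum(w * xi for w, xi in zip(weights, x))
```

In the actual study a tree ensemble replaces the linear model, but the interpretation is the same: each feature receives a signed contribution to the individual prediction, and ranking those contributions identifies the most influential features.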
Affiliation(s)
- Valeria Raparelli: Department of Experimental Medicine, Sapienza University of Rome, Rome, Italy; Department of Translational Medicine, University of Ferrara, Via Luigi Borsari, 46, 44121, Ferrara, Italy; Faculty of Nursing, University of Alberta, Edmonton, Canada; University Center for Studies on Gender Medicine, University of Ferrara, Ferrara, Italy
- Giulio Francesco Romiti: Department of Translational and Precision Medicine, Sapienza University of Rome, Rome, Italy; Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart and Chest Hospital, Liverpool, UK
- Giulia Di Teodoro: Department of Computer Control and Management Engineering Antonio Ruberti, Sapienza University of Rome, Rome, Italy
- Ruggiero Seccia: Department of Computer Control and Management Engineering Antonio Ruberti, Sapienza University of Rome, Rome, Italy
- Gaetano Tanzilli: Department of Clinical, Internal, Anesthesiology and Cardiovascular Sciences, Umberto I Hospital, Sapienza University of Rome, Rome, Italy
- Nicola Viceconte: Department of Clinical, Internal, Anesthesiology and Cardiovascular Sciences, Umberto I Hospital, Sapienza University of Rome, Rome, Italy
- Ramona Marrapodi: Department of Translational and Precision Medicine, Sapienza University of Rome, Rome, Italy
- Davide Flego: Department of Translational and Precision Medicine, Sapienza University of Rome, Rome, Italy
- Bernadette Corica: Department of Translational and Precision Medicine, Sapienza University of Rome, Rome, Italy; Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart and Chest Hospital, Liverpool, UK
- Roberto Cangemi: Department of Translational and Precision Medicine, Sapienza University of Rome, Rome, Italy
- Louise Pilote: Centre for Outcomes Research and Evaluation, McGill University Health Centre Research Institute, Montreal, QC, Canada; Divisions of Clinical Epidemiology and General Internal Medicine, McGill University Health Centre Research Institute, Montreal, QC, Canada
- Stefania Basili: Department of Translational and Precision Medicine, Sapienza University of Rome, Rome, Italy
- Marco Proietti: Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart and Chest Hospital, Liverpool, UK; Division of Subacute Care, IRCCS Istituti Clinici Scientifici Maugeri, Milan, Italy; Department of Clinical Sciences and Community Health, University of Milan, Milan, Italy
- Laura Palagi: Department of Computer Control and Management Engineering Antonio Ruberti, Sapienza University of Rome, Rome, Italy
- Lucia Stefanini: Department of Translational and Precision Medicine, Sapienza University of Rome, Rome, Italy
38
Foltz PW, Chandler C, Diaz-Asper C, Cohen AS, Rodriguez Z, Holmlund TB, Elvevåg B. Reflections on the nature of measurement in language-based automated assessments of patients' mental state and cognitive function. Schizophr Res 2023; 259:127-139. [PMID: 36153250 DOI: 10.1016/j.schres.2022.07.011]
Abstract
Modern advances in computational language processing methods have enabled new approaches to the measurement of mental processes. However, the field has primarily focused on model accuracy in predicting performance on a task or a diagnostic category. Instead, the field should be more focused on determining which computational analyses align best with the targeted neurocognitive/psychological functions that we want to assess. In this paper we reflect on two decades of experience with the application of language-based assessment to patients' mental state and cognitive function by addressing the questions of what we are measuring, how it should be measured and why we are measuring the phenomena. We address the questions by advocating for a principled framework for aligning computational models to the constructs being assessed and the tasks being used, as well as defining how those constructs relate to patient clinical states. We further examine the assumptions that go into the computational models and the effects that model design decisions may have on the accuracy, bias and generalizability of models for assessing clinical states. Finally, we describe how this principled approach can further the goal of transitioning language-based computational assessments into clinical practice while gaining the trust of critical stakeholders.
Affiliation(s)
- Peter W Foltz: Institute of Cognitive Science, University of Colorado Boulder, United States of America
- Chelsea Chandler: Institute of Cognitive Science, University of Colorado Boulder, United States of America; Department of Computer Science, University of Colorado Boulder, United States of America
- Alex S Cohen: Department of Psychology, Louisiana State University, United States of America; Center for Computation and Technology, Louisiana State University, United States of America
- Zachary Rodriguez: Department of Psychology, Louisiana State University, United States of America; Center for Computation and Technology, Louisiana State University, United States of America
- Terje B Holmlund: Department of Clinical Medicine, University of Tromsø - the Arctic University of Norway, Tromsø, Norway
- Brita Elvevåg: Department of Clinical Medicine, University of Tromsø - the Arctic University of Norway, Tromsø, Norway; Norwegian Centre for eHealth Research, University Hospital of North Norway, Tromsø, Norway
39
Tripathi S, Gabriel K, Dheer S, Parajuli A, Augustin AI, Elahi A, Awan O, Dako F. Understanding Biases and Disparities in Radiology AI Datasets: A Review. J Am Coll Radiol 2023; 20:836-841. [PMID: 37454752 DOI: 10.1016/j.jacr.2023.06.015]
Abstract
Artificial intelligence (AI) continues to show great potential in disease detection and diagnosis on medical imaging with increasingly high accuracy. An important component of AI model creation is dataset development for training, validation, and testing. Diverse and high-quality datasets are critical to ensure robust and unbiased AI models that maintain validity, especially in traditionally underserved populations globally. Yet publicly available datasets demonstrate problems with quality and inclusivity. In this literature review, the authors evaluate publicly available medical imaging datasets for demographic, geographic, genetic, and disease representation or lack thereof and call for an increased emphasis on dataset development to maximize the impact of AI models.
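The kind of dataset audit this review calls for can be sketched very simply: compare each demographic group's share of a dataset against a reference population share and flag groups that fall short. This is a generic illustration under invented assumptions; the group names, counts, reference shares, and the 5-point flagging threshold are all toy choices, not figures from the review.

```python
# Toy dataset-representation audit: compare each group's share of a dataset
# against a reference population share. Negative gap = under-represented.
# All counts, shares, and the flagging threshold are invented.

def representation_gaps(dataset_counts, reference_shares):
    total = sum(dataset_counts.values())
    gaps = {}
    for group, ref in reference_shares.items():
        share = dataset_counts.get(group, 0) / total
        gaps[group] = share - ref
    return gaps

counts = {"female": 300, "male": 650, "unreported": 50}   # toy dataset
reference = {"female": 0.51, "male": 0.49}                # toy population shares

gaps = representation_gaps(counts, reference)
under = [g for g, d in gaps.items() if d < -0.05]          # flag gaps worse than 5 points
```

The same pattern extends directly to geographic, genetic, or disease-category representation by swapping in different grouping variables.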
Affiliation(s)
- Satvik Tripathi: Department of Radiology, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania
- Kyla Gabriel: Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts
- Suhani Dheer: Department of Radiology, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania
- Aastha Parajuli: Department of Radiology, Kathmandu University of School of Medical Sciences, Dhulikhel, Nepal
- Ameena Elahi: Department of Information Services, University of Pennsylvania Health System, Philadelphia, Pennsylvania
- Omar Awan: Department of Radiology, University of Maryland School of Medicine, Baltimore, Maryland
- Farouk Dako: Department of Radiology, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania
40
Ingram HR. Generating gender generalists. Br J Gen Pract 2023; 73:395. [PMID: 37652722 PMCID: PMC10471352 DOI: 10.3399/bjgp23x734745]
41
Doo FX, McGinty GB. Building Diversity, Equity, and Inclusion Within Radiology Artificial Intelligence: Representation Matters, From Data to the Workforce. J Am Coll Radiol 2023; 20:852-856. [PMID: 37453602 DOI: 10.1016/j.jacr.2023.06.014]
Abstract
Diversity, equity, and inclusion (DEI) is both a critical ingredient and moral imperative in shaping the future of radiology artificial intelligence (AI) for improved patient care, from design to deployment. At the design level: Potential biases and discrimination within data sets result in inaccurate radiology AI models, and there is an urgent need to purposefully embed DEI principles throughout the AI development and implementation process. At the deployment level: Diverse representation in radiology AI leadership, research, and career development is necessary to avoid worsening structural and historical health inequities. To create an inclusive and equitable AI-enabled future in healthcare, a DEI radiology AI leadership training program may be needed to cultivate a diverse and sustainable pipeline of leaders in the field.
Affiliation(s)
- Florence X Doo: Director of Innovation, University of Maryland Medical Intelligent Imaging Center (UM2ii), Baltimore, Maryland; Member, Committee on Economics in Academic Radiology, under the ACR Commission on Economics
- Geraldine B McGinty: Senior Associate Dean for Clinical Affairs, Professor of Clinical Radiology and Population Health Sciences, Weill Cornell Medicine, Cornell University, New York, New York; Founder, RADEqual; Chair, International Society of Radiology Commission on Education. https://twitter.com/DrGMcGinty
42
Bagheri AB, Rouzi MD, Koohbanani NA, Mahoor MH, Finco MG, Lee M, Najafi B, Chung J. Potential applications of artificial intelligence and machine learning on diagnosis, treatment, and outcome prediction to address health care disparities of chronic limb-threatening ischemia. Semin Vasc Surg 2023; 36:454-459. [PMID: 37863620 DOI: 10.1053/j.semvascsurg.2023.06.003]
Abstract
Chronic limb-threatening ischemia (CLTI) is the most advanced form of peripheral artery disease. CLTI has an extremely poor prognosis and is associated with considerable risk of major amputation, cardiac morbidity, mortality, and poor quality of life. Early diagnosis and targeted treatment of CLTI is critical for improving patient's prognosis. However, this objective has proven elusive, time-consuming, and challenging due to existing health care disparities among patients. In this article, we reviewed how artificial intelligence (AI) and machine learning (ML) can be helpful to accurately diagnose, improve outcome prediction, and identify disparities in the treatment of CLTI. We demonstrate the importance of AI/ML approaches for management of these patients and how available data could be used for computer-guided interventions. Although AI/ML applications to mitigate health care disparities in CLTI are in their infancy, we also highlighted specific AI/ML methods that show potential for addressing health care disparities in CLTI.
Affiliation(s)
- Amir Behzad Bagheri: Interdisciplinary Consortium on Advanced Motion Performance, Division of Vascular Surgery and Endovascular Therapy, Michael E. DeBakey Department of Surgery, Baylor College of Medicine, Houston, TX
- Mohammad Dehghan Rouzi: Interdisciplinary Consortium on Advanced Motion Performance, Division of Vascular Surgery and Endovascular Therapy, Michael E. DeBakey Department of Surgery, Baylor College of Medicine, Houston, TX
- Navid Alemi Koohbanani: Department of Computer Science, Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Mohammad H Mahoor: Department of Electrical and Computer Engineering, University of Denver, Denver, CO
- M G Finco: Interdisciplinary Consortium on Advanced Motion Performance, Division of Vascular Surgery and Endovascular Therapy, Michael E. DeBakey Department of Surgery, Baylor College of Medicine, Houston, TX
- Myeounggon Lee: Interdisciplinary Consortium on Advanced Motion Performance, Division of Vascular Surgery and Endovascular Therapy, Michael E. DeBakey Department of Surgery, Baylor College of Medicine, Houston, TX
- Bijan Najafi: Interdisciplinary Consortium on Advanced Motion Performance, Division of Vascular Surgery and Endovascular Therapy, Michael E. DeBakey Department of Surgery, Baylor College of Medicine, Houston, TX
- Jayer Chung: Division of Vascular Surgery and Endovascular Therapy, Michael E. DeBakey Department of Surgery, Baylor College of Medicine, One Baylor Plaza MS-390, Houston, TX 77030
43
Zhao H, DiMarco M, Ichikawa K, Boulicault M, Perret M, Jillson K, Fair A, DeJesus K, Richardson SS. Making a 'sex-difference fact': Ambien dosing at the interface of policy, regulation, women's health, and biology. Social Studies of Science 2023; 53:475-494. [PMID: 37148216 DOI: 10.1177/03063127231168371]
Abstract
The U.S. Food and Drug Administration's (FDA) 2013 decision to lower recommended Ambien dosing for women has been widely cited as a hallmark example of the importance of sex differences in biomedicine. Using regulatory documents, scientific publications, and media coverage, this article analyzes the making of this highly influential and mobile 'sex-difference fact'. As we show, the FDA's decision was a contingent outcome of the drug approval process. Attending to how a contested sex-difference fact came to anchor elite women's health advocacy, this article excavates the role of regulatory processes, advocacy groups, and the media in producing perceptions of scientific agreement while foreclosing ongoing debate, ultimately enabling the stabilization of a binary, biological sex-difference fact and the distancing of this fact from its conditions of construction.
44
Azizi Z, Lindner S, Shiba Y, Raparelli V, Norris CM, Kublickiene K, Herrero MT, Kautzky-Willer A, Klimek P, Gisinger T, Pilote L, El Emam K. A comparison of synthetic data generation and federated analysis for enabling international evaluations of cardiovascular health. Sci Rep 2023; 13:11540. [PMID: 37460705 DOI: 10.1038/s41598-023-38457-3]
Abstract
Sharing health data for research purposes across international jurisdictions has been a challenge due to privacy concerns. Two privacy enhancing technologies that can enable such sharing are synthetic data generation (SDG) and federated analysis, but their relative strengths and weaknesses have not been evaluated thus far. In this study we compared SDG with federated analysis to enable such international comparative studies. The objective of the analysis was to assess country-level differences in the role of sex on cardiovascular health (CVH) using a pooled dataset of Canadian and Austrian individuals. The Canadian data was synthesized and sent to the Austrian team for analysis. The utility of the pooled (synthetic Canadian + real Austrian) dataset was evaluated by comparing the regression results from the two approaches. The privacy of the Canadian synthetic data was assessed using a membership disclosure test which showed an F1 score of 0.001, indicating low privacy risk. The outcome variable of interest was CVH, calculated through a modified CANHEART index. The main and interaction effect parameter estimates of the federated and pooled analyses were consistent and directionally the same. It took approximately one month to set up the synthetic data generation platform and generate the synthetic data, whereas it took over 1.5 years to set up the federated analysis system. Synthetic data generation can be an efficient and effective tool for enabling multi-jurisdictional studies while addressing privacy concerns.
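The membership disclosure test the abstract reports (F1 = 0.001, indicating low privacy risk) can be sketched generically: an "attacker" holds a set of real records, only some of which were in the training data, and claims a record was a training member when it lies within a distance threshold of any synthetic record; the claims are then scored with F1. The implementation below is an illustrative assumption, not the authors' actual procedure, and the records, distance metric, and threshold are toy choices.

```python
# Toy membership-disclosure test for synthetic data. An attacker claims a real
# record was in the training data if it is within `threshold` Hamming distance
# of any synthetic record; claims are scored with F1 against true membership.
# Low F1 = the synthetic data leaks little membership information.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def membership_f1(attack_records, in_training, synthetic, threshold):
    claims = [min(hamming(r, s) for s in synthetic) <= threshold
              for r in attack_records]
    tp = sum(1 for c, m in zip(claims, in_training) if c and m)
    fp = sum(1 for c, m in zip(claims, in_training) if c and not m)
    fn = sum(1 for c, m in zip(claims, in_training) if not c and m)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return (2 * precision * recall / (precision + recall)
            if (precision + recall) else 0.0)

# Toy data: two synthetic records; three attack records, two of which
# were actually in the (hypothetical) training set.
synthetic = [(0, 0, 1), (1, 1, 0)]
attack    = [(0, 0, 1), (1, 0, 0), (1, 1, 1)]
member    = [True, False, True]
f1 = membership_f1(attack, member, synthetic, threshold=0)
```

An F1 near zero, as in the study, means the attacker does little better than guessing.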
Affiliation(s)
- Zahra Azizi: Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, 5252 De Maisonneuve Blvd, Office 2B.39, Montréal, QC, H4A 3S5, Canada
- Simon Lindner: Department of Internal Medicine III, Division of Endocrinology and Metabolism, Gender Medicine Unit, Medical University of Vienna, Vienna, Austria
- Yumika Shiba: Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, 5252 De Maisonneuve Blvd, Office 2B.39, Montréal, QC, H4A 3S5, Canada; Faculty of Medicine, McGill University, Montreal, Canada
- Valeria Raparelli: Department of Translational Medicine, University of Ferrara, Ferrara, Italy; Faculty of Nursing, University of Alberta, Edmonton, AB, Canada
- Colleen M Norris: Faculty of Nursing, University of Alberta, Edmonton, AB, Canada; Heart and Stroke Strategic Clinical Networks, Alberta Health Services, Alberta, Canada
- Maria Trinidad Herrero: Clinical & Experimental Neuroscience (NiCE-IMIB-IUIE), School of Medicine, University of Murcia, Murcia, Spain
- Alexandra Kautzky-Willer: Department of Internal Medicine III, Division of Endocrinology and Metabolism, Gender Medicine Unit, Medical University of Vienna, Vienna, Austria
- Peter Klimek: Section for Science of Complex Systems, CeMSIIS, Medical University of Vienna, Vienna, Austria; Complexity Science Hub Vienna, Vienna, Austria
- Teresa Gisinger: Division of Endocrinology and Metabolism, Medical University of Vienna, Vienna, Austria
- Louise Pilote: Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, 5252 De Maisonneuve Blvd, Office 2B.39, Montréal, QC, H4A 3S5, Canada; Divisions of Clinical Epidemiology and General Internal Medicine, McGill University Health Centre Research Institute, Montreal, QC, Canada
- Khaled El Emam: Children's Hospital of Eastern Ontario Research Institute, 401 Smyth Road, Ottawa, ON, K1H 8L1, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada; Replica Analytics Ltd, Ottawa, ON, Canada
45
Piette JD, Thomas L, Newman S, Marinec N, Krauss J, Chen J, Wu Z, Bohnert ASB. An Automatically Adaptive Digital Health Intervention to Decrease Opioid-Related Risk While Conserving Counselor Time: Quantitative Analysis of Treatment Decisions Based on Artificial Intelligence and Patient-Reported Risk Measures. J Med Internet Res 2023; 25:e44165. [PMID: 37432726 PMCID: PMC10369305 DOI: 10.2196/44165]
Abstract
BACKGROUND Some patients prescribed opioid analgesic (OA) medications for pain experience serious side effects, including dependence, sedation, and overdose. As most patients are at low risk for OA-related harms, risk reduction interventions requiring multiple counseling sessions are impractical on a large scale. OBJECTIVE This study evaluates whether an intervention based on reinforcement learning (RL), a field of artificial intelligence, learned through experience to personalize interactions with patients with pain discharged from the emergency department (ED) and decreased self-reported OA misuse behaviors while conserving counselors' time. METHODS We used data representing 2439 weekly interactions between a digital health intervention ("Prescription Opioid Wellness and Engagement Research in the ED" [PowerED]) and 228 patients with pain discharged from 2 EDs who reported recent opioid misuse. During each patient's 12 weeks of intervention, PowerED used RL to select from 3 treatment options: a brief motivational message delivered via an interactive voice response (IVR) call, a longer motivational IVR call, or a live call from a counselor. The algorithm selected session types for each patient each week, with the goal of minimizing OA risk, defined in terms of a dynamic score reflecting patient reports during IVR monitoring calls. When a live counseling call was predicted to have a similar impact on future risk as an IVR message, the algorithm favored IVR to conserve counselor time. We used logit models to estimate changes in the relative frequency of each session type as PowerED gained experience. Poisson regression was used to examine the changes in self-reported OA risk scores over calendar time, controlling for the ordinal session number (1st to 12th). RESULTS Participants on average were 40 (SD 12.7) years of age; 66.7% (152/228) were women and 51.3% (117/228) were unemployed. Most participants (175/228, 76.8%) reported chronic pain, and 46.2% (104/225) had moderate to severe depressive symptoms. As PowerED gained experience through interactions over a period of 142 weeks, it delivered fewer live counseling sessions than brief IVR sessions (P=.006) and extended IVR sessions (P<.001). Live counseling sessions were selected 33.5% of the time in the first 5 weeks of interactions (95% CI 27.4%-39.7%) but only for 16.4% of sessions (95% CI 12.7%-20%) after 125 weeks. Controlling for each patient's changes during the course of treatment, this adaptation of treatment-type allocation led to progressively greater improvements in self-reported OA risk scores (P<.001) over calendar time, as measured by the number of weeks since enrollment began. Improvement in risk behaviors over time was especially pronounced among patients with the highest risk at baseline (P=.02). CONCLUSIONS The RL-supported program learned which treatment modalities worked best to improve self-reported OA risk behaviors while conserving counselors' time. RL-supported interventions represent a scalable solution for patients with pain receiving OA prescriptions. TRIAL REGISTRATION Clinicaltrials.gov NCT02990377; https://classic.clinicaltrials.gov/ct2/show/NCT02990377.
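The treatment-allocation idea this abstract describes can be sketched as a simple bandit: keep a running estimate of each session type's effect on future risk and, when a live counselor call is predicted to do no better than an IVR message, prefer IVR to conserve counselor time. This is a generic epsilon-greedy sketch under invented assumptions, not PowerED's actual algorithm; the session names, cost weights, and tie-breaking tolerance are illustrative.

```python
# Toy epsilon-greedy selector over the three session types the trial used.
# Ties (and near-ties) in estimated risk reduction are broken in favor of
# cheaper IVR sessions, mirroring the "favor IVR to conserve counselor
# time" rule described above. All parameters are invented.
import random

SESSIONS = ["brief_ivr", "extended_ivr", "live_counselor"]
COST = {"brief_ivr": 0.0, "extended_ivr": 0.0, "live_counselor": 1.0}

class SessionSelector:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {s: 0.0 for s in SESSIONS}   # estimated risk reduction
        self.count = {s: 0 for s in SESSIONS}

    def select(self, rng=random):
        if rng.random() < self.epsilon:            # occasional exploration
            return rng.choice(SESSIONS)
        best = max(self.value.values())
        near_best = [s for s in SESSIONS if best - self.value[s] < 1e-9]
        return min(near_best, key=lambda s: COST[s])  # cheapest of the near-best

    def update(self, session, risk_reduction):
        """Incremental mean update of the session's estimated effect."""
        self.count[session] += 1
        self.value[session] += (risk_reduction - self.value[session]) / self.count[session]

selector = SessionSelector(epsilon=0.0)       # greedy for the demo
first = selector.select()                     # all estimates tie, so cheap IVR wins
selector.update("live_counselor", 1.0)        # pretend a live call reduced risk a lot
```

The real intervention conditioned these estimates on patient state (a contextual formulation), but the conserve-counselor-time tie-breaking logic is the same in spirit.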
Affiliation(s)
- John D Piette: Ann Arbor Department of Veterans Affairs Center for Clinical Management Research, Ann Arbor, MI, United States; Department of Health Behavior Health Education, School of Public Health, University of Michigan, Ann Arbor, MI, United States
- Laura Thomas: Ann Arbor Department of Veterans Affairs Center for Clinical Management Research, Ann Arbor, MI, United States; Department of Anesthesiology, School of Medicine, University of Michigan, Ann Arbor, MI, United States
- Sean Newman: Ann Arbor Department of Veterans Affairs Center for Clinical Management Research, Ann Arbor, MI, United States; Department of Health Behavior Health Education, School of Public Health, University of Michigan, Ann Arbor, MI, United States
- Nicolle Marinec: Ann Arbor Department of Veterans Affairs Center for Clinical Management Research, Ann Arbor, MI, United States; Department of Health Behavior Health Education, School of Public Health, University of Michigan, Ann Arbor, MI, United States
- Joel Krauss: Department of Emergency Medicine, Trinity Health St. Joseph Mercy, Ann Arbor, MI, United States
- Jenny Chen: Ann Arbor Department of Veterans Affairs Center for Clinical Management Research, Ann Arbor, MI, United States; Department of Health Behavior Health Education, School of Public Health, University of Michigan, Ann Arbor, MI, United States
- Zhenke Wu: Department of Biostatistics, School of Public Health, University of Michigan, Ann Arbor, MI, United States
- Amy S B Bohnert: Ann Arbor Department of Veterans Affairs Center for Clinical Management Research, Ann Arbor, MI, United States; Department of Anesthesiology, School of Medicine, University of Michigan, Ann Arbor, MI, United States
Collapse
46
Levy M, Pauzner M, Rosenblum S, Peleg M. Achieving trust in health-behavior-change artificial intelligence apps (HBC-AIApp) development: a multi-perspective guide. J Biomed Inform 2023:104414. PMID: 37276948. DOI: 10.1016/j.jbi.2023.104414.
Abstract
OBJECTIVE Trust determines the success of health-behavior-change artificial intelligence apps (HBC-AIApps). Developers of such apps need theory-based, practical methods that can guide them in achieving such trust. Our study aimed to develop a comprehensive conceptual model and development process that guide developers in building HBC-AIApps that support trust creation among the apps' users. METHODS We applied a multi-disciplinary approach in which medical informatics, human-centered design, and holistic health methods are integrated to address the trust challenge in HBC-AIApps. The integration extends a conceptual model of trust in AI developed by Jermutus et al., whose properties guide the extension of the IDEAS (integrate, design, assess, and share) HBC-App development process. RESULTS The HBC-AIApp framework consists of three main blocks: (1) system development methods that study users' complex reality, that is, their perceptions, needs, goals, and environment; (2) mediators and other stakeholders who are important for developing and operating the HBC-AIApp, and boundary objects that examine users' activities via the HBC-AIApp; and (3) the HBC-AIApp's structural components, AI logic, and physical implementation. Together, these blocks provide the extended conceptual model of trust in HBC-AIApps and the extended IDEAS process. DISCUSSION The developed HBC-AIApp framework drew on our own experience in developing trust in an HBC-AIApp. Further research will focus on studying the application of the proposed framework and whether applying it supports trust creation in such apps.
Affiliation(s)
- Meira Levy
- School of Industrial Engineering and Management, Shenkar, the College of Engineering Design and Art, Ramat-Gan, Israel; Department of Information Systems, University of Haifa, Haifa, Israel.
- Michal Pauzner
- The Visual Communication Department, Shenkar, the College of Engineering Design and Art, Ramat-Gan, Israel
- Sara Rosenblum
- Department of Occupational Therapy, University of Haifa, Haifa, Israel
- Mor Peleg
- Department of Information Systems, University of Haifa, Haifa, Israel
47
Rudd J, Igbrude C. A global perspective on data powering responsible AI solutions in health applications. AI and Ethics 2023:1-11. PMID: 37360149. PMCID: PMC10231277. DOI: 10.1007/s43681-023-00302-8.
Abstract
Healthcare AI solutions have the potential to transform access, quality of care, and improve outcomes for patients globally. This review suggests consideration of a more global perspective, with a particular focus on marginalized communities, during the development of healthcare AI solutions. The review focuses on one aspect (medical applications) to allow technologists to build solutions in today's environment with an understanding of the challenges they face. The following sections explore and discuss the current challenges in the underlying data and AI technology design on healthcare solutions for global deployment. We highlight some of the factors that lead to gaps in data, gaps around regulations for the healthcare sector, and infrastructural challenges in power and network connectivity, as well as lack of social systems for healthcare and education, which pose challenges to the potential universal impacts of such technologies. We recommend using these considerations in developing prototype healthcare AI solutions to better capture the needs of a global population.
48
Bienefeld N, Boss JM, Lüthy R, Brodbeck D, Azzati J, Blaser M, Willms J, Keller E. Solving the explainable AI conundrum by bridging clinicians' needs and developers' goals. NPJ Digit Med 2023; 6:94. PMID: 37217779. DOI: 10.1038/s41746-023-00837-4.
Abstract
Explainable artificial intelligence (XAI) has emerged as a promising solution for addressing the implementation challenges of AI/ML in healthcare. However, little is known about how developers and clinicians interpret XAI and what conflicting goals and requirements they may have. This paper presents the findings of a longitudinal multi-method study involving 112 developers and clinicians co-designing an XAI solution for a clinical decision support system. Our study identifies three key differences between developer and clinician mental models of XAI, including opposing goals (model interpretability vs. clinical plausibility), different sources of truth (data vs. patient), and the role of exploring new vs. exploiting old knowledge. Based on our findings, we propose design solutions that can help address the XAI conundrum in healthcare, including the use of causal inference models, personalized explanations, and ambidexterity between exploration and exploitation mindsets. Our study highlights the importance of considering the perspectives of both developers and clinicians in the design of XAI systems and provides practical recommendations for improving the effectiveness and usability of XAI in healthcare.
Affiliation(s)
- Nadine Bienefeld
- Department of Management, Technology, and Economics, ETH Zurich, Zürich, Switzerland.
- Jens Michael Boss
- Neurocritical Care Unit, Department of Neurosurgery and Institute of Intensive Care Medicine, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zürich, Switzerland
- Rahel Lüthy
- Institute for Medical Engineering and Medical Informatics, School of Life Sciences FHNW, Muttenz, Switzerland
- Dominique Brodbeck
- Institute for Medical Engineering and Medical Informatics, School of Life Sciences FHNW, Muttenz, Switzerland
- Jan Azzati
- Institute for Medical Engineering and Medical Informatics, School of Life Sciences FHNW, Muttenz, Switzerland
- Mirco Blaser
- Institute for Medical Engineering and Medical Informatics, School of Life Sciences FHNW, Muttenz, Switzerland
- Jan Willms
- Neurocritical Care Unit, Department of Neurosurgery and Institute of Intensive Care Medicine, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zürich, Switzerland
- Emanuela Keller
- Neurocritical Care Unit, Department of Neurosurgery and Institute of Intensive Care Medicine, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Zürich, Switzerland
49
Safarlou CW, Jongsma KR, Vermeulen R, Bredenoord AL. The ethical aspects of exposome research: a systematic review. Exposome 2023; 3:osad004. PMID: 37745046. PMCID: PMC7615114. DOI: 10.1093/exposome/osad004.
Abstract
In recent years, exposome research has been put forward as the next frontier for the study of human health and disease. Exposome research entails the analysis of the totality of environmental exposures and their corresponding biological responses within the human body. Increasingly, this is operationalized by big-data approaches to map the effects of internal as well as external exposures using smart sensors and multiomics technologies. However, the ethical implications of exposome research are still only rarely discussed in the literature. Therefore, we conducted a systematic review of the academic literature regarding both the exposome and underlying research fields and approaches, to map the ethical aspects that are relevant to exposome research. We identify five ethical themes that are prominent in ethics discussions: the goals of exposome research, its standards, its tools, how it relates to study participants, and the consequences of its products. Furthermore, we provide a number of general principles for how future ethics research can best make use of our comprehensive overview of the ethical aspects of exposome research. Lastly, we highlight three aspects of exposome research that are most in need of ethical reflection: the actionability of its findings, the epidemiological or clinical norms applicable to exposome research, and the meaning and action-implications of bias.
Affiliation(s)
- Caspar W. Safarlou
- Department of Global Public Health and Bioethics, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Karin R. Jongsma
- Department of Global Public Health and Bioethics, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Roel Vermeulen
- Department of Global Public Health and Bioethics, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Population Health Sciences, Utrecht University, Utrecht, The Netherlands
- Annelien L. Bredenoord
- Department of Global Public Health and Bioethics, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- Erasmus School of Philosophy, Erasmus University Rotterdam, Rotterdam, The Netherlands
50
Regitz-Zagrosek V, Gebhard C. Gender medicine: effects of sex and gender on cardiovascular disease manifestation and outcomes. Nat Rev Cardiol 2023; 20:236-247. PMID: 36316574. PMCID: PMC9628527. DOI: 10.1038/s41569-022-00797-4.
Abstract
Despite a growing body of evidence, the distinct contributions of biological sex and the sociocultural dimension of gender to the manifestations and outcomes of ischaemic heart disease and heart failure remain unknown. The intertwining of sex-based differences in genetic and hormonal mechanisms with the complex dimension of gender and its different components and determinants that result in different disease phenotypes in women and men needs to be elucidated. The relative contribution of purely biological factors, such as genes and hormones, to cardiovascular phenotypes and outcomes is not yet fully understood. Increasing awareness of the effects of gender has led to efforts to measure gender in retrospective and prospective clinical studies and the development of gender scores. However, the synergistic or opposing effects of sex and gender on cardiovascular traits and on ischaemic heart disease and heart failure mechanisms have not yet been systematically described. Furthermore, specific considerations of sex-related and gender-related factors in gender dysphoria or in heart-brain interactions and their association with cardiovascular disease are still lacking. In this Review, we summarize contemporary evidence on the distinct effects of sex and gender as well as of their interactions on cardiovascular disease and how they favourably or unfavourably influence the pathogenesis, clinical manifestations and treatment responses in patients with ischaemic heart disease or heart failure.
Affiliation(s)
- Vera Regitz-Zagrosek
- Institute for Gender in Medicine, Charité University Medicine Berlin, Berlin, Germany.
- Faculty of Medicine, University of Zurich, Zurich, Switzerland.
- Catherine Gebhard
- Department of Nuclear Medicine, University Hospital Zurich, Zurich, Switzerland
- Department of Cardiology, Inselspital Bern University Hospital, Bern, Switzerland