101. [Artificial intelligence in urology-opportunities and possibilities]. Urologie (Heidelberg) 2023; 62:383-388. [PMID: 36729176; PMCID: PMC10073044; DOI: 10.1007/s00120-023-02026-3]
Abstract
The use of artificial intelligence (AI) in urology can contribute significantly to the individualization of diagnostics and therapy as well as to healthcare cost reduction. The potential applications and advantages of AI in medicine are often underestimated or incompletely understood. This makes it difficult to conceptually solve relevant medical problems using AI. With current advances in computer science, multiple, highly complex nonmedical processes have already been studied and optimized in an automated fashion. The development of AI models, if applied correctly, can lead to more effective processing and analysis of patient-related data and correspondingly optimized diagnosis and therapy of urological patients. In this review, the current status of AI applications in medicine and their opportunities and possibilities in urology are presented from a conceptual perspective using practical examples.
102. Lehoux P, Rivard L, de Oliveira RR, Mörch CM, Alami H. Tools to foster responsibility in digital solutions that operate with or without artificial intelligence: A scoping review for health and innovation policymakers. Int J Med Inform 2023; 170:104933. [PMID: 36521423; DOI: 10.1016/j.ijmedinf.2022.104933]
Abstract
BACKGROUND Digital health solutions that operate with or without artificial intelligence (D/AI) raise several responsibility challenges. Though many frameworks and tools have been developed, determining what principles should be translated into practice remains under debate. This scoping review aims to provide policymakers with a rigorous body of knowledge by asking: 1) What kinds of practice-oriented tools are available? 2) On what principles do they predominantly rely? 3) What are their limitations? METHODS We searched six academic and three grey literature databases for practice-oriented tools, defined as frameworks and/or sets of principles with clear operational explanations, published in English or French from 2015 to 2021. Characteristics of the tools were qualitatively coded, and variations across the dataset were identified through descriptive statistics and a network analysis. FINDINGS A total of 56 tools met our inclusion criteria: 19 health-specific tools (33.9%) and 37 generic tools (66.1%). They adopt a normative (57.1%), reflective (35.7%), operational (3.6%), or mixed approach (3.6%) to guide developers (14.3%), managers (16.1%), end users (10.7%), policymakers (5.4%), or multiple groups (53.6%). The frequency of 40 principles varies greatly across tools (from 0% for 'environmental sustainability' to 83.8% for 'transparency'). While 50% or more of the generic tools promote up to 19 principles, 50% or more of the health-specific tools promote 10 principles, and 50% or more of all tools disregard 21 principles. In contrast to the scattered network of principles proposed by academia, the business sector emphasizes closely connected principles. Few tools rely on a formal methodology (17.9%). CONCLUSION Despite a lack of consensus, there is a solid knowledge base for policymakers to anchor their role in such a dynamic field. Because several tools lack rigour and ignore key social, economic, and environmental issues, an integrated and methodologically sound approach to responsibility in D/AI solutions is warranted.
Affiliations
- P Lehoux, Department of Health Management, Evaluation and Policy, Université de Montréal, Center for Public Health Research (CReSP), Université de Montréal and CIUSSS du Centre-Sud-de-l'Île-de-Montréal, 7101 Av du Parc, Montréal, Québec H3N 1X9, Canada.
- L Rivard, Center for Public Health Research (CReSP), Université de Montréal, Canada.
- C M Mörch, FARI - AI for the Common Good Institute, Université Libre de Bruxelles, 10-12 Cantersteen, 1000 Brussels, Belgium.
- H Alami, Interdisciplinary Research in Health Sciences, Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Primary Care Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, United Kingdom.
103. Zhang J, Zhang ZM. Ethics and governance of trustworthy medical artificial intelligence. BMC Med Inform Decis Mak 2023; 23:7. [PMID: 36639799; PMCID: PMC9840286; DOI: 10.1186/s12911-023-02103-9]
Abstract
BACKGROUND The growing application of artificial intelligence (AI) in healthcare has brought technological breakthroughs to traditional diagnosis and treatment, but it is accompanied by many risks and challenges. These adverse effects are also seen as ethical issues that affect trustworthiness in medical AI and need to be managed through identification, prognosis, and monitoring. METHODS We adopted a multidisciplinary approach and summarized five subjects that influence the trustworthiness of medical AI: data quality, algorithmic bias, opacity, safety and security, and responsibility attribution, and discussed these factors from the perspectives of technology, law, and healthcare stakeholders and institutions. An ethical framework spanning ethical values, ethical principles, and ethical norms is used to propose corresponding ethical governance countermeasures for trustworthy medical AI from the ethical, legal, and regulatory aspects. RESULTS Medical data are primarily unstructured, lacking uniform and standardized annotation, and data quality will directly affect the quality of medical AI algorithm models. Algorithmic bias can affect AI clinical predictions and exacerbate health disparities. The opacity of algorithms affects patients' and doctors' trust in medical AI, and algorithmic errors or security vulnerabilities can pose significant risks and harm to patients. The involvement of medical AI in clinical practice may threaten doctors' and patients' autonomy and dignity. When accidents occur with medical AI, the responsibility attribution is not clear. All these factors affect people's trust in medical AI. CONCLUSIONS In order to make medical AI trustworthy, at the ethical level, the ethical value orientation of promoting human health should first and foremost be considered as the top-level design. At the legal level, current medical AI does not have moral status and humans remain the duty bearers. At the regulatory level, strengthening data quality management, improving algorithm transparency and traceability to reduce algorithm bias, and regulating and reviewing the whole process of the AI industry to control risks are proposed. It is also necessary to encourage multiple parties to discuss and assess AI risks and social impacts, and to strengthen international cooperation and communication.
Affiliations
- Jie Zhang, Institute of Literature in Chinese Medicine, Nanjing University of Chinese Medicine, Nanjing 210023, China; Nantong University Xinglin College, Nantong 226236, China.
- Zong-ming Zhang, Research Center of Chinese Medicine Culture, Nanjing University of Chinese Medicine, Nanjing 210023, China.
104. Sigfrids A, Leikas J, Salo-Pöntinen H, Koskimies E. Human-centricity in AI governance: A systemic approach. Front Artif Intell 2023; 6:976887. [PMID: 36872934; PMCID: PMC9979257; DOI: 10.3389/frai.2023.976887]
Abstract
Human-centricity is considered a central aspect in the development and governance of artificial intelligence (AI). Various strategies and guidelines highlight the concept as a key goal. However, we argue that current uses of Human-Centered AI (HCAI) in policy documents and AI strategies risk downplaying promises of creating desirable, emancipatory technology that promotes human wellbeing and the common good. First, HCAI, as it appears in policy discourses, is the result of aiming to adapt the concept of human-centered design (HCD) to the public governance context of AI but without proper reflection on how it should be reformed to suit the new task environment. Second, the concept is mainly used in reference to realizing human and fundamental rights, which are necessary but not sufficient for technological emancipation. Third, the concept is used ambiguously in policy and strategy discourses, making it unclear how it should be operationalized in governance practices. This article explores means and approaches for using the HCAI approach for technological emancipation in the context of public AI governance. We propose that the potential for emancipatory technology development rests on expanding the traditional user-centered view of technology design to involve community- and society-centered perspectives in public governance. Developing public AI governance in this way relies on enabling inclusive governance modalities that enhance the social sustainability of AI deployment. We discuss mutual trust, transparency, communication, and civic tech as key prerequisites for socially sustainable and human-centered public AI governance. Finally, the article introduces a systemic approach to ethically and socially sustainable, human-centered AI development and deployment.
Affiliations
- Anton Sigfrids, VTT Technical Research Centre of Finland Ltd, Espoo, Finland
- Jaana Leikas, VTT Technical Research Centre of Finland Ltd, Espoo, Finland
- Henrikki Salo-Pöntinen, Faculty of Information Technology, Cognitive Science, University of Jyväskylä, Jyväskylä, Finland
- Emmi Koskimies, Faculty of Management and Business, Administrative Sciences, Tampere University, Tampere, Finland
105. Nittas V, Daniore P, Landers C, Gille F, Amann J, Hubbs S, Puhan MA, Vayena E, Blasimme A. Beyond high hopes: A scoping review of the 2019-2021 scientific discourse on machine learning in medical imaging. PLOS Digit Health 2023; 2:e0000189. [PMID: 36812620; PMCID: PMC9931290; DOI: 10.1371/journal.pdig.0000189]
Abstract
Machine learning has become a key driver of the digital health revolution. That comes with a fair share of high hopes and hype. We conducted a scoping review on machine learning in medical imaging, providing a comprehensive outlook of the field's potential, limitations, and future directions. Most reported strengths and promises included improved (a) analytic power, (b) efficiency, (c) decision making, and (d) equity. Most reported challenges included (a) structural barriers and imaging heterogeneity, (b) scarcity of well-annotated, representative, and interconnected imaging datasets, (c) validity and performance limitations, including bias and equity issues, and (d) the still-missing clinical integration. The boundaries between strengths and challenges, with cross-cutting ethical and regulatory implications, remain blurred. The literature emphasizes explainability and trustworthiness, with a largely missing discussion about the specific technical and regulatory challenges surrounding these concepts. Future trends are expected to shift towards multi-source models, combining imaging with an array of other data, in a more open-access and explainable manner.
Affiliations
- Vasileios Nittas, Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland; Epidemiology, Biostatistics and Prevention Institute, Faculty of Medicine, Faculty of Science, University of Zurich, Zurich, Switzerland
- Paola Daniore, Institute for Implementation Science in Health Care, Faculty of Medicine, University of Zurich, Switzerland; Digital Society Initiative, University of Zurich, Switzerland
- Constantin Landers, Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- Felix Gille, Institute for Implementation Science in Health Care, Faculty of Medicine, University of Zurich, Switzerland; Digital Society Initiative, University of Zurich, Switzerland
- Julia Amann, Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- Shannon Hubbs, Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- Milo Alan Puhan, Epidemiology, Biostatistics and Prevention Institute, Faculty of Medicine, Faculty of Science, University of Zurich, Zurich, Switzerland
- Effy Vayena, Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- Alessandro Blasimme, Health Ethics and Policy Lab, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
106. Temporal convolutional networks and data rebalancing for clinical length of stay and mortality prediction. Sci Rep 2022; 12:21247. [PMID: 36481828; PMCID: PMC9732283; DOI: 10.1038/s41598-022-25472-z]
Abstract
It is critical for hospitals to accurately predict patient length of stay (LOS) and mortality in real time. We evaluate temporal convolutional networks (TCNs) and data rebalancing methods to predict LOS and mortality. This is a retrospective cohort study utilizing the MIMIC-III database. The MIMIC-Extract pipeline processes 24-hour time-series clinical objective data for 23,944 unique patient records. TCN performance is compared to both baseline and state-of-the-art machine learning models, including logistic regression, random forest, and a gated recurrent unit with decay (GRU-D). Models are evaluated for binary classification tasks (LOS > 3 days, LOS > 7 days, in-hospital mortality, and in-ICU mortality) with and without data rebalancing and analyzed for clinical runtime feasibility. Data are split temporally, and evaluations utilize tenfold cross-validation (stratified splits) followed by simulated prospective hold-out validation. In mortality tasks, TCN outperforms baselines in 6 of 8 metrics (area under the receiver operating characteristic curve, area under the precision-recall curve (AUPRC), and F-1 measure for in-hospital mortality; AUPRC, accuracy, and F-1 for in-ICU mortality). In LOS tasks, TCN performs competitively with the GRU-D (best in 6 of 8) and the random forest model (best in 2 of 8). Rebalancing improves predictive power across multiple methods and outcome ratios. The TCN offers strong performance in mortality classification and offers improved computational efficiency on GPU-enabled systems over popular RNN architectures. Dataset rebalancing can improve model predictive power in imbalanced learning. We conclude that temporal convolutional networks should be included in model searches for critical care outcome prediction systems.
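The core building block this entry names, the dilated causal convolution of a TCN, is compact enough to sketch. Below is a minimal, illustrative TCN-style binary classifier for fixed-length clinical time series in PyTorch; the layer sizes, the 24 x 10 input shape, and the toy training step are assumptions for illustration, not the cited study's architecture or data pipeline.

```python
# Minimal sketch of a dilated, causal TCN for a binary outcome (e.g., LOS > 7 days).
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilation, kernel_size=3):
        super().__init__()
        # Left-only (causal) padding so each output sees only past time steps.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                      # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))
        return self.relu(self.conv(x))

class TCNClassifier(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.blocks = nn.Sequential(
            TCNBlock(n_features, hidden, dilation=1),
            TCNBlock(hidden, hidden, dilation=2),
            TCNBlock(hidden, hidden, dilation=4),  # receptive field grows exponentially
        )
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, features)
        h = self.blocks(x.transpose(1, 2))
        return self.head(h[:, :, -1])          # logit from the last time step

# Toy usage: 32 patients, 24 hourly measurements of 10 clinical variables.
model = TCNClassifier(n_features=10)
logits = model(torch.randn(32, 24, 10))
# Class rebalancing could be approximated here with BCEWithLogitsLoss(pos_weight=...)
# or by oversampling the minority class before batching.
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.randint(0, 2, (32,)).float())
```

The exponentially growing dilations let a small stack of convolutions cover the full 24-hour window, which is one reason TCNs can be more GPU-efficient than recurrent architectures.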
107. Alsobhi M, Sachdev HS, Chevidikunnan MF, Basuodan R, K U DK, Khan F. Facilitators and Barriers of Artificial Intelligence Applications in Rehabilitation: A Mixed-Method Approach. Int J Environ Res Public Health 2022; 19:15919. [PMID: 36497993; PMCID: PMC9737928; DOI: 10.3390/ijerph192315919]
Abstract
Artificial intelligence (AI) has been used in physical therapy diagnosis and management for various impairments. Physical therapists (PTs) need to be able to utilize the latest innovative treatment techniques to improve the quality of care. The study aimed to describe PTs' views on AI and investigate multiple factors as indicators of AI knowledge, attitude, and adoption among PTs. Moreover, the study aimed to identify the barriers to using AI in rehabilitation. Two hundred and thirty-six PTs participated voluntarily in the study. A concurrent mixed-method design was used to document PTs' opinions regarding AI deployment in rehabilitation. A self-administered survey covering several aspects, including demographics, knowledge, uses, advantages, impacts, and barriers limiting AI utilization in rehabilitation, was used. A total of 63.3% of PTs reported that they had not encountered any kind of AI application at work. The major factors predicting a higher level of AI knowledge among PTs were being a non-academic worker (OR = 1.77 [95% CI: 1.01 to 3.12], p = 0.04), being a senior PT (OR = 2.44 [95% CI: 1.40 to 4.22], p = 0.002), and having a Master/Doctorate degree (OR = 1.97 [95% CI: 1.11 to 3.50], p = 0.02). However, the cost and resources of AI were the major reported barriers to adopting AI-based technologies. The study highlighted a remarkable dearth of AI knowledge among PTs. AI and advanced knowledge in technology need to be urgently transferred to PTs.
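For readers unfamiliar with how survey predictors yield odds ratios like the OR = 2.44 [95% CI: 1.40 to 4.22] above, the sketch below fits a logistic regression with statsmodels and exponentiates the coefficients and confidence bounds. The variable names and synthetic responses are assumptions for illustration, not the study's data.

```python
# Hedged sketch: deriving odds ratios with 95% CIs from a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 236  # same order of magnitude as the surveyed sample
df = pd.DataFrame({
    "senior": rng.integers(0, 2, n),
    "postgrad_degree": rng.integers(0, 2, n),
    "non_academic": rng.integers(0, 2, n),
})
# Synthetic outcome: 1 = high AI knowledge, generated from an assumed model.
logit_p = -0.5 + 0.9 * df["senior"] + 0.7 * df["postgrad_degree"]
df["high_knowledge"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(df[["senior", "postgrad_degree", "non_academic"]])
fit = sm.Logit(df["high_knowledge"], X).fit(disp=0)

# Exponentiating coefficients and CI bounds gives odds ratios with 95% CIs.
ci = np.exp(fit.conf_int())
out = pd.DataFrame({"OR": np.exp(fit.params), "CI_low": ci[0], "CI_high": ci[1]})
print(out.round(2))
```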
Affiliations
- Mashael Alsobhi, Department of Physical Therapy, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Jeddah 22252, Saudi Arabia
- Harpreet Singh Sachdev, Department of Neurology, All India Institute of Medical Sciences, New Delhi 110029, India
- Mohamed Faisal Chevidikunnan, Department of Physical Therapy, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Jeddah 22252, Saudi Arabia
- Reem Basuodan, Department of Rehabilitation Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Dhanesh Kumar K U, Nitte Institute of Physiotherapy, Nitte University, Deralaktte, Mangalore 575022, India
- Fayaz Khan, Department of Physical Therapy, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Jeddah 22252, Saudi Arabia
108. Presented a Framework of Computational Modeling to Identify the Patient Admission Scheduling Problem in the Healthcare System. J Healthc Eng 2022; 2022:1938719. [PMID: 36483659; PMCID: PMC9726263; DOI: 10.1155/2022/1938719]
Abstract
Operating room scheduling is a prominent study topic due to its complexity and significance. The increasing number of technical operating room scheduling articles produced each year calls for another evaluation of the literature to enable academics to respond to new trends more quickly. The mathematical application of a model for the patient admission scheduling problem with stochastic arrivals and departures is the subject of this study. The approach for applying our model to real-world problems is discussed here. We present a solution technique for efficient computing, a numerical model analysis, and examples to demonstrate the methodology. This study examined the challenge of assigning procedures to operating rooms in the face of uncertainty regarding surgery length and the arrival of emergency patients, based on a flexible policy (capacity reservation). Using simple numerical examples, we demonstrate that methods derived from deterministic models are inadequate compared with the solutions produced by our stochastic model. We also use heuristics to estimate the objective function and to build more complicated numerical examples for large-scale problems, demonstrating that our methodology can be applied quickly to real-world situations that often involve large data sets.
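The capacity-reservation policy the abstract refers to can be illustrated with a small Monte Carlo simulation: reserve a fraction of daily operating-room minutes for emergency arrivals and measure the resulting overtime. All distributions and parameters below are assumptions for illustration, not the paper's stochastic model.

```python
# Illustrative Monte Carlo sketch of capacity reservation for emergency cases.
import numpy as np

rng = np.random.default_rng(42)
DAY_MINUTES = 480  # one operating room, one 8-hour day

def mean_overtime(reserved_fraction, n_days=10_000):
    bookable = DAY_MINUTES * (1 - reserved_fraction)
    overtime = 0.0
    for _ in range(n_days):
        # Book elective cases (planned at 90 min each) up to the bookable limit...
        n_elective = int(bookable // 90)
        # ...but actual durations are stochastic (lognormal around the plan).
        elective = rng.lognormal(np.log(90), 0.25, size=n_elective).sum()
        # Emergencies arrive at random (Poisson count, random durations).
        emergency = rng.lognormal(np.log(60), 0.3, size=rng.poisson(1.5)).sum()
        overtime += max(0.0, elective + emergency - DAY_MINUTES)
    return overtime / n_days

for frac in (0.0, 0.1, 0.2):   # 0.0 mimics deterministic-style full booking
    print(f"reserved {frac:.0%}: mean overtime {mean_overtime(frac):.1f} min/day")
```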
109. Istasy P, Lee WS, Iansavichene A, Upshur R, Gyawali B, Burkell J, Sadikovic B, Lazo-Langner A, Chin-Yee B. The Impact of Artificial Intelligence on Health Equity in Oncology: Scoping Review. J Med Internet Res 2022; 24:e39748. [PMID: 36005841; PMCID: PMC9667381; DOI: 10.2196/39748]
Abstract
BACKGROUND The field of oncology is at the forefront of advances in artificial intelligence (AI) in health care, providing an opportunity to examine the early integration of these technologies in clinical research and patient care. Hope that AI will revolutionize health care delivery and improve clinical outcomes has been accompanied by concerns about the impact of these technologies on health equity. OBJECTIVE We aimed to conduct a scoping review of the literature to address the question, "What are the current and potential impacts of AI technologies on health equity in oncology?" METHODS Following PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines for scoping reviews, we systematically searched MEDLINE and Embase electronic databases from January 2000 to August 2021 for records engaging with key concepts of AI, health equity, and oncology. We included all English-language articles that engaged with the 3 key concepts. Articles were analyzed qualitatively for themes pertaining to the influence of AI on health equity in oncology. RESULTS Of the 14,011 records, 133 (0.95%) identified from our review were included. We identified 3 general themes in the literature: the use of AI to reduce health care disparities (58/133, 43.6%), concerns surrounding AI technologies and bias (16/133, 12.1%), and the use of AI to examine biological and social determinants of health (55/133, 41.4%). A total of 3% (4/133) of articles focused on many of these themes. CONCLUSIONS Our scoping review revealed 3 main themes on the impact of AI on health equity in oncology, which relate to AI's ability to help address health disparities, its potential to mitigate or exacerbate bias, and its capability to help elucidate determinants of health. Gaps in the literature included a lack of discussion of ethical challenges with the application of AI technologies in low- and middle-income countries, lack of discussion of problems of bias in AI algorithms, and a lack of justification for the use of AI technologies over traditional statistical methods to address specific research questions in oncology. Our review highlights a need to address these gaps to ensure a more equitable integration of AI in cancer research and clinical practice. The limitations of our study include its exploratory nature, its focus on oncology as opposed to all health care sectors, and its analysis of solely English-language articles.
Affiliations
- Paul Istasy, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada; Rotman Institute of Philosophy, Western University, London, ON, Canada
- Wen Shen Lee, Department of Pathology & Laboratory Medicine, Schulich School of Medicine, Western University, London, ON, Canada
- Ross Upshur, Division of Clinical Public Health, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada; Bridgepoint Collaboratory for Research and Innovation, Lunenfeld Tanenbaum Research Institute, Sinai Health System, Toronto, ON, Canada
- Bishal Gyawali, Division of Cancer Care and Epidemiology, Department of Oncology, Queen's University, Kingston, ON, Canada; Division of Cancer Care and Epidemiology, Department of Public Health Sciences, Queen's University, Kingston, ON, Canada
- Jacquelyn Burkell, Faculty of Information and Media Studies, Western University, London, ON, Canada
- Bekim Sadikovic, Department of Pathology & Laboratory Medicine, Schulich School of Medicine, Western University, London, ON, Canada
- Alejandro Lazo-Langner, Division of Hematology, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Benjamin Chin-Yee, Rotman Institute of Philosophy, Western University, London, ON, Canada; Division of Hematology, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada; Division of Hematology, Department of Medicine, London Health Sciences Centre, London, ON, Canada
110. Nguyen A, Ngo HN, Hong Y, Dang B, Nguyen BPT. Ethical principles for artificial intelligence in education. Educ Inf Technol 2022; 28:4221-4241. [PMID: 36254344; PMCID: PMC9558020; DOI: 10.1007/s10639-022-11316-w]
Abstract
The advancement of artificial intelligence in education (AIED) has the potential to transform the educational landscape and influence the role of all involved stakeholders. In recent years, the applications of AIED have been gradually adopted to progress our understanding of students' learning and enhance learning performance and experience. However, the adoption of AIED has led to increasing ethical risks and concerns regarding several aspects such as personal data and learner autonomy. Despite the recent announcement of guidelines for ethical and trustworthy AIED, the debate revolves around the key principles underpinning ethical AIED. This paper aims to explore whether there is a global consensus on ethical AIED by mapping and analyzing international organizations' current policies and guidelines. In this paper, we first introduce the opportunities offered by AI in education and potential ethical issues. We then conduct a thematic analysis to conceptualize and establish a set of ethical principles by examining and synthesizing relevant ethical policies and guidelines for AIED. We discuss each principle and associated implications for relevant educational stakeholders, including students, teachers, technology developers, policymakers, and institutional decision-makers. The proposed set of ethical principles is expected to serve as a framework to inform and guide educational stakeholders in the development and deployment of ethical and trustworthy AIED as well as catalyze future development of related impact studies in the field.
Affiliations
- Andy Nguyen, Learning & Educational Technology Research Unit (LET), University of Oulu, Oulu, Finland
- Ha Ngan Ngo, Faculty of Education, Victoria University of Wellington, Wellington, New Zealand
- Yvonne Hong, School of Information Management, Victoria University of Wellington, Wellington, New Zealand
- Belle Dang, Learning & Educational Technology Research Unit (LET), University of Oulu, Oulu, Finland
- Bich-Phuong Thi Nguyen, Faculty of English Language Teacher Education, VNU University of Languages and International Studies, Hanoi, Vietnam
111. Data governance functions to support responsible data stewardship in pediatric radiology research studies using artificial intelligence. Pediatr Radiol 2022; 52:2111-2119. [PMID: 35790559; DOI: 10.1007/s00247-022-05427-2]
Abstract
The integration of human and machine intelligence promises to profoundly change the practice of medicine. The rapidly increasing adoption of artificial intelligence (AI) solutions highlights its potential to streamline physician work and optimize clinical decision-making, also in the field of pediatric radiology. Large imaging databases are necessary for training, validating and testing these algorithms. To better promote data accessibility in multi-institutional AI-enabled radiologic research, these databases centralize the large volumes of data required to effect accurate models and outcome predictions. However, such undertakings must consider the sensitivity of patient information and therefore utilize requisite data governance measures to safeguard data privacy and security, to recognize and mitigate the effects of bias and to promote ethical use. In this article we define data stewardship and data governance, review their key considerations and applicability to radiologic research in the pediatric context, and consider the associated best practices along with the ramifications of poorly executed data governance. We summarize several adaptable data governance frameworks and describe strategies for their implementation in the form of distributed and centralized approaches to data management.
112. Feature importance in machine learning models: A fuzzy information fusion approach. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.053]
113. Lin FPY, Salih OS, Scott N, Jameson MB, Epstein RJ. Development and Validation of a Machine Learning Approach Leveraging Real-World Clinical Narratives as a Predictor of Survival in Advanced Cancer. JCO Clin Cancer Inform 2022; 6:e2200064. [DOI: 10.1200/cci.22.00064]
Abstract
PURPOSE Predicting short-term mortality in patients with advanced cancer remains challenging. Whether digitalized clinical text can be used to build models to enhance survival prediction in this population is unclear. MATERIALS AND METHODS We conducted a single-centered retrospective cohort study in patients with advanced solid tumors. Clinical correspondence authored by oncologists at the first patient encounter was extracted from the electronic medical records. Machine learning (ML) models were trained using narratives from the derivation cohort before being tested on a temporal validation cohort at the same site. Performance was benchmarked against Eastern Cooperative Oncology Group performance status (PS), comparing ML models alone (comparison 1) or in combination with PS (comparison 2), assessed by areas under receiver operating characteristic curves (AUCs) for predicting vital status at 11 time points from 2 to 52 weeks. RESULTS ML models were built on the derivation cohort (4,791 patients from 2001 to April 2017) and tested on the validation cohort of 726 patients (May 2017-June 2019). In 441 patients (61%) in whom clinical narratives were available and PS was documented, ML models outperformed the predictivity of PS (mean AUC improvement 0.039, P < .001; comparison 1). Inclusion of both clinical text and PS in ML models resulted in further improvement in prediction accuracy over PS, with a mean AUC improvement of 0.050 (P < .001; comparison 2); the AUC was > 0.80 at all assessed time points for models incorporating clinical text. Exploratory analysis of oncologists' narratives revealed recurring descriptors correlating with survival, including referral patterns, mobility, physical functions, and concomitant medications. CONCLUSION Applying ML to oncologists' narratives with or without including the patient's PS significantly improved survival prediction to 12 months, suggesting the utility of clinical text in building prognostic support tools.
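A minimal sketch of the general recipe, free text in, survival-status probability and AUC out, is shown below using a TF-IDF plus logistic regression pipeline. The tiny synthetic corpus and labels are assumptions for illustration; the study's actual models and features are not reproduced here.

```python
# Hedged sketch: scoring clinical narratives for a binary survival outcome.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

notes = [
    "independently mobile, tolerating treatment well, good performance status",
    "bed-bound, significant weight loss, commenced on opioid analgesia",
    # ...in the real study, thousands of first-encounter oncology letters
] * 50
died_within_26_weeks = [0, 1] * 50  # synthetic labels aligned with the notes

X_train, X_test, y_train, y_test = train_test_split(
    notes, died_within_26_weeks, test_size=0.3, random_state=0)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # unigram + bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.2f}")
```

In practice, structured predictors such as PS can be concatenated with the text features, which mirrors the paper's "text plus PS" comparison.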
Affiliations
- Frank Po-Yen Lin, Kinghorn Centre for Clinical Genomics, Garvan Institute of Medical Research, Darlinghurst, Australia; NHMRC Clinical Trials Centre, Sydney University, Camperdown, Australia; Department of Medical Oncology, Waikato Hospital, Hamilton, New Zealand; School of Clinical Medicine, University of New South Wales, Sydney, Australia
- Osama S.M. Salih, Department of Medical Oncology, Waikato Hospital, Hamilton, New Zealand; Auckland City Hospital, Auckland, New Zealand
- Nina Scott, Waikato Clinical Campus, University of Auckland, Hamilton, New Zealand
- Michael B. Jameson, Department of Medical Oncology, Waikato Hospital, Hamilton, New Zealand; Waikato Clinical Campus, University of Auckland, Hamilton, New Zealand
- Richard J. Epstein, School of Clinical Medicine, University of New South Wales, Sydney, Australia; Cancer Research Division, Garvan Institute of Medical Research, Sydney, Australia; New Hope Cancer Centre, Beijing United Hospital, Beijing, China
114. Lareyre F, Behrendt CA, Chaudhuri A, Ayache N, Delingette H, Raffort J. Big Data and Artificial Intelligence in Vascular Surgery: Time for Multidisciplinary Cross-Border Collaboration. Angiology 2022; 73:697-700. [PMID: 35815537; DOI: 10.1177/00033197221113146]
Affiliations
- Fabien Lareyre, Department of Vascular Surgery, Hospital of Antibes Juan-les-Pins, Antibes, France; Université Côte d'Azur, Inserm U1065, C3M, Nice, France
- Christian-Alexander Behrendt, Research Group GermanVasc, Department of Vascular Medicine, University Heart and Vascular Centre UKE Hamburg, University Medical Centre Hamburg-Eppendorf, Hamburg, Germany
- Arindam Chaudhuri, Bedfordshire - Milton Keynes Vascular Centre, Bedfordshire Hospitals NHS Foundation Trust, Bedford, UK
- Nicholas Ayache, Université Côte d'Azur, Inria, EPIONE Team, Sophia Antipolis, France; Université Côte d'Azur 3IA Institute, France
- Hervé Delingette, Université Côte d'Azur, Inria, EPIONE Team, Sophia Antipolis, France; Université Côte d'Azur 3IA Institute, France
- Juliette Raffort, Université Côte d'Azur, Inserm U1065, C3M, Nice, France; Université Côte d'Azur 3IA Institute, France; Department of Clinical Biochemistry, University Hospital of Nice, Nice, France
115. Liao F, Adelaine S, Afshar M, Patterson BW. Governance of Clinical AI applications to facilitate safe and equitable deployment in a large health system: Key elements and early successes. Front Digit Health 2022; 4:931439. [PMID: 36093386; PMCID: PMC9448877; DOI: 10.3389/fdgth.2022.931439]
Abstract
One of the key challenges in successful deployment and meaningful adoption of AI in healthcare is health system-level governance of AI applications. Such governance is critical not only for patient safety and accountability by a health system, but also to foster the clinician trust that improves adoption and facilitates meaningful health outcomes. In this case study, we describe the development of such a governance structure at University of Wisconsin Health (UWH) that provides oversight of AI applications from assessment of validity and user acceptability through safe deployment with continuous monitoring for effectiveness. Our structure leverages a multi-disciplinary steering committee along with project-specific sub-committees. Members of the committee formulate a multi-stakeholder perspective spanning informatics, data science, clinical operations, ethics, and equity. Our structure includes guiding principles that provide tangible parameters for endorsement of both initial deployment and ongoing usage of AI applications. The committee is tasked with ensuring principles of interpretability, accuracy, and fairness across all applications. To operationalize these principles, we provide a value stream to apply the principles of AI governance at different stages of clinical implementation. This structure has enabled effective clinical adoption of AI applications. Effective governance has provided several outcomes: (1) a clear institutional structure for oversight and endorsement; (2) a path towards successful deployment that encompasses technologic, clinical, and operational considerations; (3) a process for ongoing monitoring to ensure the solution remains acceptable as clinical practice and disease prevalence evolve; and (4) incorporation of guidelines for the ethical and equitable use of AI applications.
Affiliations
- Frank Liao, BerbeeWalsh Department of Emergency Medicine, UW-Madison, Madison, WI, United States; Department of Information Services, UW Health, Madison, WI, United States
- Sabrina Adelaine, Department of Information Services, UW Health, Madison, WI, United States
- Majid Afshar, Department of Medicine, UW-Madison, Madison, WI, United States; Department of Biostatistics and Medical Informatics, UW-Madison, Madison, WI, United States
- Brian W. Patterson, BerbeeWalsh Department of Emergency Medicine, UW-Madison, Madison, WI, United States; Department of Information Services, UW Health, Madison, WI, United States; Department of Biostatistics and Medical Informatics, UW-Madison, Madison, WI, United States; Department of Industrial and Systems Engineering, UW-Madison, Madison, WI, United States
116. Kanakaraj P, Ramadass K, Bao S, Basford M, Jones LM, Lee HH, Xu K, Schilling KG, Carr JJ, Terry JG, Huo Y, Sandler KL, Newton AT, Landman BA. Workflow Integration of Research AI Tools into a Hospital Radiology Rapid Prototyping Environment. J Digit Imaging 2022; 35:1023-1033. [PMID: 35266088; PMCID: PMC9485498; DOI: 10.1007/s10278-022-00601-2]
Abstract
The field of artificial intelligence (AI) in medical imaging is undergoing explosive growth, and Radiology is a prime target for innovation. The American College of Radiology Data Science Institute has identified more than 240 specific use cases where AI could be used to improve clinical practice. In this context, thousands of potential methods are developed by research labs and industry innovators. Deploying AI tools within a clinical enterprise, even on limited retrospective evaluation, is complicated by security and privacy concerns. Thus, innovation must be weighed against the substantive resources required for local clinical evaluation. To reduce barriers to AI validation while maintaining rigorous security and privacy standards, we developed the AI Imaging Incubator. The AI Imaging Incubator serves as a DICOM storage destination within a clinical enterprise where images can be directed for novel research evaluation under Institutional Review Board approval. AI Imaging Incubator is controlled by a secure HIPAA-compliant front end and provides access to a menu of AI procedures captured within network-isolated containers. Results are served via a secure website that supports research and clinical data formats. Deployment of new AI approaches within this system is streamlined through a standardized application programming interface. This manuscript presents case studies of the AI Imaging Incubator applied to randomizing lung biopsies on chest CT, liver fat assessment on abdomen CT, and brain volumetry on head MRI.
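The storage-destination pattern described above can be sketched with the open-source pynetdicom library: a small service accepts DICOM C-STORE requests, saves each instance, and hands it to a placeholder dispatch hook. This is an illustrative sketch under stated assumptions (pynetdicom/pydicom installed, port 11112 free, a hypothetical run_ai_tool hook), not the AI Imaging Incubator's actual implementation.

```python
# Minimal sketch of a DICOM storage destination that routes studies to AI tools.
from pathlib import Path
from pynetdicom import AE, evt, AllStoragePresentationContexts

INCOMING = Path("incoming")
INCOMING.mkdir(exist_ok=True)

def run_ai_tool(path):
    # Placeholder for dispatching the stored study to a network-isolated
    # AI container (e.g., via a job queue or a container orchestration API).
    print(f"queued for AI evaluation: {path}")

def handle_store(event):
    ds = event.dataset
    ds.file_meta = event.file_meta
    path = INCOMING / f"{ds.SOPInstanceUID}.dcm"
    ds.save_as(path, write_like_original=False)
    run_ai_tool(path)
    return 0x0000  # DICOM "Success" status

ae = AE(ae_title="AI_INCUBATOR")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```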
Affiliations
- Shunxing Bao, Computer Science, Vanderbilt University, Nashville, TN, USA
- Melissa Basford, Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA
- Laura M. Jones, Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA
- Ho Hin Lee, Computer Science, Vanderbilt University, Nashville, TN, USA
- Kaiwen Xu, Computer Science, Vanderbilt University, Nashville, TN, USA
- Kurt G. Schilling, Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- John Jeffrey Carr, Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- James Gregory Terry, Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Yuankai Huo, Computer Science, Vanderbilt University, Nashville, TN, USA; Data Science Institute, Vanderbilt University, Nashville, TN, USA
- Kim Lori Sandler, Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Allen T. Newton, Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Bennett A. Landman, Computer Science, Vanderbilt University, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Electrical Engineering, Vanderbilt University, Nashville, TN, USA; Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Data Science Institute, Vanderbilt University, Nashville, TN, USA
117. Choudhury A. Toward an Ecologically Valid Conceptual Framework for the Use of Artificial Intelligence in Clinical Settings: Need for Systems Thinking, Accountability, Decision-making, Trust, and Patient Safety Considerations in Safeguarding the Technology and Clinicians. JMIR Hum Factors 2022; 9:e35421. [PMID: 35727615; PMCID: PMC9257623; DOI: 10.2196/35421]
Abstract
The health care management and the medical practitioner literature lack a descriptive conceptual framework for understanding the dynamic and complex interactions between clinicians and artificial intelligence (AI) systems. As most of the existing literature has been investigating AI's performance and effectiveness from a statistical (analytical) standpoint, there is a lack of studies ensuring AI's ecological validity. In this study, we derived a framework that focuses explicitly on the interaction between AI and clinicians. The proposed framework builds upon well-established human factors models such as the technology acceptance model and expectancy theory. The framework can be used to perform quantitative and qualitative analyses (mixed methods) to capture how clinician-AI interactions may vary based on human factors such as expectancy, workload, trust, cognitive variables related to absorptive capacity and bounded rationality, and concerns for patient safety. If leveraged, the proposed framework can help to identify factors influencing clinicians' intention to use AI and, consequently, improve AI acceptance and address the lack of AI accountability while safeguarding the patients, clinicians, and AI technology. Overall, this paper discusses the concepts, propositions, and assumptions of the multidisciplinary decision-making literature, constituting a sociocognitive approach that extends the theories of distributed cognition and, thus, will account for the ecological validity of AI.
Affiliations
- Avishek Choudhury, Industrial and Management Systems Engineering, Benjamin M Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, WV, United States
118. Schneider J, Abraham R, Meske C, Vom Brocke J. Artificial Intelligence Governance For Businesses. Inf Syst Manag 2022. [DOI: 10.1080/10580530.2022.2085825]
Affiliations
- Johannes Schneider, Institute of Information Systems, University of Liechtenstein, Vaduz, Liechtenstein
- Rene Abraham, Institute of Information Systems, University of Liechtenstein, Vaduz, Liechtenstein
- Jan Vom Brocke, Institute of Information Systems, University of Liechtenstein, Vaduz, Liechtenstein
119. Marotta A. When AI Is Wrong: Addressing Liability Challenges in Women's Healthcare. J Comput Inf Syst 2022. [DOI: 10.1080/08874417.2022.2089773]
120. Sharma M, Savage C, Nair M, Larsson I, Svedberg P, Nygren JM. Artificial Intelligence Applications in Health Care Practice: A Scoping Review. J Med Internet Res 2022; 24:e40238. [PMID: 36197712; PMCID: PMC9582911; DOI: 10.2196/40238]
Abstract
BACKGROUND Artificial intelligence (AI) is often heralded as a potential disruptor that will transform the practice of medicine. The amount of data collected and available in health care, coupled with advances in computational power, has contributed to advances in AI and an exponential growth of publications. However, the development of AI applications does not guarantee their adoption into routine practice. There is a risk that despite the resources invested, benefits for patients, staff, and society will not be realized if AI implementation is not better understood. OBJECTIVE The aim of this study was to explore how the implementation of AI in health care practice has been described and researched in the literature by answering 3 questions: What are the characteristics of research on implementation of AI in practice? What types and applications of AI systems are described? What characteristics of the implementation process for AI systems are discernible? METHODS A scoping review was conducted of MEDLINE (PubMed), Scopus, Web of Science, CINAHL, and PsycINFO databases to identify empirical studies of AI implementation in health care since 2011, in addition to snowball sampling of selected reference lists. Using Rayyan software, we screened titles and abstracts and selected full-text articles. Data from the included articles were charted and summarized. RESULTS Of the 9218 records retrieved, 45 (0.49%) articles were included. The articles cover diverse clinical settings and disciplines; most (32/45, 71%) were published recently, were from high-income countries (33/45, 73%), and were intended for care providers (25/45, 56%). AI systems are predominantly intended for clinical care, particularly clinical care pertaining to patient-provider encounters. More than half (24/45, 53%) possess no action autonomy but rather support human decision-making. The focus of most research was on establishing the effectiveness of interventions (16/45, 35%) or related to technical and computational aspects of AI systems (11/45, 24%). Focus on the specifics of implementation processes does not yet seem to be a priority in research, and the use of frameworks to guide implementation is rare. CONCLUSIONS Our current empirical knowledge derives from implementations of AI systems with low action autonomy and approaches common to implementations of other types of information systems. To develop a specific and empirically based implementation framework, further research is needed on the more disruptive types of AI systems being implemented in routine care and on aspects unique to AI implementation in health care, such as building trust, addressing transparency issues, developing explainable and interpretable solutions, and addressing ethical concerns around privacy and data protection.
Affiliations
- Malvika Sharma, Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Medical Management Centre, Stockholm, Sweden
- Carl Savage, Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Medical Management Centre, Stockholm, Sweden; School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Monika Nair, School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Ingrid Larsson, School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Petra Svedberg, School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens M Nygren, School of Health and Welfare, Halmstad University, Halmstad, Sweden
121. Leclercq C, Witt H, Hindricks G, Katra RP, Albert D, Belliger A, Cowie MR, Deneke T, Friedman P, Haschemi M, Lobban T, Lordereau I, McConnell MV, Rapallini L, Samset E, Turakhia MP, Singh JP, Svennberg E, Wadhwa M, Weidinger F. Wearables, telemedicine, and artificial intelligence in arrhythmias and heart failure: Proceedings of the European Society of Cardiology Cardiovascular Round Table. Europace 2022; 24:1372-1383. [PMID: 35640917; DOI: 10.1093/europace/euac052]
Abstract
Digital technology is now an integral part of medicine. Tools for detection, screening, diagnosis, and monitoring of health-related parameters have improved patient care and enabled individuals to identify issues leading to better management of their own health. Wearable technologies have integrated sensors and can measure physical activity, heart rate and rhythm, and glucose and electrolytes. For individuals at risk, wearables or other devices may be useful for early detection of atrial fibrillation or sub-clinical states of cardiovascular disease, disease management of cardiovascular diseases such as hypertension and heart failure, and lifestyle modification. Health data are available from a multitude of sources, namely clinical, laboratory, and imaging data, genetic profiles, wearables, implantable devices, patient-generated measurements, and social and environmental data. Artificial intelligence is needed to efficiently extract value from this constantly increasing volume and variety of data and to help in its interpretation. Indeed, it is not the acquisition of digital information, but rather the smart handling and analysis, that is challenging. There are multiple stakeholder groups involved in the development and effective implementation of digital tools. While the needs of these groups may vary, they also have many commonalities, including the following: a desire for data privacy and security; the need for understandable, trustworthy, and transparent systems; standardized processes for regulatory and reimbursement assessments; and better ways of rapidly assessing value.
Affiliations
- Christophe Leclercq, Department of Cardiology, CHU Rennes and Inserm, LTSI, University of Rennes, Centre Cardio-Pneumologique, CHU Pontchaillou, Service de Cardiologie et Maladies Vasculaires, 2 Rue Henri le Guilloux, 35000 Rennes, France
- Henning Witt, Department of Internal Medicine, Pfizer, Berlin, Germany
- Gerhard Hindricks, Department of Electrophysiology, Heart Center, Leipzig Heart Institute, Leipzig, Germany
- Rodolphe P Katra, Cardiac Rhythm Management, Research & Technology, Medtronic, Minneapolis, MN, USA
- Andrea Belliger, Institute for Communication and Leadership, and Lucerne University of Education, Lucerne, Switzerland
- Martin R Cowie, Royal Brompton Hospital & School of Cardiovascular Medicine & Sciences, Faculty of Life Sciences & Medicine, King's College London, London, UK
- Thomas Deneke, Clinic for Interventional Electrophysiology and Arrhythmology, Heart Center, Bad Neustadt, Germany
- Paul Friedman, Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, USA
- Mehdiyar Haschemi, Siemens Healthineers, Segment Advanced Therapies, Clinical Segment Cardiovascular Care, Forchheim, Bavaria, Germany
- Trudie Lobban, Atrial Fibrillation Association (AF Association), Arrhythmia Alliance (A-A), and STARS (Syncope Trust And Reflex anoxic Seizures), UK & International
- Michael V McConnell, Fitbit/Google; Division of Cardiovascular Medicine, Stanford School of Medicine, Stanford, CA, USA
- Leonardo Rapallini, Research and Development, Cardiac Diagnostics and Services Business, Medtronic, Minneapolis, MN, USA
- Eigil Samset, GE Healthcare Cardiology Solutions, Chicago, IL, USA
- Mintu P Turakhia, Center for Digital Health, Stanford University School of Medicine, Stanford, CA, USA; VA Palo Alto Health Care System, Palo Alto, CA, USA
- Jagmeet P Singh, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Emma Svennberg, Department of Electrophysiology, Karolinska University Hospital, Karolinska Institutet, Stockholm, Sweden
- Franz Weidinger, 2nd Medical Department with Cardiology and Intensive Care Medicine, Klinik Landstrasse, Vienna, Austria
122. Evolving Optimised Convolutional Neural Networks for Lung Cancer Classification. Signals 2022. [DOI: 10.3390/signals3020018]
Abstract
Detecting pulmonary nodules early significantly contributes to the treatment success of lung cancer. Several deep learning models for medical image analysis have been developed to help classify pulmonary nodules. The design of convolutional neural network (CNN) architectures, however, is still heavily reliant on human domain knowledge. Manually designing CNN solutions has been shown to limit the data's utility by creating a co-dependency on the creator's cognitive bias, which urges the development of smart CNN architecture design solutions. In this paper, an evolutionary algorithm is used to optimise the classification of pulmonary nodules with CNNs. The implementation of a genetic algorithm (GA) for CNN architecture design and hyperparameter optimisation is proposed, which approximates optimal solutions by implementing a range of bio-inspired mechanisms of natural selection and Darwinism. For comparison purposes, two manually designed deep learning models, FractalNet and Deep Local-Global Network, were trained. The results show an outstanding classification accuracy of the fittest GA-CNN (91.3%), which outperformed both manually designed models. The findings indicate that GAs offer advantageous solutions for diagnostic challenges, the development of which may be fully automated in the future using GAs to design and optimise CNN architectures for various clinical applications.
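The GA loop this abstract describes, selection, crossover, and mutation over a CNN design space, can be sketched compactly. The search space and the stand-in fitness function below are assumptions for illustration: a real fitness evaluation would train and validate each candidate CNN on nodule images.

```python
# Minimal sketch of a genetic algorithm over CNN hyperparameters.
import random

SEARCH_SPACE = {
    "n_conv_layers": [2, 3, 4, 5],
    "filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_genome():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(genome):
    # Stand-in for "decode genome to a CNN, train it, return validation accuracy".
    return 0.7 + 0.1 / genome["n_conv_layers"] + 0.05 * (genome["kernel_size"] == 3)

def crossover(a, b):
    # Uniform crossover: each gene comes from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(genome, rate=0.2):
    return {k: (random.choice(v) if random.random() < rate else genome[k])
            for k, v in SEARCH_SPACE.items()}

population = [random_genome() for _ in range(20)]
for generation in range(10):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(ranked) // 2]          # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(len(population) - len(parents))]
    population = parents + children               # elitism: keep the best half

best = max(population, key=fitness)
print("fittest architecture:", best, "fitness:", round(fitness(best), 3))
```

Truncation selection with elitism is one of the simplest GA variants; the paper's exact operators may differ.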
Collapse
|
123
|
Morrison B, Boyle TA, Mahaffey T. Demonstrating institutional trustworthiness: a framework for pharmacy regulatory authorities. Res Social Adm Pharm 2022; 18:3792-3799. [DOI: 10.1016/j.sapharm.2022.04.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 04/26/2022] [Accepted: 04/28/2022] [Indexed: 11/25/2022]
|
124
|
Khoury P, Srinivasan R, Kakumanu S, Ochoa S, Keswani A, Sparks R, Rider NL. A Framework for Augmented Intelligence in Allergy and Immunology Practice and Research—A Work Group Report of the AAAAI Health Informatics, Technology, and Education Committee. THE JOURNAL OF ALLERGY AND CLINICAL IMMUNOLOGY: IN PRACTICE 2022; 10:1178-1188. [PMID: 35300959 PMCID: PMC9205719 DOI: 10.1016/j.jaip.2022.01.047] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 01/19/2022] [Accepted: 01/20/2022] [Indexed: 10/18/2022]
Abstract
Artificial and augmented intelligence (AI) and machine learning (ML) methods are expanding into the health care space. Big data are increasingly used in patient care applications, diagnostics, and treatment decisions in allergy and immunology. How these technologies will be evaluated, approved, and assessed for their impact is an important consideration for researchers and practitioners alike. With the potential of ML, deep learning, natural language processing, and other assistive methods to redefine health care usage, a scaffold for the impact of AI technology on research and patient care in allergy and immunology is needed. An American Academy of Allergy, Asthma & Immunology Health Informatics, Technology, and Education Committee workgroup was convened to perform a scoping review of AI within health care and within the specialty of allergy and immunology, addressing its impacts on practice and research as well as potential challenges, including education, AI governance, and ethical and equity considerations, and potential opportunities for the specialty. There are numerous potential clinical applications of AI in allergy and immunology that range from disease diagnosis to multidimensional data reduction in electronic health records or immunologic datasets. For appropriate application and interpretation of AI, specialists should be involved in the design, validation, and implementation of AI in allergy and immunology. Challenges include the incorporation of data science and bioinformatics into the training of future allergists-immunologists.
Collapse
|
125
|
Artificial Intelligence and Machine Learning Approaches in Digital Education: A Systematic Revision. INFORMATION 2022. [DOI: 10.3390/info13040203] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
The use of artificial intelligence and machine learning techniques across all disciplines has exploded in the past few years, driven by the ever-growing size of data and the changing needs of higher education, such as digital education. Similarly, online educational information systems hold a huge amount of data about students in digital education. This educational data can be used with artificial intelligence and machine learning techniques to improve digital education. This study makes two main contributions. First, the study follows a repeatable and objective process of exploring the literature. Second, the study outlines and explains the literature's themes related to the use of AI-based algorithms in digital education. The study findings present six themes related to the use of machine learning in digital education. The synthesized evidence in this study suggests that machine learning and deep learning algorithms are used in several themes of digital learning. These themes include intelligent tutors, dropout prediction, performance prediction, adaptive and predictive learning and learning styles, analytics and group-based learning, and automation. Artificial neural network and support vector machine algorithms appear to be utilized across all the identified themes, followed by random forest, decision tree, naive Bayes, and logistic regression algorithms.
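As an illustration of the kind of model the review surveys, the sketch below trains a support vector machine for dropout prediction. The features, data, and decision rule are synthetic placeholders for illustration only, not drawn from any study in the review.

```python
# Illustrative sketch of a dropout-prediction classifier (SVM), one of
# the algorithm families the review identifies. Data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical engagement features: logins/week, forum posts, quiz average.
X = rng.normal(size=(500, 3))
# Synthetic labelling rule: low overall engagement raises dropout risk.
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=500) < -0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC())
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```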
Collapse
|
126
|
Basereh M, Caputo A, Brennan R. AccTEF: A Transparency and Accountability Evaluation Framework for Ontology-Based Systems. INTERNATIONAL JOURNAL OF SEMANTIC COMPUTING 2022. [DOI: 10.1142/s1793351x22400013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
This paper proposes a new accountability and transparency evaluation framework (AccTEF) for ontology-based systems (OSysts). AccTEF is based on an analysis of the relation between a set of widely accepted data governance principles, i.e. findable, accessible, interoperable, reusable (FAIR), and the concepts of accountability and transparency. The evaluation of the accountability and transparency of the input ontologies and vocabularies of OSysts is addressed by analyzing the relation between vocabulary and ontology quality evaluation metrics, FAIR, and accountability and transparency concepts. An ontology-based knowledge extraction pipeline is used as a use case in this study. Discovering the relation between FAIR and accountability and transparency helps in identifying and mitigating risks associated with deploying OSysts. It also allows providing design guidelines that help accountability and transparency to be embedded in OSysts. We found that FAIR can be used as a transparency indicator. We also found that the studied vocabulary and ontology quality evaluation metrics do not cover FAIR, accountability, and transparency. Accordingly, we suggest that these concepts be considered as vocabulary and ontology quality evaluation aspects. To the best of our knowledge, this is the first time that the relation between FAIR and the concepts of accountability and transparency has been identified and used for evaluation.
Collapse
Affiliation(s)
- Maryam Basereh
- School of Computing, Dublin City University, Glasnevin Campus, Dublin, Dublin 9, Ireland
| | - Annalina Caputo
- ADAPT Centre, School of Computing, Dublin City University, Glasnevin Campus, Dublin, Ireland
| | - Rob Brennan
- ADAPT Centre, School of Computing, Dublin City University, Dublin, Dublin 9, Ireland
| |
Collapse
|
127
|
Gundersen T, Bærøe K. The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models. SCIENCE AND ENGINEERING ETHICS 2022; 28:17. [PMID: 35362822 PMCID: PMC8975759 DOI: 10.1007/s11948-022-00369-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 02/21/2022] [Indexed: 05/14/2023]
Abstract
This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied in patient care, which we call the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. We argue that the collaborative model is the most promising for covering most AI technology, while the public deliberation model is called for when the technology is recognized as fundamentally transforming the conditions for ethical shared decision-making.
Collapse
Affiliation(s)
- Torbjørn Gundersen
- Centre for the Study of Professions, Oslo Metropolitan University, Oslo, Norway.
| | - Kristine Bærøe
- Department of Global Public Health and Primary Care, University of Bergen, Bergen, Norway
| |
Collapse
|
128
|
Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social considerations of AI-based medical decision-support tools: A scoping review. Int J Med Inform 2022; 161:104738. [PMID: 35299098 DOI: 10.1016/j.ijmedinf.2022.104738] [Citation(s) in RCA: 45] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Revised: 02/11/2022] [Accepted: 03/10/2022] [Indexed: 10/18/2022]
Abstract
INTRODUCTION Recent developments in the field of Artificial Intelligence (AI) applied to healthcare promise to solve many of the existing global issues in advancing human health and managing global health challenges. This comprehensive review aims to surface not only the underlying ethical and legal implications but also the social implications (ELSI) that have been overlooked in recent reviews, yet deserve equal attention at the development stage, and certainly ahead of implementation in healthcare. It is intended to guide various stakeholders (e.g., designers, engineers, clinicians) in addressing the ELSI of AI at the design stage using the Ethics by Design (EbD) approach. METHODS The authors followed a systematised scoping methodology and searched the following databases: PubMed, Web of Science, Ovid, Scopus, IEEE Xplore, EBSCO Search (Academic Search Premier, CINAHL, PsycINFO, APA PsycArticles, ERIC) for the ELSI of AI in healthcare through January 2021. Data were charted and synthesised, and the authors conducted a descriptive and thematic analysis of the collected data. RESULTS After reviewing 1108 papers, 94 were included in the final analysis. Our results show a growing interest in the academic community in the ELSI of AI. The main issues of concern identified in our analysis fall into four main clusters of impact: AI algorithms, physicians, patients, and healthcare in general. The most prevalent issues are patient safety, algorithmic transparency, lack of proper regulation, liability and accountability, impact on the patient-physician relationship, and governance of AI-empowered healthcare. CONCLUSIONS The results of our review confirm the potential of AI to significantly improve patient care, but the drawbacks to its implementation relate to complex ELSI that have yet to be addressed. Most ELSI refer to the impact on, and extension of, the reciprocal and fiduciary patient-physician relationship. With the integration of AI-based decision-making tools, a bilateral patient-physician relationship may shift into a trilateral one.
Collapse
Affiliation(s)
- Anto Čartolovni
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia; School of Medicine, Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia.
| | - Ana Tomičić
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia.
| | - Elvira Lazić Mosler
- School of Medicine, Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia; General Hospital Dr. Ivo Pedišić, Sisak, Croatia.
| |
Collapse
|
129
|
Goergen SK, Frazer HM, Reddy S. Quality use of artificial intelligence in medical imaging: What do radiologists need to know? J Med Imaging Radiat Oncol 2022; 66:225-232. [PMID: 35243782 DOI: 10.1111/1754-9485.13379] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Accepted: 12/14/2021] [Indexed: 11/27/2022]
Abstract
The application of artificial intelligence, and in particular machine learning, to the practice of radiology, is already impacting the quality of imaging care. It will increasingly do so in the future. Radiologists need to be aware of factors that govern the quality of these tools at the development, regulatory and clinical implementation stages in order to make judicious decisions about their use in daily practice.
Collapse
Affiliation(s)
- Stacy K Goergen
- Monash Imaging, Monash Health, Melbourne, Victoria, Australia; Department of Imaging, School of Clinical Sciences, Monash University, Melbourne, Victoria, Australia
| | - Helen ML Frazer
- St Vincent's BreastScreen, St Vincent's Hospital Melbourne, Melbourne, Victoria, Australia; BreastScreen Victoria, Melbourne, Victoria, Australia
| | - Sandeep Reddy
- School of Medicine, Deakin University, Geelong, Victoria, Australia
| |
Collapse
|
130
|
Keshta I. AI-driven IoT for smart health care: Security and privacy issues. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.100903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
|
131
|
Mhlanga D. The Role of Artificial Intelligence and Machine Learning Amid the COVID-19 Pandemic: What Lessons Are We Learning on 4IR and the Sustainable Development Goals. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:1879. [PMID: 35162901 PMCID: PMC8835201 DOI: 10.3390/ijerph19031879] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 01/31/2022] [Accepted: 02/03/2022] [Indexed: 01/20/2023]
Abstract
The COVID-19 pandemic came with disruptions in every aspect of human existence, with all sectors of the world's economies greatly affected. In the health sector, the pandemic halted and reversed progress in health and subsequently shortened life expectancy, especially in developing and underdeveloped nations. On the other hand, machine learning and artificial intelligence contributed a great deal to the handling of the pandemic globally. Therefore, the current study aimed to assess the role played by artificial intelligence and machine learning in addressing the dangers posed by the COVID-19 pandemic, and to extrapolate the lessons for the fourth industrial revolution and the sustainable development goals. Using qualitative content analysis, the results indicated that artificial intelligence and machine learning played an important role in the response to the challenges posed by the COVID-19 pandemic. Artificial intelligence, machine learning, and various digital communication tools through telehealth performed meaningful roles in scaling customer communications, provided a platform for understanding how COVID-19 spreads, and sped up research and treatment of COVID-19, among other notable achievements. The lesson we draw from this is that, despite the disruptions and the rise in the number of unintended consequences of technology in the fourth industrial revolution, the role played by artificial intelligence and machine learning motivates us to conclude that governments must build trust in these technologies to address health problems going forward and to ensure that the sustainable development goals related to good health and wellbeing are achieved.
Collapse
Affiliation(s)
- David Mhlanga
- Faculty of Business and Economics, University of Johannesburg, Johannesburg 2006, South Africa
| |
Collapse
|
132
|
SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Soc Sci Med 2022; 296:114782. [DOI: 10.1016/j.socscimed.2022.114782] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 02/02/2022] [Accepted: 02/03/2022] [Indexed: 12/12/2022]
|
133
|
Eichler GS, Imbert G, Branson J, Balibey R, Laramie J. Democratizing data at Novartis through clinical trial data access. Drug Discov Today 2022; 27:1533-1537. [DOI: 10.1016/j.drudis.2022.02.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Revised: 02/07/2022] [Accepted: 02/22/2022] [Indexed: 11/27/2022]
|
134
|
Oliva A, Grassi S, Vetrugno G, Rossi R, Della Morte G, Pinchi V, Caputo M. Management of Medico-Legal Risks in Digital Health Era: A Scoping Review. Front Med (Lausanne) 2022; 8:821756. [PMID: 35087854 PMCID: PMC8787306 DOI: 10.3389/fmed.2021.821756] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 12/20/2021] [Indexed: 12/11/2022] Open
Abstract
Artificial intelligence needs big data to develop reliable predictions. Therefore, storing and processing health data is essential for the new diagnostic and decisional technologies but, at the same time, represents a risk for privacy protection. This scoping review aims to highlight the medico-legal and ethical implications of the main artificial intelligence applications in healthcare, also focusing on the issues of the COVID-19 era. Starting from a summary of the United States (US) and European Union (EU) regulatory frameworks, the current medico-legal and ethical challenges are discussed in general terms before focusing on the specific issues regarding informed consent, medical malpractice/cognitive biases, automation and interconnectedness of medical devices, diagnostic algorithms and telemedicine. We underline that educating physicians on the management of this (new) kind of clinical risk can enhance compliance with regulations and avoid legal risks for healthcare professionals and institutions.
Collapse
Affiliation(s)
- Antonio Oliva
- Legal Medicine, Department of Health Surveillance and Bioethics, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Simone Grassi
- Legal Medicine, Department of Health Surveillance and Bioethics, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Giuseppe Vetrugno
- Legal Medicine, Department of Health Surveillance and Bioethics, Università Cattolica del Sacro Cuore, Rome, Italy; Risk Management Unit, Fondazione Policlinico A. Gemelli Istituto di Ricovero e Cura a Carattere Scientifico, Rome, Italy
| | - Riccardo Rossi
- Legal Medicine, Department of Health Surveillance and Bioethics, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Gabriele Della Morte
- International Law, Institute of International Studies, Università Cattolica del Sacro Cuore, Milan, Italy
| | - Vilma Pinchi
- Department of Health Sciences, Section of Forensic Medical Sciences, University of Florence, Florence, Italy
| | - Matteo Caputo
- Criminal Law, Department of Juridical Science, Università Cattolica del Sacro Cuore, Milan, Italy
| |
Collapse
|
135
|
Buck C, Doctor E, Hennrich J, Jöhnk J, Eymann T. General Practitioners' Attitudes Toward Artificial Intelligence-Enabled Systems: Interview Study. J Med Internet Res 2022; 24:e28916. [PMID: 35084342 PMCID: PMC8832268 DOI: 10.2196/28916] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Revised: 06/24/2021] [Accepted: 11/21/2021] [Indexed: 01/14/2023] Open
Abstract
Background General practitioners (GPs) care for a large number of patients with various diseases in very short timeframes under high uncertainty. Thus, systems enabled by artificial intelligence (AI) are promising and time-saving solutions that may increase the quality of care. Objective This study aims to understand GPs’ attitudes toward AI-enabled systems in medical diagnosis. Methods We interviewed 18 GPs from Germany between March 2020 and May 2020 to identify determinants of GPs’ attitudes toward AI-based systems in diagnosis. By analyzing the interview transcripts, we identified 307 open codes, which we then further structured to derive relevant attitude determinants. Results We merged the open codes into 21 concepts and finally into five categories: concerns, expectations, environmental influences, individual characteristics, and minimum requirements of AI-enabled systems. Concerns included all doubts and fears of the participants regarding AI-enabled systems. Expectations reflected GPs’ thoughts and beliefs about expected benefits and limitations of AI-enabled systems in terms of GP care. Environmental influences included influences resulting from an evolving working environment, key stakeholders’ perspectives and opinions, the available information technology hardware and software resources, and the media environment. Individual characteristics were determinants that describe a physician as a person, including character traits, demographic characteristics, and knowledge. In addition, the interviews also revealed the minimum requirements of AI-enabled systems, which were preconditions that must be met for GPs to contemplate using AI-enabled systems. Moreover, we identified relationships among these categories, which we conflate in our proposed model. Conclusions This study provides a thorough understanding of the perspective of future users of AI-enabled systems in primary care and lays the foundation for successful market penetration. We contribute to the research stream of analyzing and designing AI-enabled systems and the literature on attitudes toward technology and practice by fostering the understanding of GPs and their attitudes toward such systems. Our findings provide relevant information to technology developers, policymakers, and stakeholder institutions of GP care.
Collapse
Affiliation(s)
- Christoph Buck
- Department of Business & Information Systems Engineering, University of Bayreuth, Bayreuth, Germany; Centre for Future Enterprise, Queensland University of Technology, Brisbane, Australia
| | - Eileen Doctor
- Project Group Business & Information Systems Engineering, Fraunhofer Institute for Applied Information Technology, Bayreuth, Germany
| | - Jasmin Hennrich
- Project Group Business & Information Systems Engineering, Fraunhofer Institute for Applied Information Technology, Bayreuth, Germany
| | - Jan Jöhnk
- Finance & Information Management Research Center, Bayreuth, Germany
| | - Torsten Eymann
- Department of Business & Information Systems Engineering, University of Bayreuth, Bayreuth, Germany; Finance & Information Management Research Center, Bayreuth, Germany
| |
Collapse
|
136
|
Crossnohere NL, Elsaid M, Paskett J, Bose-Brill S, Bridges JFP. Guidelines for artificial intelligence in medicine: A literature review and content analysis of frameworks. J Med Internet Res 2022; 24:e36823. [PMID: 36006692 PMCID: PMC9459836 DOI: 10.2196/36823] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Revised: 06/02/2022] [Accepted: 07/14/2022] [Indexed: 12/15/2022] Open
Abstract
Background Artificial intelligence (AI) is rapidly expanding in medicine despite a lack of consensus on its application and evaluation. Objective We sought to identify current frameworks guiding the application and evaluation of AI for predictive analytics in medicine and to describe the content of these frameworks. We also assessed what stages along the AI translational spectrum (ie, AI development, reporting, evaluation, implementation, and surveillance) the content of each framework has been discussed. Methods We performed a literature review of frameworks regarding the oversight of AI in medicine. The search included key topics such as “artificial intelligence,” “machine learning,” “guidance as topic,” and “translational science,” and spanned the time period 2014-2022. Documents were included if they provided generalizable guidance regarding the use or evaluation of AI in medicine. Included frameworks are summarized descriptively and were subjected to content analysis. A novel evaluation matrix was developed and applied to appraise the frameworks’ coverage of content areas across translational stages. Results Fourteen frameworks are featured in the review, including six frameworks that provide descriptive guidance and eight that provide reporting checklists for medical applications of AI. Content analysis revealed five considerations related to the oversight of AI in medicine across frameworks: transparency, reproducibility, ethics, effectiveness, and engagement. All frameworks include discussions regarding transparency, reproducibility, ethics, and effectiveness, while only half of the frameworks discuss engagement. The evaluation matrix revealed that frameworks were most likely to report AI considerations for the translational stage of development and were least likely to report considerations for the translational stage of surveillance. Conclusions Existing frameworks for the application and evaluation of AI in medicine notably offer less input on the role of engagement in oversight and regarding the translational stage of surveillance. Identifying and optimizing strategies for engagement are essential to ensure that AI can meaningfully benefit patients and other end users.
Collapse
Affiliation(s)
- Norah L Crossnohere
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
- Division of General Internal Medicine, Department of Internal Medicine, The Ohio State University College of Medicine, Columbus, OH, United States
| | - Mohamed Elsaid
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
| | - Jonathan Paskett
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
| | - Seuli Bose-Brill
- Division of General Internal Medicine, Department of Internal Medicine, The Ohio State University College of Medicine, Columbus, OH, United States
| | - John F P Bridges
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
| |
Collapse
|
137
|
Anklam E, Bahl MI, Ball R, Beger RD, Cohen J, Fitzpatrick S, Girard P, Halamoda-Kenzaoui B, Hinton D, Hirose A, Hoeveler A, Honma M, Hugas M, Ishida S, Kass GEN, Kojima H, Krefting I, Liachenko S, Liu Y, Masters S, Marx U, McCarthy T, Mercer T, Patri A, Pelaez C, Pirmohamed M, Platz S, Ribeiro AJS, Rodricks JV, Rusyn I, Salek RM, Schoonjans R, Silva P, Svendsen CN, Sumner S, Sung K, Tagle D, Tong L, Tong W, van den Eijnden-van-Raaij J, Vary N, Wang T, Waterton J, Wang M, Wen H, Wishart D, Yuan Y, Slikker Jr. W. Emerging technologies and their impact on regulatory science. Exp Biol Med (Maywood) 2022; 247:1-75. [PMID: 34783606 PMCID: PMC8749227 DOI: 10.1177/15353702211052280] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
There is an evolving and increasing need to utilize emerging cellular, molecular and in silico technologies and novel approaches for the safety assessment of food, drugs, and personal care products. Convergence of these emerging technologies is also enabling rapid advances and approaches that may impact regulatory decisions and approvals. Although the development of emerging technologies may allow rapid advances in regulatory decision making, there is concern that these new technologies have not been thoroughly evaluated to determine whether they are ready for regulatory application, singly or in combination. The magnitude of these combined technical advances may outpace the ability to assess fitness for purpose and to allow routine application of these new methods for regulatory purposes. There is a need to develop strategies to evaluate the new technologies and determine which ones are ready for regulatory use. The opportunity to apply these potentially faster, more accurate, and cost-effective approaches remains an important goal to facilitate their incorporation into regulatory use. However, without a clear strategy to evaluate emerging technologies rapidly and appropriately, the value of these efforts may go unrecognized or may take longer to realize. It is important for the regulatory science field to keep up with the research in these technically advanced areas and to understand the science behind these new approaches. The regulatory field must understand the critical quality attributes of these novel approaches, and regulators must learn from each other's experience so that workforces can be trained to prepare for emerging global regulatory challenges. Moreover, it is essential that the regulatory community work with technology developers to harness collective capabilities towards developing a strategy for the evaluation of these new and novel assessment tools.
Collapse
Affiliation(s)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | - Reza M Salek
- International Agency for Research on Cancer, France
| | | | | | | | | | | | | | - Li Tong
- Georgia Institute of Technology and Emory University, USA
| | | | | | - Neil Vary
- Canadian Food Inspection Agency, Canada
| | - Tao Wang
- National Medical Products Administration, China
| | | | - May Wang
- Georgia Institute of Technology and Emory University, USA
| | - Hairuo Wen
- National Institutes for Food and Drug Control, China
| | | | | | | |
Collapse
|
138
|
Sikstrom L, Maslej MM, Hui K, Findlay Z, Buchman DZ, Hill SL. Conceptualising fairness: three pillars for medical algorithms and health equity. BMJ Health Care Inform 2022; 29:e100459. [PMID: 35012941 PMCID: PMC8753410 DOI: 10.1136/bmjhci-2021-100459] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Accepted: 12/14/2021] [Indexed: 12/25/2022] Open
Abstract
OBJECTIVES Fairness is a core concept meant to grapple with different forms of discrimination and bias that emerge with advances in Artificial Intelligence (eg, machine learning, ML). Yet, claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic. Studies either measure (mathematically) competing definitions of fairness, and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To advance efforts to operationalise fairness in medicine, we synthesised a broad range of literature. METHODS We conducted an environmental scan of English-language literature on fairness from 1960 to July 31, 2021. The electronic databases Medline, PubMed and Google Scholar were searched, supplemented by additional hand searches. Data from 213 selected publications were analysed using rapid framework analysis. Search and analysis were completed in two rounds: to explore previously identified issues (a priori), as well as those emerging from the analysis (de novo). RESULTS Our synthesis identified 'Three Pillars for Fairness': transparency, impartiality and inclusion. We draw on these insights to propose a multidimensional conceptual framework to guide empirical research on the operationalisation of fairness in healthcare. DISCUSSION We apply the conceptual framework generated by our synthesis to risk assessment in psychiatry as a case study. We argue that any claim to fairness must reflect critical assessment and ongoing social and political deliberation around these three pillars with a range of stakeholders, including patients. CONCLUSION We conclude by outlining areas for further research that would bolster ongoing commitments to fairness and health equity in healthcare.
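As an illustration of the technocratic measurement approach the abstract mentions, the sketch below computes two of the competing mathematical fairness definitions, the demographic parity gap and the equalized odds gap, on synthetic data. The definitions are standard in the ML fairness literature; the data and variable names are placeholders.

```python
# Sketch of two standard group-fairness metrics on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)    # protected attribute (0/1)
y_true = rng.integers(0, 2, size=1000)   # true outcome
y_pred = rng.integers(0, 2, size=1000)   # model prediction

def demographic_parity_gap(y_pred, group):
    # Difference in positive-prediction rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    # Worst-case gap in true/false positive rates between the groups.
    gaps = []
    for outcome in (0, 1):
        mask = y_true == outcome
        rate0 = y_pred[mask & (group == 0)].mean()
        rate1 = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate0 - rate1))
    return max(gaps)

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"equalized odds gap:     {equalized_odds_gap(y_true, y_pred, group):.3f}")
```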
Collapse
Affiliation(s)
- Laura Sikstrom
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Anthropology, University of Toronto, Toronto, Ontario, Canada
| | - Marta M Maslej
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
| | - Katrina Hui
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| | - Zoe Findlay
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
| | - Daniel Z Buchman
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
| | - Sean L Hill
- Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
139
|
Towards Understanding the Usability Attributes of AI-Enabled eHealth Mobile Applications. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2021:5313027. [PMID: 34970424 PMCID: PMC8714331 DOI: 10.1155/2021/5313027] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Revised: 07/30/2021] [Accepted: 11/16/2021] [Indexed: 12/28/2022]
Abstract
Mobile application (app) use is increasingly becoming an essential part of our daily lives. Due to their significant usefulness, people rely on them to perform multiple tasks seamlessly in almost all aspects of everyday life. Similarly, there has been immense progress in artificial intelligence (AI) technology, especially deep learning, computer vision, natural language processing, and robotics. These technologies are now actively being implemented in smartphone apps and healthcare, providing multiple healthcare services. However, several factors affect the usefulness of mobile healthcare apps, and usability is an important one. There are various healthcare apps developed for each specific task, and the success of these apps depends on their performance. This study presents a systematic review of the existing apps and discusses their usability attributes. It highlights the usability models, outlines, and guidelines proposed in previous research for designing apps with improved usability characteristics. Thirty-nine research articles were reviewed and examined to identify the usability attributes, frameworks, and app designs employed. The results showed that satisfaction, efficiency, and learnability are the most important usability attributes to consider when designing eHealth mobile apps. Surprisingly, other significant attributes for healthcare apps, such as privacy and security, were not among the attributes most often indicated in the studies.
Collapse
|
140
|
Murdoch B, Jandura A, Caulfield T. Privacy Considerations in the Canadian Regulation of Commercially-Operated Healthcare Artificial Intelligence. CANADIAN JOURNAL OF BIOETHICS 2022. [DOI: 10.7202/1094696ar] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
|
141
|
Chi WN, Reamer C, Gordon R, Sarswat N, Gupta C, White VanGompel E, Dayiantis J, Morton-Jost M, Ravichandran U, Larimer K, Victorson D, Erwin J, Halasyamani L, Solomonides A, Padman R, Shah NS. Continuous Remote Patient Monitoring: Evaluation of the Heart Failure Cascade Soft Launch. Appl Clin Inform 2021; 12:1161-1173. [PMID: 34965606 PMCID: PMC8716190 DOI: 10.1055/s-0041-1740480] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
OBJECTIVE We report on our experience of deploying a continuous remote patient monitoring (CRPM) study soft launch with structured cascading and escalation pathways for heart failure (HF) patients post-discharge. The lessons learned from the soft launch were used to modify and fine-tune the workflow process and study protocol. METHODS This soft launch was conducted at NorthShore University HealthSystem's Evanston Hospital from December 2020 to March 2021. Patients were provided with non-invasive wearable biosensors that continuously collect ambulatory physiological data, and a study phone that collects patient-reported outcomes. The physiological data are analyzed by machine learning algorithms, potentially identifying physiological perturbation in HF patients. Alerts from this algorithm may be cascaded with other patient status data to inform home health nurses' (HHNs') management via a structured protocol. HHNs review the monitoring platform daily. If the patient's status meets specific criteria, HHNs perform assessments and escalate patient cases to the HF team for further guidance on early intervention. RESULTS We enrolled five patients into the soft launch. Four participants adhered to study activities. Two out of five patients were readmitted, one due to HF and one due to infection. Miscommunications and protocol gaps were observed and noted for protocol amendment. The study team adopted an organizational development method from change management theory to reconfigure the study protocol. CONCLUSION We sought to automate the monitoring aspects of post-discharge care by aligning a new technology that generates streaming data from a wearable device with a complex, multi-provider workflow, producing a novel protocol through iterative design, implementation, and evaluation to monitor post-discharge HF patients. CRPM with a structured escalation and telemonitoring protocol shows potential to maintain patients in their home environment and reduce HF-related readmissions. Our results suggest that further education to engage and empower frontline workers using advanced technology is essential to scale up the approach.
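The sketch below illustrates, in schematic form, how an ML alert might be cascaded with other patient-status data before escalation. All names, thresholds, and criteria are hypothetical assumptions for illustration; they are not the study's actual protocol.

```python
# Hypothetical sketch of a cascading-alert rule of the kind the study
# protocol describes: an ML risk score from wearable data is combined
# with other patient-status criteria before escalation to the HF team.
from dataclasses import dataclass

@dataclass
class PatientStatus:
    ml_risk_score: float        # physiological-perturbation score from the ML model
    weight_gain_kg_48h: float   # home measurement over the past 48 hours
    symptomatic: bool           # e.g., dyspnoea or oedema reported

def nurse_action(status: PatientStatus) -> str:
    # Threshold of 0.5 is an illustrative assumption, not the study's.
    if status.ml_risk_score < 0.5:
        return "routine daily review of monitoring platform"
    # Alert fires: cascade with other patient-status data.
    if status.weight_gain_kg_48h >= 2.0 or status.symptomatic:
        return "escalate to heart failure team for early intervention"
    return "home health nurse performs in-home assessment"

print(nurse_action(PatientStatus(0.8, 2.5, False)))
# -> escalate to heart failure team for early intervention
```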
Collapse
Affiliation(s)
- Wei Ning Chi
- Outcomes Research Network, NorthShore University HealthSystem, Evanston, Illinois, United States. Address for correspondence: Wei Ning Chi, MBBS, MPH, Research Institute, 1001 University Pl, Evanston, IL 60201, United States
| | - Courtney Reamer
- Department of Medicine, NorthShore University HealthSystem, Evanston, Illinois, United States
| | - Robert Gordon
- Department of Medicine, NorthShore University HealthSystem, Evanston, Illinois, United States
| | - Nitasha Sarswat
- Department of Medicine, NorthShore University HealthSystem, Evanston, Illinois, United States; Department of Medicine, University of Chicago Pritzker School of Medicine, Chicago, Illinois, United States
| | - Charu Gupta
- Department of Medicine, NorthShore University HealthSystem, Evanston, Illinois, United States
| | - Emily White VanGompel
- Department of Family Medicine, NorthShore University HealthSystem, Evanston, Illinois, United States; Department of Family Medicine, University of Chicago Pritzker School of Medicine, Chicago, Illinois, United States
| | - Julie Dayiantis
- Home and Hospice Services, NorthShore University HealthSystem, Evanston, Illinois, United States
| | - Melissa Morton-Jost
- Home and Hospice Services, NorthShore University HealthSystem, Evanston, Illinois, United States
| | - Urmila Ravichandran
- Health Information Technology, NorthShore University HealthSystem, Evanston, Illinois, United States
| | - Karen Larimer
- Clinical Department, physIQ, Inc., Chicago, Illinois, United States
| | - David Victorson
- Northwestern University Feinberg School of Medicine, Evanston, Illinois, United States
| | - John Erwin
- Department of Medicine, NorthShore University HealthSystem, Evanston, Illinois, United States; Department of Medicine, University of Chicago Pritzker School of Medicine, Chicago, Illinois, United States
| | - Lakshmi Halasyamani
- Department of Family Medicine, NorthShore University HealthSystem, Evanston, Illinois, United States; Department of Family Medicine, University of Chicago Pritzker School of Medicine, Chicago, Illinois, United States
| | - Anthony Solomonides
- Outcomes Research Network, NorthShore University HealthSystem, Evanston, Illinois, United States
| | - Rema Padman
- The Heinz College of Information Systems and Public Policy, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
| | - Nirav S. Shah
- Department of Medicine, NorthShore University HealthSystem, Evanston, Illinois, United States; Department of Medicine, University of Chicago Pritzker School of Medicine, Chicago, Illinois, United States
| |
Collapse
|
142
|
Möllmann NR, Mirbabaie M, Stieglitz S. Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations. Health Informatics J 2021; 27:14604582211052391. [PMID: 34935557 DOI: 10.1177/14604582211052391] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
The application of artificial intelligence (AI) not only yields advantages for healthcare but also raises several ethical questions. Extant research on ethical considerations of AI in digital health is quite sparse and a holistic overview is lacking. A systematic literature review searching across 853 peer-reviewed journals and conferences yielded 50 relevant articles, categorized into five major ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. The ethical landscape of AI in digital health is portrayed, including a snapshot to guide future development. The status quo highlights potential areas with little empirical but much-needed research. Less explored areas with remaining ethical questions are validated and guide scholars' efforts through an overview of the addressed ethical principles and the intensity with which they have been studied, including correlations. Practitioners can understand the novel questions AI raises, eventually leading to properly regulated implementations, and further comprehend that society is on its way from supporting technologies to autonomous decision-making systems.
Collapse
Affiliation(s)
- Nicholas RJ Möllmann
- Research Group Digital Communication and Transformation, University of Duisburg-Essen, Duisburg, Germany
| | - Milad Mirbabaie
- Faculty of Business Administration and Economics, Paderborn University, Paderborn, Germany
| | - Stefan Stieglitz
- Research Group Digital Communication and Transformation, University of Duisburg-Essen, Duisburg, Germany
| |
Collapse
|
143
|
Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients' Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study. J Med Internet Res 2021; 23:e25856. [PMID: 34842535 PMCID: PMC8663518 DOI: 10.2196/25856] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 05/04/2021] [Accepted: 10/26/2021] [Indexed: 12/24/2022] Open
Abstract
Background It is believed that artificial intelligence (AI) will be an integral part of health care services in the near future and will be incorporated into several aspects of clinical care such as prognosis, diagnostics, and care planning. Thus, many technology companies have invested in producing AI clinical applications. Patients are one of the most important beneficiaries who potentially interact with these technologies and applications; thus, patients' perceptions may affect the widespread use of clinical AI. Patients need to be assured that AI clinical applications will not harm them, and that they will instead benefit from using AI technology for health care purposes. Although human-AI interaction can enhance health care outcomes, possible dimensions of concerns and risks should be addressed before its integration with routine clinical care. Objective The main objective of this study was to examine how potential users (patients) perceive the benefits, risks, and use of AI clinical applications for their health care purposes and how their perceptions may differ when faced with three health care service encounter scenarios. Methods We designed a 2×3 experiment that crossed the type of health condition (ie, acute or chronic) with three different types of clinical encounters between patients and physicians (ie, AI clinical applications as substituting technology, AI clinical applications as augmenting technology, and no AI as a traditional in-person visit). We used an online survey to collect data from 634 individuals in the United States. Results The interactions between the types of health care service encounters and health conditions significantly influenced individuals' perceptions of privacy concerns, trust issues, communication barriers, concerns about transparency in regulatory standards, liability risks, benefits, and intention to use across the six scenarios. We found no significant differences among scenarios regarding perceptions of performance risk and social biases. Conclusions The results imply that incompatibility with instrumental, technical, ethical, or regulatory values can be a reason for rejecting AI applications in health care. Thus, there are still various risks associated with implementing AI applications in diagnostics and treatment recommendations for patients with both acute and chronic illnesses. The concerns are also evident if the AI applications are used as a recommendation system under physician experience, wisdom, and control. Prior to the widespread rollout of AI, more studies are needed to identify the challenges that may raise concerns for implementing and using AI applications. This study could provide researchers and managers with critical insights into the determinants of individuals' intention to use AI clinical applications. Regulatory agencies should establish normative standards and evaluation guidelines for implementing AI in health care in cooperation with health care institutions. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethical factors of AI clinical applications.
Collapse
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
| | - Tala Mirzaei
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
| | - Spurthy Dharanikota
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
| |
Collapse
|
144
|
Sentiment Analysis in Twitter Based on Knowledge Graph and Deep Learning Classification. ELECTRONICS 2021. [DOI: 10.3390/electronics10222739] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The traditional way to address the problem of sentiment classification is based on machine learning techniques; however, these models are not able to grasp all the richness of the text that comes from different social media, personal web pages, blogs, etc., ignoring the semantics of the text. Knowledge graphs offer a way to extract structured knowledge from images and texts in order to facilitate their semantic analysis. This work proposes a new hybrid approach for Sentiment Analysis based on Knowledge Graphs and Deep Learning techniques to identify the sentiment polarity (positive or negative) in short documents, such as posts on Twitter. In this proposal, tweets are represented as graphs; then, graph similarity metrics and a Deep Learning classification algorithm are applied to produce sentiment predictions. This approach facilitates the traceability and interpretability of the classification results, thanks to the integration of the Local Interpretable Model-agnostic Explanations (LIME) model at the end of the pipeline. LIME raises trust in predictive models, since the model is no longer a black box. Uncovering the black box allows understanding and interpreting how the network distinguishes between sentiment polarities. Each phase of the proposed approach, comprising pre-processing, graph construction, dimensionality reduction, graph similarity, sentiment prediction, and interpretability steps, is described. The proposal is compared with character n-gram embedding-based Deep Learning models for Sentiment Analysis. Results show that the proposal is able to outperform classical n-gram models, with a recall of up to 89% and an F1-score of 88%.
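As an illustration of where LIME sits at the end of such a pipeline, the sketch below wraps a sentiment classifier with LimeTextExplainer. The TF-IDF plus logistic regression model is a simplified stand-in for the paper's knowledge-graph similarity and deep learning components; only the interpretability step is faithful to the technique named in the abstract.

```python
# Sketch of the interpretability step only: LIME wrapped around a
# sentiment classifier. The classifier here is a simplified stand-in.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great match today", "terrible service, never again",
               "loving this weather", "worst film I have seen"]
train_labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("what a great terrible day",
                                 clf.predict_proba, num_features=3)
print(exp.as_list())  # word-level contributions to the prediction
```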
Collapse
|
145
|
Weinert L, Müller J, Svensson L, Heinze O. The perspective of IT decision makers on factors influencing adoption and implementation of AI-technologies in 40 German Hospitals: Descriptive Analysis. JMIR Med Inform 2021; 10:e34678. [PMID: 35704378 PMCID: PMC9244653 DOI: 10.2196/34678] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Revised: 02/15/2022] [Accepted: 03/11/2022] [Indexed: 02/06/2023] Open
Abstract
Background New artificial intelligence (AI) tools are being developed at high speed. However, strategies and practical experiences surrounding the adoption and implementation of AI in health care are lacking. This is likely because of the high implementation complexity of AI, legacy IT infrastructure, and unclear business cases, all of which complicate AI adoption. Research has recently started to identify the factors influencing the AI readiness of organizations. Objective This study aimed to investigate the factors influencing AI readiness as well as possible barriers to AI adoption and implementation in German hospitals. We also assessed the status quo regarding the dissemination of AI tools in hospitals. We focused on IT decision makers, a seldom studied but highly relevant group. Methods We created a web-based survey based on recent AI readiness and implementation literature. Participants were identified through a publicly accessible database and contacted via email or invitational leaflets sent by mail, in some cases accompanied by a telephonic prenotification. The survey responses were analyzed using descriptive statistics. Results We contacted 609 possible participants, and our database recorded 40 completed surveys. Most participants agreed or rather agreed with the statement that AI would be relevant in the future, both in Germany (37/40, 93%) and in their own hospital (36/40, 90%). Participants were asked whether their hospitals used or planned to use AI technologies. Of the 40 participants, 26 (65%) answered "yes." Most AI technologies were used or planned for patient care, followed by biomedical research, administration, and logistics and central purchasing. The most important barriers to AI were lack of resources (staff, knowledge, and financial). Relevant possible opportunities for using AI were increases in efficiency owing to time-saving effects, competitive advantages, and increases in quality of care. Most AI tools in use or in planning have been developed with external partners. Conclusions Few tools have been implemented in routine care, and many hospitals do not use or plan to use AI in the future. This can likely be explained by missing or unclear business cases or by the need for a modern IT infrastructure to integrate AI tools in a usable manner. These shortcomings complicate decision-making and resource attribution. As most AI technologies already in use were developed in cooperation with external partners, these relationships should be fostered. IT decision makers should assess their hospitals' readiness for AI individually, with a focus on resources. Further research should continue to monitor the dissemination of AI tools and readiness factors to determine whether improvements can be made over time. This monitoring is especially important with regard to government-supported investments in AI technologies that could alleviate financial burdens. Qualitative studies with hospital IT decision makers should be conducted to further explore the reasons for slow AI adoption.
Collapse
Affiliation(s)
- Lina Weinert
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Julia Müller
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Laura Svensson
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Oliver Heinze
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| |
Collapse
|
146
|
Bélisle-Pipon JC, Couture V, Roy MC, Ganache I, Goetghebeur M, Cohen IG. What Makes Artificial Intelligence Exceptional in Health Technology Assessment? Front Artif Intell 2021; 4:736697. [PMID: 34796318 PMCID: PMC8594317 DOI: 10.3389/frai.2021.736697] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Accepted: 09/23/2021] [Indexed: 12/20/2022] Open
Abstract
The application of artificial intelligence (AI) may revolutionize the healthcare system, leading to enhanced efficiency by automating routine tasks and decreasing health-related costs, broadening access to healthcare delivery, targeting patient needs more precisely, and assisting clinicians in their decision-making. For these benefits to materialize, governments and health authorities must regulate AI and conduct appropriate health technology assessment (HTA). Many authors have highlighted that AI health technologies (AIHT) challenge traditional evaluation and regulatory processes. To inform and support HTA organizations and regulators in adapting their processes to AIHTs, we conducted a systematic review of the literature on the challenges posed by AIHTs in HTA and health regulation. Our research question was: What makes artificial intelligence exceptional in HTA? The current body of literature appears to portray AIHTs as being exceptional to HTA. This exceptionalism is expressed along five dimensions: 1) AIHTs' distinctive features; 2) their systemic impacts on health care and the health sector; 3) the increased expectations towards AI in health; 4) the new ethical, social and legal challenges that arise from deploying AI in the health sector; and 5) the new evaluative constraints that AI poses to HTA. Thus, AIHTs are perceived as exceptional because of their technological characteristics and potential impacts on society at large. As AI implementation by governments and health organizations carries risks of generating new, and amplifying existing, challenges, there are strong arguments for taking into consideration the exceptional aspects of AIHTs, especially as their impacts on the healthcare system will be far greater than those of drugs and medical devices. As AIHTs begin to be increasingly introduced into the health care sector, there is a window of opportunity for HTA agencies and scholars to consider AIHTs' exceptionalism and to work towards deploying only clinically, economically, and socially acceptable AIHTs in the health care system.
Collapse
Affiliation(s)
| | | | | | - Isabelle Ganache
- Institut National D’Excellence en Santé et en Services Sociaux (INESSS), Montréal, Québec, QC, Canada
| | - Mireille Goetghebeur
- Institut National D’Excellence en Santé et en Services Sociaux (INESSS), Montréal, Québec, QC, Canada
| | | |
Collapse
|
147
|
Lang M, Bernier A, Knoppers BM. AI in Cardiovascular Imaging: "Unexplainable" Legal and Ethical Challenges? Can J Cardiol 2021; 38:225-233. [PMID: 34737036 DOI: 10.1016/j.cjca.2021.10.009] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 10/28/2021] [Accepted: 10/28/2021] [Indexed: 02/08/2023] Open
Abstract
Nowhere is the influence of artificial intelligence (AI) likely to be more profoundly felt than in healthcare, from patient triage and diagnosis to surgery and follow-up. Over the medium term, these impacts will be especially acute in the cardiovascular imaging context, in which AI models are already successfully performing at roughly human levels of accuracy and efficiency in certain applications. Yet, the adoption of unexplainable AI systems for cardiovascular imaging still raises significant legal and ethical challenges. We focus in particular on challenges posed by the unexplainable character of deep learning and other forms of sophisticated AI modelling used for cardiovascular imaging by briefly outlining the systems being developed in this space, describing how they work, and considering how they might generate outputs that are not reviewable by physicians or system programmers. We suggest that this unexplainability presents two specific ethico-legal concerns: (1) difficulty for health regulators; and (2) confusion about the assignment of liability for error or fault in the use of AI systems. We suggest that addressing these concerns is critical for ensuring AI's successful implementation in cardiovascular imaging.
Collapse
Affiliation(s)
- Michael Lang
- Academic Associate, Centre of Genomics and Policy, McGill University Faculty of Medicine and Health Sciences
- Alexander Bernier
- Academic Associate, Centre of Genomics and Policy, McGill University Faculty of Medicine and Health Sciences
- Bartha Maria Knoppers
- Full Professor, Canada Research Chair in Law and Medicine and Director of the Centre of Genomics and Policy, McGill University Faculty of Medicine and Health Sciences
Collapse
|
148
|
He S, Leanse LG, Feng Y. Artificial intelligence and machine learning assisted drug delivery for effective treatment of infectious diseases. Adv Drug Deliv Rev 2021; 178:113922. [PMID: 34461198 DOI: 10.1016/j.addr.2021.113922] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Revised: 07/14/2021] [Accepted: 08/09/2021] [Indexed: 12/23/2022]
Abstract
In the era of antimicrobial resistance, the prevalence of multidrug-resistant microorganisms that resist conventional antibiotic treatment has steadily increased. It is therefore unquestionable that infectious diseases are a significant global burden that urgently requires innovative treatment strategies. Emerging studies have demonstrated that artificial intelligence (AI) can transform drug delivery and thereby promote effective treatment of infectious diseases. In this review, we evaluate the significance, essential principles, and popular tools of AI in drug delivery for infectious disease treatment. Specifically, we focus on the achievements and key findings of current research and on the applications of AI to drug delivery throughout the antimicrobial treatment process, with an emphasis on drug development, treatment regimen optimization, drug delivery system and administration route design, and drug delivery outcome prediction. Finally, the challenges of AI in drug delivery for infectious disease treatment, along with current solutions and future perspectives, are presented and discussed.
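As a concrete, if simplified, illustration of the drug delivery outcome prediction task the review surveys, the sketch below trains a random forest on synthetic formulation and regimen features. Every feature name, the synthetic data, and the model choice are assumptions made for demonstration; none of these details come from the cited review.

```python
# Illustrative sketch only: predicting a binary treatment outcome from
# hypothetical formulation and regimen features. The features, data,
# and model choice are assumptions, not details from the cited review.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: nanoparticle size (nm), drug loading (%),
# dose (mg/kg), dosing interval (h)
X = np.column_stack([
    rng.normal(120, 30, n),
    rng.uniform(5, 25, n),
    rng.uniform(1, 20, n),
    rng.choice([8, 12, 24], n),
])
# Synthetic label loosely tied to dose and particle size
y = ((X[:, 2] > 8) & (X[:, 0] < 150)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```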
Collapse
Affiliation(s)
- Sheng He
- Boston Children's Hospital, Harvard Medical School, Harvard University, Boston, MA, USA.
- Leon G Leanse
- Massachusetts General Hospital, Harvard Medical School, Harvard University, Boston, MA, USA
- Yanfang Feng
- Massachusetts General Hospital, Harvard Medical School, Harvard University, Boston, MA, USA.
Collapse
|
149
|
Reddy S, Rogers W, Makinen VP, Coiera E, Brown P, Wenzel M, Weicken E, Ansari S, Mathur P, Casey A, Kelly B. Evaluation framework to guide implementation of AI systems into healthcare settings. BMJ Health Care Inform 2021; 28:bmjhci-2021-100444. [PMID: 34642177 PMCID: PMC8513218 DOI: 10.1136/bmjhci-2021-100444] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Accepted: 09/30/2021] [Indexed: 01/10/2023] Open
Abstract
Objectives To date, many artificial intelligence (AI) systems have been developed in healthcare, but adoption has been limited. This may be due to inappropriate or incomplete evaluation and a lack of internationally recognised standards for evaluating AI. To have confidence in the generalisability of AI systems in healthcare and to enable their integration into workflows, a practical yet comprehensive instrument is needed to assess the translational aspects of available AI systems. Currently available evaluation frameworks for AI in healthcare focus on reporting and regulatory aspects but offer little guidance on assessing translational aspects of AI systems, such as their functional, utility, and ethical components. Methods To address this gap and create a framework for assessing real-world systems, an international team developed a translationally focused evaluation framework termed ‘Translational Evaluation of Healthcare AI (TEHAI)’. A critical review of the literature assessed existing evaluation and reporting frameworks and their gaps. Next, using health technology evaluation and translational principles, reporting components were identified for consideration. These were independently reviewed for consensus inclusion in a final framework by an international panel of eight experts. Results TEHAI includes three main components: capability, utility and adoption. The emphasis on translational and ethical features of model development and deployment distinguishes TEHAI from other evaluation instruments. Specifically, the evaluation components can be applied at any stage of the development and deployment of an AI system. Discussion One major limitation of existing reporting or evaluation frameworks is their narrow focus. Because of its strong foundation in translational research models and its emphasis on safety, translational value, and generalisability, TEHAI has not only a theoretical basis but also practical application to assessing real-world systems. Conclusion The translational research approach used to develop TEHAI should see it applied not just to the evaluation of clinical AI in research settings, but more broadly to guide the evaluation of working clinical systems.
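To make the framework's structure concrete, here is a minimal sketch of how a TEHAI-style assessment could be represented in code. Only the three component names (capability, utility, adoption) come from the abstract; the criteria, the 0-3 scoring scale, and all class and field names are hypothetical illustrations, not the published instrument.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: TEHAI's three main components per the abstract are
# capability, utility and adoption; the criteria and 0-3 ordinal scale
# below are placeholders, not the published scoring scheme.

@dataclass
class Criterion:
    name: str
    score: int = 0  # assumed scale: 0 (absent) to 3 (strong)

@dataclass
class TEHAIAssessment:
    system_name: str
    capability: list = field(default_factory=list)
    utility: list = field(default_factory=list)
    adoption: list = field(default_factory=list)

    def component_score(self, criteria):
        # Normalised component score in [0, 1]
        return sum(c.score for c in criteria) / (3 * len(criteria)) if criteria else 0.0

    def summary(self):
        return {
            "capability": self.component_score(self.capability),
            "utility": self.component_score(self.utility),
            "adoption": self.component_score(self.adoption),
        }

assessment = TEHAIAssessment(
    system_name="sepsis-risk-model",  # hypothetical system under evaluation
    capability=[Criterion("internal validity", 3), Criterion("external validity", 1)],
    utility=[Criterion("clinical safety", 2)],
    adoption=[Criterion("workflow integration", 1)],
)
print(assessment.summary())
```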
Collapse
Affiliation(s)
- Sandeep Reddy
- School of Medicine, Deakin University, Geelong, Victoria, Australia
- Wendy Rogers
- Department of Philosophy, Macquarie University, Sydney, New South Wales, Australia
- Ville-Petteri Makinen
- South Australian Health and Medical Research Institute, Adelaide, South Australia, Australia
- Enrico Coiera
- Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia
- Pieta Brown
- Orion Health, Auckland, New Zealand
- Markus Wenzel
- Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI), Berlin, Germany
- Eva Weicken
- Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI), Berlin, Germany
- Saba Ansari
- Deakin University Faculty of Health, Geelong, Victoria, Australia
- Piyush Mathur
- Anesthesiology Institute, Cleveland Clinic, Cleveland, Ohio, USA
- Aaron Casey
- South Australian Health and Medical Research Institute, Adelaide, South Australia, Australia
- Blair Kelly
- Deakin University Faculty of Health, Geelong, Victoria, Australia
Collapse
|
150
|
Choudhury A, Perumalla S. Detecting breast cancer using artificial intelligence: Convolutional neural network. Technol Health Care 2021; 29:33-43. [PMID: 32444590 DOI: 10.3233/thc-202226] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
BACKGROUND One of the most widely established approaches to planning cancer treatment relies on a pathologist's ability to visually inspect the appearance of biomarkers on invasive tumor tissue sections. Lately, deep learning techniques have radically enriched the ability of computers to identify objects in images, fostering the prospect of fully automated computer-aided diagnosis. Given the notable role of nuclear structure in cancer detection, AI's pattern-recognition ability can expedite the diagnostic process. OBJECTIVE In this study, we propose and implement an image classification technique to identify breast cancer. METHODS We implement a convolutional neural network (CNN) on a breast cancer image data set to identify invasive ductal carcinoma (IDC). RESULT The proposed CNN model, after data augmentation, yielded 78.4% classification accuracy; 16% of IDC(-) samples were misclassified as positive (false positives), whereas 25% of IDC(+) samples were misclassified as negative (false negatives). CONCLUSION The results show that it is feasible to employ a convolutional neural network for breast cancer classification tasks. However, a common problem for any artificial intelligence algorithm is its dependence on the data set; the performance of the proposed model may therefore not generalize.
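As a rough illustration of the kind of model the study describes, the sketch below builds a small CNN with data augmentation for binary IDC classification in Keras. The 50x50-pixel RGB patch size reflects the common public IDC histopathology dataset format, but the architecture, augmentation choices, and hyperparameters are assumptions for demonstration, not the authors' configuration.

```python
# Minimal sketch of a CNN for binary IDC classification, assuming 50x50 RGB
# histopathology patches. Architecture and hyperparameters are illustrative,
# not those used in the cited study.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_idc_cnn(input_shape=(50, 50, 3)):
    model = models.Sequential([
        # Augmentation layers are active only during training
        layers.RandomFlip("horizontal_and_vertical", input_shape=input_shape),
        layers.RandomRotation(0.1),
        layers.Rescaling(1.0 / 255),      # map pixel values to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # probability of IDC(+)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_idc_cnn()
model.summary()
# Training would then call model.fit(...) on labeled patch arrays.
```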
Collapse
Affiliation(s)
- Avishek Choudhury
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, USA
- Sunanda Perumalla
- Clinical and Business Intelligence, Integris Health, Oklahoma City, OK, USA
Collapse
|