1
Weber S, Wyszynski M, Godefroid M, Plattfaut R, Niehaves B. How do medical professionals make sense (or not) of AI? A social-media-based computational grounded theory study and an online survey. Comput Struct Biotechnol J 2024;24:146-159. PMID: 38434249; PMCID: PMC10904922; DOI: 10.1016/j.csbj.2024.02.009.
Abstract
To investigate the opinions and attitudes of medical professionals towards adopting AI-enabled healthcare technologies in their daily work, we used a mixed-methods approach. Study 1 employed a qualitative computational grounded theory approach to analyze 181 Reddit threads from the r/medicine subreddit. Using an unsupervised machine learning clustering method, we identified three key themes: (1) consequences of AI, (2) the physician-AI relationship, and (3) a proposed way forward. Reddit posts related to the first two themes indicated that medical professionals' fear of being replaced by AI and skepticism toward AI played a major role in the discussions. Moreover, the results suggest that this fear is driven by little or only moderate knowledge about AI. Posts related to the third theme focused on factual discussions about how AI and medicine have to be designed for AI to become broadly adopted in health care. Study 2 quantitatively examined the relationship between the fear of AI, knowledge about AI, and medical professionals' intention to use AI-enabled technologies in more detail. Results based on a sample of 223 medical professionals who participated in the online survey revealed that the intention to use AI technologies increases with knowledge about AI and that this effect is moderated by the fear of being replaced by AI.
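The abstract names the clustering step without detailing it; as a rough illustration, here is a minimal sketch of unsupervised theme discovery of this kind, assuming a TF-IDF plus k-means pipeline (the placeholder thread texts and all parameters are illustrative, not the study's actual setup):

```python
# Sketch: unsupervised theme discovery over forum posts, in the spirit of the
# study's computational grounded theory step. The authors' actual pipeline is
# not specified in this abstract; TF-IDF + k-means is an assumed stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Replace with the scraped r/medicine thread texts (illustrative placeholders).
threads = [
    "AI will read scans faster than radiologists",
    "Will AI replace physicians in primary care?",
    "Regulation and validation are needed before clinical adoption",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(threads)

# Three clusters, mirroring the three themes reported in the paper.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Inspect the top terms per cluster; theme labels are then assigned manually.
terms = vectorizer.get_feature_names_out()
for c, center in enumerate(km.cluster_centers_):
    top = center.argsort()[::-1][:5]
    print(f"Cluster {c}:", ", ".join(terms[i] for i in top))
```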
Affiliation(s)
- Sebastian Weber
- University of Bremen, Digital Public, Bibliothekstr. 1, 28359 Bremen, Germany
- Marc Wyszynski
- University of Bremen, Digital Public, Bibliothekstr. 1, 28359 Bremen, Germany
- Marie Godefroid
- University of Siegen, Information Systems, Kohlbettstr. 15, 57072 Siegen, Germany
- Ralf Plattfaut
- University of Duisburg-Essen, Information Systems and Transformation Management, Universitätsstr. 9, 45141 Essen, Germany
- Bjoern Niehaves
- University of Bremen, Digital Public, Bibliothekstr. 1, 28359 Bremen, Germany
2
Li Q, Qin Y. AI in medical education: medical student perception, curriculum recommendations and design suggestions. BMC Med Educ 2023;23:852. PMID: 37946176; PMCID: PMC10637014; DOI: 10.1186/s12909-023-04700-8.
Abstract
Medical AI has transformed modern medicine and created a new environment for future doctors. However, medical education has failed to keep pace with these advances, and it is essential to provide systematic education on medical AI to current undergraduate and postgraduate medical students. To address this issue, our study used the Unified Theory of Acceptance and Use of Technology (UTAUT) model to identify key factors that influence the acceptance of and intention to use medical AI. We collected data from 1,243 undergraduate and postgraduate students from 13 universities and 33 hospitals; 54.3% reported prior experience using medical AI. Our findings indicate that postgraduate medical students have a higher level of awareness of medical AI than undergraduate students. The intention to use medical AI is positively associated with factors such as performance expectancy, habit, hedonic motivation, and trust. Therefore, future medical education should prioritize promoting students' performance in training, and courses should be designed to be both easy to learn and engaging, ensuring that students are equipped with the necessary skills to succeed in their future medical careers.
Affiliation(s)
- Qianying Li
- Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai, China
- Yunhao Qin
- Department of Orthopedics, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University, Shanghai, China
3
Papp L, Haberl D, Ecsedi B, Spielvogel CP, Krajnc D, Grahovac M, Moradi S, Drexler W. DEBI-NN: Distance-encoding biomorphic-informational neural networks for minimizing the number of trainable parameters. Neural Netw 2023;167:517-532. PMID: 37690213; DOI: 10.1016/j.neunet.2023.08.026.
Abstract
Modern artificial intelligence (AI) approaches mainly rely on neural network (NN) or deep NN methodologies. However, these approaches require large amounts of training data, given that the number of their trainable parameters has a polynomial relationship to their neuron counts. This property makes deep NNs challenging to apply in fields operating with small, albeit representative, datasets, such as healthcare. In this paper, we propose a novel neural network architecture that trains the spatial positions of neural soma and axon pairs, where weights are calculated from the axon-soma distances of connected neurons. We refer to this method as the distance-encoding biomorphic-informational (DEBI) neural network. This concept significantly reduces the number of trainable parameters compared to conventional neural networks. We demonstrate that DEBI models can yield comparable predictive performance on tabular and imaging datasets while requiring a fraction of the trainable parameters of conventional NNs, resulting in a highly scalable solution.
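The abstract does not give the exact distance-to-weight mapping; a minimal sketch of the core idea, assuming an inverse-distance function, shows why the parameter count scales with neuron positions rather than with connections:

```python
# Sketch of the distance-encoding idea: trainable parameters are the 3D soma
# and axon positions, and connection weights are derived from axon-soma
# distances. The inverse-distance mapping below is an assumption made for
# illustration, not the paper's published function.
import torch

class DEBILayer(torch.nn.Module):
    def __init__(self, n_in: int, n_out: int, dim: int = 3):
        super().__init__()
        # One axon terminal per input neuron and one soma per output neuron:
        # dim parameters per neuron instead of n_in * n_out stored weights.
        self.axon = torch.nn.Parameter(torch.randn(n_in, dim))
        self.soma = torch.nn.Parameter(torch.randn(n_out, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pairwise axon-soma distances -> weights (closer pairs couple stronger).
        dist = torch.cdist(self.axon, self.soma)   # (n_in, n_out)
        weights = 1.0 / (1.0 + dist)               # assumed mapping
        return x @ weights

layer = DEBILayer(n_in=64, n_out=10)
out = layer(torch.randn(8, 64))  # batch of 8 -> output shape (8, 10)
```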
Affiliation(s)
- Laszlo Papp
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- David Haberl
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- Boglarka Ecsedi
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Georgia Institute of Technology, Atlanta, GA, USA
- Denis Krajnc
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Marko Grahovac
- Division of Nuclear Medicine, Medical University of Vienna, Vienna, Austria
- Sasan Moradi
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Wolfgang Drexler
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
4
Wang Y, Song Y, Ma Z, Han X. Multidisciplinary considerations of fairness in medical AI: A scoping review. Int J Med Inform 2023;178:105175. PMID: 37595374; DOI: 10.1016/j.ijmedinf.2023.105175.
Abstract
INTRODUCTION Artificial intelligence (AI) technology has developed significantly in recent years. The fairness of medical AI is of great concern because of its direct relation to human life and health. This review analyzes the existing research literature on fairness in medical AI from the perspectives of computer science, medical science, and social science (including law and ethics). Its objective is to examine the similarities and differences in the understanding of fairness, explore influencing factors, and investigate potential measures to implement fairness in medical AI across the English and Chinese literature. METHODS This study employed a scoping review methodology and searched the following databases: Web of Science, MEDLINE, PubMed, OVID, CNKI, and WANFANG Data, among others, for literature on fairness issues in medical AI published through February 2023. The search used keywords such as "artificial intelligence," "machine learning," "medical," "algorithm," "fairness," "decision-making," and "bias." The collected data were charted, synthesized, and subjected to descriptive and thematic analysis. RESULTS After reviewing 468 English papers and 356 Chinese papers, 53 and 42, respectively, were included in the final analysis. Our results show that the three disciplines differ significantly in their research on the core issues. Data is the foundation affecting medical AI fairness, in addition to algorithmic bias and human bias. Legal, ethical, and technological measures all promote the implementation of medical AI fairness. CONCLUSIONS Our review indicates a consensus across multidisciplinary perspectives on the importance of data fairness as the foundation for achieving fairness in medical AI. However, there are substantial discrepancies in core aspects such as the concept of fairness in medical AI, its influencing factors, and its implementation measures. Consequently, future research should facilitate interdisciplinary discussion to bridge the cognitive gaps between fields and enhance the practical implementation of fairness in medical AI.
Affiliation(s)
- Yue Wang
- School of Law, Xi'an Jiaotong University, No. 28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China
- Yaxin Song
- School of Law, Xi'an Jiaotong University, No. 28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China
- Zhuo Ma
- School of Law, Xi'an Jiaotong University, No. 28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China
- Xiaoxue Han
- Xi'an Jiaotong University Library, No. 28, Xianning West Road, Xi'an, Shaanxi, 710049, PR China
5
Ali H, Qadir J, Alam T, Househ M, Shah Z. Revolutionizing Healthcare with Foundation AI Models. Stud Health Technol Inform 2023;305:469-470. PMID: 37387067; DOI: 10.3233/shti230533.
Abstract
ChatGPT is a foundation artificial intelligence (AI) model that has opened up new opportunities in digital healthcare. In particular, it can serve as a co-pilot tool for doctors in the interpretation, summarization, and completion of reports, and it builds on access to the large body of literature and knowledge available on the internet; ChatGPT has, for example, generated acceptable responses to medical examination questions. Hence, it offers the possibility of enhancing healthcare accessibility, scalability, and effectiveness. Nonetheless, ChatGPT is vulnerable to inaccuracies, false information, and bias. This paper briefly describes the potential of foundation AI models to transform future healthcare, presenting ChatGPT as an example tool.
Affiliation(s)
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Junaid Qadir
- Department of Computer Engineering, Qatar University, Doha, Qatar
- Tanvir Alam
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
6
Curchoe CL. Unlock the algorithms: Regulation of adaptive algorithms in reproduction. Fertil Steril 2023:S0015-0282(23)00523-X. PMID: 37217091; DOI: 10.1016/j.fertnstert.2023.05.152.
Abstract
In the USA, the Food and Drug Administration (FDA) plans to regulate AI/ML software systems as medical devices to improve the quality, consistency, and transparency of their performance across specific age, racial, and ethnic groups. Embryology procedures do not fall under the federal regulation of "CLIA 88": they are not tests per se, but cell-based procedures. Likewise, many add-on procedures related to embryology, such as preimplantation genetic testing (PGT), are considered "lab-developed tests" and are not subject to FDA regulation at present. Should predictive artificial intelligence algorithms in reproduction be considered medical devices or lab-developed tests? Certain indications certainly carry higher risk, such as medication dosage, where the consequences of mismanagement could be severe, whereas others, such as embryo selection, are non-interventional (selecting from a patient's own embryos does not change the course of treatment) and present little to no risk. The regulatory landscape is complex, involving data diversity and performance, real-world evidence, cybersecurity, and post-market surveillance.
7
Rueda J, Rodríguez JD, Jounou IP, Hortal-Carmona J, Ausín T, Rodríguez-Arias D. "Just" accuracy? Procedural fairness demands explainability in AI-based medical resource allocations. AI Soc 2022:1-12. PMID: 36573157; PMCID: PMC9769482; DOI: 10.1007/s00146-022-01614-9.
Abstract
The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has argued that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps to maximize patients' benefits and optimizes limited resources. However, we claim that the opaqueness of the algorithmic black box and its lack of explainability threaten core commitments of procedural fairness, such as accountability, avoidance of bias, and transparency. To illustrate this, we discuss liver transplantation as a case of critical medical resource allocation in which the lack of explainability in AI-based allocation algorithms is procedurally unfair. Finally, we provide a number of ethical recommendations to consider when contemplating the use of unexplainable algorithms in the distribution of health-related resources.
Affiliation(s)
- Jon Rueda
- Department of Philosophy 1, University of Granada, Granada, Spain
- FiloLab Scientific Unit of Excellence, University of Granada, Granada, Spain
- Txetxu Ausín
- Institute of Philosophy, Spanish National Research Council, Madrid, Spain
- David Rodríguez-Arias
- Department of Philosophy 1, University of Granada, Granada, Spain
- FiloLab Scientific Unit of Excellence, University of Granada, Granada, Spain
8
Rammuni Silva RS, Fernando P. Effective Utilization of Multiple Convolutional Neural Networks for Chest X-Ray Classification. SN Comput Sci 2022;3:492. PMID: 36188757; PMCID: PMC9514177; DOI: 10.1007/s42979-022-01390-9.
Abstract
Among the numerous medical imaging modalities available, radiography stands out for its ability to diagnose diseases and conditions, including life-threatening ones; its affordability is another main reason for its prevalence. Chest radiography holds even higher importance, as it covers a critical area of the human body. However, interpreting a chest radiograph can be challenging and is usually done by an experienced radiologist to ensure accurate results. This raises two main issues: first, in some countries experienced radiologists are scarce, and second, human error in diagnosis is inevitable. Researchers have attempted to use artificial intelligence to address both issues, and most existing work incorporates convolutional neural networks for this purpose. This paper presents a novel way of parallelizing multiple convolutional neural network architectures for chest X-ray classification and comprehensively evaluates existing architectures against their parallelized counterparts produced by our method. We used four large-scale datasets, including a non-medical one, to evaluate our models, and achieved better accuracy on 9 out of 13 and 11 out of 14 labels on our two main evaluation datasets. The paper concludes by presenting the system's limitations and possible future improvements.
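The abstract does not describe the fusion rule; here is a minimal sketch of running multiple CNN backbones in parallel on the same chest X-ray and averaging their per-label probabilities (the averaging rule, choice of backbones, and label count are assumptions, not the paper's published design):

```python
# Sketch: parallel CNN backbones with averaged multilabel predictions.
# Averaging independent sigmoid outputs is an illustrative fusion rule.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_LABELS = 14  # assumed label count for a multilabel chest X-ray dataset

# Two standard backbones with their heads resized for multilabel output.
resnet = models.resnet50(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_LABELS)

densenet = models.densenet121(weights=None)
densenet.classifier = nn.Linear(densenet.classifier.in_features, NUM_LABELS)

class ParallelCNNs(nn.Module):
    def __init__(self, backbones):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)

    def forward(self, x):
        # Run each backbone on the same image and average per-label
        # probabilities (independent sigmoids for multilabel classification).
        probs = [torch.sigmoid(b(x)) for b in self.backbones]
        return torch.stack(probs).mean(dim=0)

model = ParallelCNNs([resnet, densenet])
preds = model(torch.randn(2, 3, 224, 224))  # (2, 14) label probabilities
```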
9
Estiri H, Strasser ZH, Rashidian S, Klann JG, Wagholikar KB, McCoy TH, Murphy SN. An Objective Framework for Evaluating Unrecognized Bias in Medical AI Models Predicting COVID-19 Outcomes. J Am Med Inform Assoc 2022;29:1334-1341. PMID: 35511151; PMCID: PMC9277645; DOI: 10.1093/jamia/ocac070.
Abstract
OBJECTIVE The increasing translation of artificial intelligence (AI)/machine learning (ML) models into clinical practice brings an increased risk of direct harm from modeling bias; however, bias remains incompletely measured in many medical AI applications. This article aims to provide a framework for the objective evaluation of medical AI from multiple aspects, focusing on binary classification models. MATERIALS AND METHODS Using data from over 56,000 Mass General Brigham (MGB) patients with confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), we evaluate unrecognized bias in four AI models, developed during the early months of the pandemic in Boston, Massachusetts, that predict risks of hospital admission, ICU admission, mechanical ventilation, and death after a SARS-CoV-2 infection based purely on pre-infection longitudinal medical records. Models were evaluated both retrospectively and prospectively using model-level metrics of discrimination, accuracy, and reliability, and a novel individual-level metric for error. RESULTS We found inconsistent instances of model-level bias in the prediction models. From an individual-level perspective, however, we found almost all models performing with slightly higher error rates for older patients. DISCUSSION While a model can be biased against certain protected groups (i.e., perform worse) in certain tasks, it can at the same time be biased towards another protected group (i.e., perform better). As such, current bias evaluation studies may lack a full depiction of a model's variable effects on its subpopulations. CONCLUSION Only a holistic evaluation, that is, a diligent search for unrecognized bias, can provide enough information for an unbiased judgment of AI bias that can invigorate follow-up investigations into the underlying roots of bias and ultimately effect change.
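The paper's individual-level error metric is not reproduced here; the following is a minimal sketch of the general procedure, stratifying per-patient errors by age band under an assumed absolute-error metric:

```python
# Sketch: compare a model's error rates across subgroups (e.g., age bands).
# The paper's specific individual-level metric is not given in this abstract;
# absolute error between predicted probability and outcome is assumed.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age":    rng.integers(20, 90, 1000),   # illustrative synthetic data
    "y_true": rng.integers(0, 2, 1000),
    "y_prob": rng.random(1000),
})

df["error"] = (df["y_prob"] - df["y_true"]).abs()
df["age_band"] = pd.cut(df["age"], bins=[0, 45, 65, 120],
                        labels=["<45", "45-64", "65+"])

# A consistently higher mean error in one band flags potential bias that
# warrants follow-up investigation into its underlying roots.
print(df.groupby("age_band", observed=True)["error"].agg(["mean", "count"]))
```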
Affiliation(s)
- Hossein Estiri
- Laboratory of Computer Science, Massachusetts General Hospital, Boston, MA 02144, USA
- Department of Medicine, Massachusetts General Hospital, Boston, MA 02114, USA
- Zachary H Strasser
- Laboratory of Computer Science, Massachusetts General Hospital, Boston, MA 02144, USA
- Department of Medicine, Massachusetts General Hospital, Boston, MA 02114, USA
- Jeffrey G Klann
- Laboratory of Computer Science, Massachusetts General Hospital, Boston, MA 02144, USA
- Department of Medicine, Massachusetts General Hospital, Boston, MA 02114, USA
- Research Information Science and Computing, Mass General Brigham, Somerville, MA 02145, USA
- Kavishwar B Wagholikar
- Laboratory of Computer Science, Massachusetts General Hospital, Boston, MA 02144, USA
- Department of Medicine, Massachusetts General Hospital, Boston, MA 02114, USA
- Thomas H McCoy
- Center for Quantitative Health, Massachusetts General Hospital, Boston, MA 02114, USA
- Shawn N Murphy
- Laboratory of Computer Science, Massachusetts General Hospital, Boston, MA 02144, USA
- Research Information Science and Computing, Mass General Brigham, Somerville, MA 02145, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, USA
- Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
10
Zhou X, Ye Q, Yang X, Chen J, Ma H, Xia J, Del Ser J, Yang G. AI-based medical e-diagnosis for fast and automatic ventricular volume measurement in patients with normal pressure hydrocephalus. Neural Comput Appl 2022;35:1-10. PMID: 35228779; PMCID: PMC8866920; DOI: 10.1007/s00521-022-07048-0.
Abstract
Based on CT and MRI images acquired from normal pressure hydrocephalus (NPH) patients, we aim to establish a multimodal, high-performance automatic ventricle segmentation method using machine learning to achieve efficient and accurate automatic measurement of the ventricular volume. First, we extracted brain CT and MRI images of 143 definite NPH patients. Second, we manually labeled the ventricular volume (VV) and intracranial volume (ICV). We then used machine learning methods to extract features and establish an automatic ventricle segmentation model. Finally, we verified the reliability of the model and achieved automatic measurement of VV and ICV. In CT images, the Dice similarity coefficient (DSC), intraclass correlation coefficient (ICC), Pearson correlation, and Bland-Altman analysis of the automatic versus manual segmentation of the VV were 0.95, 0.99, 0.99, and 4.2 ± 2.6, respectively; the corresponding results for the ICV were 0.96, 0.99, 0.99, and 6.0 ± 3.8. The whole process took 3.4 ± 0.3 s. In MRI images, the DSC, ICC, Pearson correlation, and Bland-Altman analysis for the VV were 0.94, 0.99, 0.99, and 2.0 ± 0.6, respectively; the corresponding results for the ICV were 0.93, 0.99, 0.99, and 7.9 ± 3.8. The whole process took 1.9 ± 0.1 s. We have thus established a multimodal, high-performance automatic ventricle segmentation method that achieves efficient and accurate automatic measurement of the ventricular volume of NPH patients. This can help clinicians quickly and accurately assess the state of an NPH patient's ventricles.
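The Dice similarity coefficient reported above has a standard definition, DSC = 2|A ∩ B| / (|A| + |B|); here is a minimal sketch of computing it, together with a mask-derived volume (the voxel size is an illustrative parameter, not taken from the paper):

```python
# Sketch: Dice similarity coefficient between automatic and manual binary
# segmentation masks, plus ventricular volume derived from a mask.
import numpy as np

def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    # DSC = 2|A intersect B| / (|A| + |B|)
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_ml(mask: np.ndarray, voxel_mm3: float = 1.0) -> float:
    # Volume = foreground voxel count * voxel volume (mm^3 -> mL).
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Illustrative random masks standing in for real segmentations.
auto = np.random.rand(64, 64, 64) > 0.5
manual = np.random.rand(64, 64, 64) > 0.5
print(f"DSC = {dice(auto, manual):.3f}, VV = {volume_ml(auto):.1f} mL")
```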
Affiliation(s)
- Xi Zhou
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People’s Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China
- Qinghao Ye
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, USA
- Xiaolin Yang
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People’s Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China
- Jiakun Chen
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People’s Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China
- Haiqin Ma
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People’s Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China
- Jun Xia
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen University, Shenzhen Second People’s Hospital, 3002 SunGang Road West, Shenzhen 518035, Guangdong Province, China
- Javier Del Ser
- University of the Basque Country (UPV/EHU), 48013 Bilbao, Spain
- TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
- Guang Yang
- Royal Brompton Hospital, London, UK
- National Heart and Lung Institute, Imperial College London, London, UK
11
McLennan S, Fiske A, Tigard D, Müller R, Haddadin S, Buyx A. Embedded ethics: a proposal for integrating ethics into the development of medical AI. BMC Med Ethics 2022;23:6. PMID: 35081955; PMCID: PMC8793193; DOI: 10.1186/s12910-022-00746-3.
Abstract
The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this paper, we propose that an ‘embedded ethics’ approach, in which ethicists and developers together address ethical issues via an iterative and continuous process from the outset of development, could be an effective means of integrating robust ethical considerations into the practical development of medical AI.
Affiliation(s)
- Stuart McLennan
- Institute of History and Ethics in Medicine, Technical University of Munich, Ismaninger Straße 22, 81675 Munich, Germany
- Amelia Fiske
- Institute of History and Ethics in Medicine, Technical University of Munich, Ismaninger Straße 22, 81675 Munich, Germany
- Daniel Tigard
- Institute of History and Ethics in Medicine, Technical University of Munich, Ismaninger Straße 22, 81675 Munich, Germany
- Ruth Müller
- Munich Center for Technology in Society, School of Management and School of Life Sciences, Technical University of Munich, Munich, Germany
- Sami Haddadin
- Munich School of Robotics and Machine Intelligence, Technical University of Munich, Munich, Germany
- Alena Buyx
- Institute of History and Ethics in Medicine, Technical University of Munich, Ismaninger Straße 22, 81675 Munich, Germany
- Munich School of Robotics and Machine Intelligence, Technical University of Munich, Munich, Germany
12
Abstract
Although existing work draws attention to a range of obstacles in realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of 'treat like cases alike' and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: (1) it must provide a meta-theory for understanding tradeoffs, entailing that it must be flexible enough to capture diverse species of objection to decisions; (2) it must not appeal to an impartial perspective (neutral data, objective data, or a final arbiter); and (3) it must foreground the way in which judgments of fairness are sensitive to context, i.e., to historical and institutional states of affairs. We argue that a conception of fairness as appropriate concession in the historical iteration of institutional decisions meets these three desiderata. On the basis of this definition, we organize the insights of commentators into a process-structure map of the ethical territory that we hope will bring clarity to computer scientists and ethicists analyzing fair AI while clearing some ground for further technical and philosophical work.
Affiliation(s)
- Ryan van Nood
- Department of Philosophy, Purdue University, 100 N. University Street, West Lafayette, IN 47907, USA
- Christopher Yeomans
- Department of Philosophy, Purdue University, 100 N. University Street, West Lafayette, IN 47907, USA
13
Yoo J, Jun TJ, Kim YH. xECGNet: Fine-tuning attention map within convolutional neural network to improve detection and explainability of concurrent cardiac arrhythmias. Comput Methods Programs Biomed 2021;208:106281. PMID: 34333207; DOI: 10.1016/j.cmpb.2021.106281.
Abstract
BACKGROUND AND OBJECTIVE Detecting abnormal patterns within an electrocardiogram (ECG) is crucial for diagnosing cardiovascular diseases. We start from two unresolved problems in applying deep-learning-based ECG classification models to clinical practice: first, although multiple cardiac arrhythmia (CA) types may co-occur in real life, the majority of previous detection methods have focused on one-to-one relationships between ECG and CA type; and second, it has been difficult to explain how neural-network-based CA classifiers make decisions. We hypothesize that fine-tuning attention maps with regard to all possible combinations of ground-truth (GT) labels will improve both the detection and the interpretability of co-occurring CAs. METHODS To test our hypothesis, we propose an end-to-end convolutional neural network (CNN), xECGNet, that fine-tunes the attention map to resemble the averaged response maps of the GT labels. Fine-tuning is achieved by adding to the objective function a regularization loss between the attention map and the reference (averaged) map. Performance is assessed by F1 score and subset accuracy. RESULTS The main experiment demonstrates that fine-tuning alone significantly improves the model's multilabel subset accuracy from 75.8% to 84.5% compared with the baseline model. xECGNet also shows the highest F1 score, 0.812, and yields a more explainable map that encompasses multiple CA types, compared to other baseline methods. CONCLUSIONS xECGNet tackles both obstacles to the clinical application of CNN-based CA detection models with a simple solution: adding one additional term to the objective function.
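The added regularization term can be written compactly; below is a minimal sketch under assumed choices (MSE as the distance and lambda_reg as its weight), pulling the attention map toward the average of the ground-truth labels' response maps as the abstract describes:

```python
# Sketch of xECGNet-style attention regularization: total objective is the
# multilabel classification loss plus a penalty between the attention map and
# a reference map averaged over the ground-truth labels' response maps.
# MSE as the penalty and lambda_reg as its weight are illustrative choices.
import torch
import torch.nn.functional as F

def xecgnet_loss(logits, targets, attention_map, response_maps, lambda_reg=0.1):
    # logits:        (B, C) multilabel predictions
    # targets:       (B, C) binary ground-truth labels (float)
    # attention_map: (B, T) model attention over the ECG time axis
    # response_maps: (B, C, T) per-class response maps
    cls_loss = F.binary_cross_entropy_with_logits(logits, targets)

    # Average the response maps of the ground-truth classes only.
    w = targets.unsqueeze(-1)                                    # (B, C, 1)
    ref = (response_maps * w).sum(1) / w.sum(1).clamp(min=1e-8)  # (B, T)

    reg_loss = F.mse_loss(attention_map, ref)
    return cls_loss + lambda_reg * reg_loss
```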
Affiliation(s)
- Jungsun Yoo
- Division of Cardiology, Asan Medical Center, Seoul, Republic of Korea
- Tae Joon Jun
- Health Innovation Big Data Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul, Republic of Korea
- Young-Hak Kim
- Division of Cardiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
14
Abstract
This article considers recent ethical topics relating to medical AI. After a general discussion of recent medical AI innovations and a more analytic look at related ethical issues, such as data privacy, physician dependency on poorly understood AI helpware, bias in data used to create algorithms post-GDPR, and changes to the patient-physician relationship, the article examines the issue of so-called robot doctors. Whereas the so-called democratization of healthcare due to health wearables and increased access to medical information might suggest a positive shift in the patient-physician relationship, the physician's 'need to care' might be irreplaceable, and robot healthcare workers ('robot carers') might be seen as contributing to dehumanized healthcare practices.
15
Abstract
We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: (1) Are networks explainable, and if so, what does it mean to explain the output of a network? And (2) what does it mean for a network to be interpretable? We argue that accounts of "explanation" tailored specifically to neural networks have ineffectively reinvented the wheel. In response to (1), we show how four familiar accounts of explanation apply to neural networks as they would to any scientific phenomenon. We diagnose the confusion about explaining neural networks within the machine learning literature as an equivocation on "explainability," "understandability" and "interpretability." To remedy this, we distinguish between these notions, and answer (2) by offering a theory and typology of interpretation in machine learning. Interpretation is something one does to an explanation with the aim of producing another, more understandable, explanation. As with explanation, there are various concepts and methods involved in interpretation: Total or Partial, Global or Local, and Approximative or Isomorphic. Our account of "interpretability" is consistent with uses in the machine learning literature, in keeping with the philosophy of explanation and understanding, and pays special attention to medical artificial intelligence systems.
Affiliation(s)
- Adrian Erasmus
- Institute for the Future of Knowledge, University of Johannesburg, Johannesburg, South Africa
- Department of History and Philosophy of Science, University of Cambridge, Free School Ln., Cambridge, CB2 3RH, UK
- Tyler D. P. Brunet
- Department of History and Philosophy of Science, University of Cambridge, Free School Ln., Cambridge, CB2 3RH, UK
- Eyal Fisher
- Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Robinson Way, Cambridge, CB2 0RE, UK