1. Aharoni E, Fernandes S, Brady DJ, Alexander C, Criner M, Queen K, Rando J, Nahmias E, Crespo V. Attributions toward artificial agents in a modified Moral Turing Test. Sci Rep 2024; 14:8458. PMID: 38688951; PMCID: PMC11061136; DOI: 10.1038/s41598-024-58087-7.
Abstract
Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by the proposal of Allen et al. (Exp Theor Artif Intell 352:24-28, 2004), by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI's moral reasoning as superior in quality to humans' along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans' raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.
Affiliation(s)
- Eyal Aharoni
- Department of Psychology, Georgia State University, Atlanta, GA, USA.
- Department of Philosophy, Georgia State University, Atlanta, GA, USA.
- Neuroscience Institute, Georgia State University, Atlanta, GA, USA.
- Daniel J Brady
- Department of Psychology, Georgia State University, Atlanta, GA, USA
- Caelan Alexander
- Department of Psychology, Georgia State University, Atlanta, GA, USA
- Michael Criner
- Department of Psychology, Georgia State University, Atlanta, GA, USA
- Kara Queen
- Department of Psychology, Georgia State University, Atlanta, GA, USA
- Eddy Nahmias
- Department of Philosophy, Georgia State University, Atlanta, GA, USA
- Neuroscience Institute, Georgia State University, Atlanta, GA, USA
- Victor Crespo
- Department of Philosophy, Duke University, Durham, NC, USA
2. Baldassarre A, Padovan M. Regulatory and Ethical Considerations on Artificial Intelligence for Occupational Medicine. Med Lav 2024; 115:e2024013. PMID: 38686573; DOI: 10.23749/mdl.v115i2.15881.
Abstract
Generative artificial intelligence and Large Language Models are reshaping labor dynamics and occupational health practices. As AI continues to evolve, there is a critical need to customize ethical considerations for its specific impacts on occupational health. Recognizing potential ethical challenges and dilemmas, stakeholders and physicians are urged to proactively adjust the practice of occupational medicine in response to shifting ethical paradigms. By advocating for a comprehensive review of the International Commission on Occupational Health (ICOH) Code of Ethics, we can ensure responsible medical AI deployment, safeguarding the well-being of workers amidst the transformative effects of automation in healthcare.
3. Katwaroo AR, Adesh VS, Lowtan A, Umakanthan S. The diagnostic, therapeutic, and ethical impact of artificial intelligence in modern medicine. Postgrad Med J 2024; 100:289-296. PMID: 38159301; DOI: 10.1093/postmj/qgad135.
Abstract
In the evolution of modern medicine, artificial intelligence (AI) has proven integral to revolutionizing clinical diagnosis, drug discovery, and patient care. With the potential to scrutinize colossal amounts of medical data, radiological and histological images, and genomic data in healthcare institutions, AI-powered systems can recognize, determine, and associate patterns and provide impactful insights that would be strenuous and challenging for clinicians to detect during their daily clinical practice. The outcome of AI-mediated search offers more accurate, personalized patient diagnoses, guides research into new drug therapies, and provides a more effective multidisciplinary treatment plan that can be implemented for patients with chronic diseases. Among the many promising applications of AI in modern medicine, medical imaging stands out distinctly as an area with tremendous potential. AI-powered algorithms can now identify cancer cells and other lesions in medical images with greater accuracy and sensitivity. This allows for earlier diagnosis and treatment, which can significantly impact patient outcomes. This review provides a comprehensive insight into diagnostic, therapeutic, and ethical issues with the advent of AI in modern medicine.
Affiliation(s)
- Arun Rabindra Katwaroo
- Department of Medicine, Trinidad Institute of Medical Technology, St Augustine, Trinidad and Tobago
- Amrita Lowtan
- Department of Preclinical Sciences, Faculty of Medical Sciences, The University of the West Indies, St. Augustine, Trinidad and Tobago
- Srikanth Umakanthan
- Department of Paraclinical Sciences, Faculty of Medical Sciences, The University of the West Indies, St. Augustine, Trinidad and Tobago
4. Heidt A. 'Without these tools, I'd be lost': how generative AI aids in accessibility. Nature 2024; 628:462-463. PMID: 38589449; DOI: 10.1038/d41586-024-01003-w.
5. Busch F, Adams LC, Bressem KK. Spotlight on the biomedical ethical integration of AI in medical education - Response to: 'An explorative assessment of ChatGPT as an aid in medical education: Use it with caution'. Med Teach 2024; 46:594-595. PMID: 38104590; DOI: 10.1080/0142159x.2023.2293655.
Affiliation(s)
- Felix Busch
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
- Lisa C Adams
- Department of Radiology, Klinikum rechts der Isar, Technische Universität München (TUM), Munich, Germany
- Keno K Bressem
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Berlin, Germany
6. Schelble BG, Lopez J, Textor C, Zhang R, McNeese NJ, Pak R, Freeman G. Towards Ethical AI: Empirically Investigating Dimensions of AI Ethics, Trust Repair, and Performance in Human-AI Teaming. Hum Factors 2024; 66:1037-1055. PMID: 35938319; DOI: 10.1177/00187208221116952.
Abstract
OBJECTIVE: Determining the efficacy of two trust repair strategies (apology and denial) for trust violations of an ethical nature by an autonomous teammate.
BACKGROUND: While ethics in human-AI interaction is extensively studied, little research has investigated how decisions with ethical implications impact trust and performance within human-AI teams and their subsequent repair.
METHOD: Forty teams of two participants and one autonomous teammate completed three team missions within a synthetic task environment. The autonomous teammate made an ethical or unethical action during each mission, followed by an apology or denial. Measures of individual team trust, autonomous teammate trust, human teammate trust, perceived autonomous teammate ethicality, and team performance were taken.
RESULTS: Teams with unethical autonomous teammates had significantly lower trust in the team and trust in the autonomous teammate. Unethical autonomous teammates were also perceived as substantially more unethical. Neither trust repair strategy effectively restored trust after an ethical violation, and autonomous teammate ethicality was not related to the team score, but unethical autonomous teammates did have shorter times.
CONCLUSION: Ethical violations significantly harm trust in the overall team and autonomous teammate but do not negatively impact team score. However, current trust repair strategies like apologies and denials appear ineffective in restoring trust after this type of violation.
APPLICATION: This research highlights the need to develop trust repair strategies specific to human-AI teams and trust violations of an ethical nature.
Affiliation(s)
- Beau G Schelble
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Jeremy Lopez
- Department of Psychology, Clemson University, Clemson, SC, USA
- Claire Textor
- Department of Psychology, Clemson University, Clemson, SC, USA
- Rui Zhang
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Guo Freeman
- Human-Centered Computing, Clemson University, Clemson, SC, USA
7. Chowdhury R. AI-fuelled election campaigns are here - where are the rules? Nature 2024; 628:237. PMID: 38594400; DOI: 10.1038/d41586-024-00995-9.
8. Ananya. AI image generators often give racist and sexist results: can they be fixed? Nature 2024; 627:722-725. PMID: 38503880; DOI: 10.1038/d41586-024-00674-9.
9. Veritti D, Rubinato L, Sarao V, De Nardin A, Foresti GL, Lanzetta P. Behind the mask: a critical perspective on the ethical, moral, and legal implications of AI in ophthalmology. Graefes Arch Clin Exp Ophthalmol 2024; 262:975-982. PMID: 37747539; PMCID: PMC10907411; DOI: 10.1007/s00417-023-06245-4.
Abstract
PURPOSE: This narrative review aims to provide an overview of the dangers, controversial aspects, and implications of artificial intelligence (AI) use in ophthalmology and other medical-related fields.
METHODS: We conducted a decade-long comprehensive search (January 2013-May 2023) of both academic and grey literature, focusing on the application of AI in ophthalmology and healthcare. This search included key web-based academic databases, non-traditional sources, and targeted searches of specific organizations and institutions. We reviewed and selected documents for relevance to AI, healthcare, ethics, and guidelines, aiming for a critical analysis of ethical, moral, and legal implications of AI in healthcare.
RESULTS: Six main issues were identified, analyzed, and discussed. These include bias and clinical safety, cybersecurity, health data and AI algorithm ownership, the "black-box" problem, medical liability, and the risk of widening inequality in healthcare.
CONCLUSION: Solutions to address these issues include collecting high-quality data of the target population, incorporating stronger security measures, using explainable AI algorithms and ensemble methods, and making AI-based solutions accessible to everyone. With careful oversight and regulation, AI-based systems can be used to supplement physician decision-making and improve patient care and outcomes.
Affiliation(s)
- Daniele Veritti
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy.
- Leopoldo Rubinato
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Valentina Sarao
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy
- Axel De Nardin
- Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
- Gian Luca Foresti
- Department of Mathematics, Informatics and Physics, University of Udine, Udine, Italy
- Paolo Lanzetta
- Department of Medicine - Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare - IEMO, Udine, Italy
10. Gibney E. Chatbot AI makes racist judgements on the basis of dialect. Nature 2024; 627:476-477. PMID: 38480953; DOI: 10.1038/d41586-024-00779-1.
11. Pawar VV, Farooqui S. Ethical consideration for implementing AI in healthcare: A chat GPT perspective. Oral Oncol 2024; 149:106682. PMID: 38185022; DOI: 10.1016/j.oraloncology.2023.106682.
Affiliation(s)
- Vikas V Pawar
- Dr. D. Y. Patil Vidyapeeth, Centre for Online Learning Sant-Tukaram Nagar, Pimpri, Pune 411018, MH, India.
- Safia Farooqui
- Dr. D. Y. Patil Vidyapeeth, Centre for Online Learning Sant-Tukaram Nagar, Pimpri, Pune 411018, MH, India
12. Harfouche AL, Petousi V, Jung W. AI ethics on the road to responsible AI plant science and societal welfare. Trends Plant Sci 2024; 29:104-107. PMID: 38199829; DOI: 10.1016/j.tplants.2023.12.016.
Abstract
The swiftness of artificial intelligence (AI) progress in plant science begets relevant ethical questions with significant scientific and societal implications. Embracing a principled approach to regulation, ethics review and monitoring, and human-centric interpretable informed AI (HIAI), we can begin to navigate our voyage towards ethical and socially responsible AI.
Affiliation(s)
- Antoine L Harfouche
- Department for Innovation in Biological, Agro-food and Forest systems, University of Tuscia, Viterbo 01100, Italy.
- Vasiliki Petousi
- Department of Sociology, University of Crete, Rethymno 74100, Greece
- Wonsup Jung
- School of Liberal Studies, Kyungnam University, Changwon-si 51767, Republic of Korea
13. King RD, Scassa T, Kramer S, Kitano H. Stockholm declaration on AI ethics: why others should sign. Nature 2024; 626:716. PMID: 38378827; DOI: 10.1038/d41586-024-00517-7.
14. Jones N. How journals are fighting back against a wave of questionable images. Nature 2024; 626:697-698. PMID: 38347210; DOI: 10.1038/d41586-024-00372-6.
15. Canali S, Barone-Adesi F. Can AI deliver advice that is judgement-free for science policy? Nature 2023; 624:252. PMID: 38086938; DOI: 10.1038/d41586-023-03949-9.
16.
17. Yoshida K. Mentor-trainee dialogue on proper use of AI tools. Nature 2023; 624:523. PMID: 38114681; DOI: 10.1038/d41586-023-04062-7.
18. Jones N. The world's week on AI safety: powerful computing efforts launched to boost research. Nature 2023; 623:229-230. PMID: 37923957; DOI: 10.1038/d41586-023-03472-x.
19. Hanson B, Stall S, Cutcher-Gershenfeld J, Vrouwenvelder K, Wirz C, Rao YD, Peng G. Garbage in, garbage out: mitigating risks and maximizing benefits of AI in research. Nature 2023; 623:28-31. PMID: 37907636; DOI: 10.1038/d41586-023-03316-8.
20. Reardon S. Mind-reading machines are here: is it time to worry? Nature 2023; 617:236. PMID: 37130904; DOI: 10.1038/d41586-023-01486-z.
21.
22.
23. Couture V, Roy MC, Dez E, Laperle S, Bélisle-Pipon JC. Ethical Implications of Artificial Intelligence in Population Health and the Public’s Role in its Governance: Perspectives from a Citizen and Expert Panel. J Med Internet Res 2023; 25:e44357. PMID: 37104026; PMCID: PMC10176139; DOI: 10.2196/44357.
Abstract
BACKGROUND: Artificial intelligence (AI) systems are widely used in the health care sector. Mainly applied for individualized care, AI is increasingly aimed at population health. This raises important ethical considerations but also calls for responsible governance, considering that this will affect the population. However, the literature points to a lack of citizen participation in the governance of AI in health. Therefore, it is necessary to investigate the governance of the ethical and societal implications of AI in population health.
OBJECTIVE: This study aimed to explore the perspectives and attitudes of citizens and experts regarding the ethics of AI in population health, the engagement of citizens in AI governance, and the potential of a digital app to foster citizen engagement.
METHODS: We recruited a panel of 21 citizens and experts. Using a web-based survey, we explored their perspectives and attitudes on the ethical issues of AI in population health, the relative role of citizens and other actors in AI governance, and the ways in which citizens can be supported to participate in AI governance through a digital app. The responses of the participants were analyzed quantitatively and qualitatively.
RESULTS: According to the participants, AI is perceived to be already present in population health and its benefits are regarded positively, but there is a consensus that AI has substantial societal implications. The participants also showed a high level of agreement toward involving citizens into AI governance. They highlighted the aspects to be considered in the creation of a digital app to foster this involvement. They recognized the importance of creating an app that is both accessible and transparent.
CONCLUSIONS: These results offer avenues for the development of a digital app to raise awareness, to survey, and to support citizens' decision-making regarding the ethical, legal, and social issues of AI in population health.
Affiliation(s)
- Emma Dez
- School of Research, Sciences Po Paris, Paris, France
- Samuel Laperle
- Department of Linguistics, Université du Québec à Montréal, Montréal, QC, Canada
24. Whicher D, Rapp T. The Value of Artificial Intelligence for Healthcare Decision Making-Lessons Learned. Value Health 2022; 25:328-330. PMID: 35227442; DOI: 10.1016/j.jval.2021.12.009.
Affiliation(s)
- Thomas Rapp
- University of Paris, Paris, France; Sciences Po, LIEPP, Paris, France
25. Chauhan C, Gullapalli RR. Ethics of AI in Pathology: Current Paradigms and Emerging Issues. Am J Pathol 2021; 191:1673-1683. PMID: 34252382; PMCID: PMC8485059; DOI: 10.1016/j.ajpath.2021.06.011.
Abstract
Deep learning has rapidly advanced artificial intelligence (AI) and algorithmic decision-making (ADM) paradigms, affecting many traditional fields of medicine, including pathology, which is a heavily data-centric specialty of medicine. The structured nature of pathology data repositories makes it highly attractive to AI researchers to train deep learning models to improve health care delivery. Additionally, there are enormous financial incentives driving adoption of AI and ADM due to the promise of increased efficiency of the health care delivery process. If implemented incorrectly or used unethically, AI may exacerbate existing inequities of health care. There is an urgent need to harness the vast power of AI in an ethically and morally justifiable manner. This review explores the key issues involving AI ethics in pathology. Issues related to ethical design of pathology AI studies and the potential risks associated with implementation of AI and ADM within the pathology workflow are discussed. Three key foundational principles of ethical AI: transparency, accountability, and governance, are described in the context of pathology. The future practice of pathology must be guided by these principles. Pathologists should be aware of the potential of AI to deliver superlative health care and the ethical pitfalls associated with it. Finally, pathologists must have a seat at the table to drive future implementation of ethical AI in the practice of pathology.
Affiliation(s)
- Chhavi Chauhan
- American Society of Investigative Pathology, Rockville, Maryland
- Rama R Gullapalli
- Department of Pathology, University of New Mexico, Albuquerque, New Mexico; Department of Chemical and Biological Engineering, University of New Mexico, Albuquerque, New Mexico.
26. Parums DV. Editorial: Artificial Intelligence (AI) in Clinical Medicine and the 2020 CONSORT-AI Study Guidelines. Med Sci Monit 2021; 27:e933675. PMID: 34176921; PMCID: PMC8252890; DOI: 10.12659/msm.933675.
Abstract
Artificial intelligence (AI) in clinical medicine includes physical robotics and devices and virtual AI and machine learning. Concerns have been raised regarding ethical issues for the use of AI in surgery, including guidance for surgical decisions, patient confidentiality, and the need for support from controlled clinical trials to use these methods so that clinical guidelines can be developed. The most common applications for virtual AI include disease diagnosis, health monitoring and digital patient consultations, clinical training, patient data management, drug development, and personalized medicine. In September 2020, the CONSORT-AI extension was developed with 14 additional items that should be reported for AI studies that include clear descriptions of the AI intervention, skills required, study setting, inputs and outputs of the AI intervention, analysis of errors, and the human and AI interactions. This Editorial aims to present current applications and challenges of AI in clinical medicine and the importance of the new 2020 CONSORT-AI study guidelines.
Affiliation(s)
- Dinah V Parums
- Science Editor, Medical Science Monitor, International Scientific Information, Inc., Melville, NY, USA
27.
Affiliation(s)
- Stephen Cave
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
- Jess Whittlestone
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
- Rune Nyrup
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
- Seán Ó hÉigeartaigh
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
- Rafael A Calvo
- Dyson School of Design Engineering, Imperial College London, UK
28.
Affiliation(s)
- Avni Malik
- College of Arts and Sciences, University of Virginia, Charlottesville, Virginia, United States of America
- Paranjay Patel
- School of Medicine, University of Virginia, Charlottesville, Virginia, United States of America
- Lubaina Ehsan
- Department of Pediatrics, Division of Gastroenterology, Hepatology, and Nutrition, School of Medicine, University of Virginia, Charlottesville, Virginia, United States of America
- Shan Guleria
- School of Medicine, University of Virginia, Charlottesville, Virginia, United States of America
- Thomas Hartka
- Department of Emergency Medicine, School of Medicine, University of Virginia, Charlottesville, Virginia, United States of America
- Sodiq Adewole
- Department of Systems and Information Engineering, School of Data Science, University of Virginia, Charlottesville, Virginia, United States of America
- Sana Syed
- Department of Pediatrics, Division of Gastroenterology, Hepatology, and Nutrition, School of Medicine, University of Virginia, Charlottesville, Virginia, United States of America
29.
Affiliation(s)
- Greta R Bauer
- Daniel J Lizotte
- Greta R. Bauer is with the departments of Epidemiology and Biostatistics and Gender, Sexuality and Women's Studies, Western University, London, ON, Canada. Daniel J. Lizotte is with the departments of Epidemiology and Biostatistics and Computer Science, and the Schulich Interfaculty Program in Public Health, Western University.
30. McCradden MD, Joshi S, Anderson JA, Mazwi M, Goldenberg A, Zlotnik Shaul R. Patient safety and quality improvement: Ethical principles for a regulatory approach to bias in healthcare machine learning. J Am Med Inform Assoc 2020; 27:2024-2027. PMID: 32585698; PMCID: PMC7727331; DOI: 10.1093/jamia/ocaa085.
Abstract
Accumulating evidence demonstrates the impact of bias that reflects social inequality on the performance of machine learning (ML) models in health care. Given their intended placement within healthcare decision making more broadly, ML tools require attention to adequately quantify the impact of bias and reduce its potential to exacerbate inequalities. We suggest that taking a patient safety and quality improvement approach to bias can support the quantification of bias-related effects on ML. Drawing from the ethical principles underpinning these approaches, we argue that patient safety and quality improvement lenses support the quantification of relevant performance metrics, in order to minimize harm while promoting accountability, justice, and transparency. We identify specific methods for operationalizing these principles with the goal of attending to bias to support better decision making in light of controllable and uncontrollable factors.
Affiliation(s)
- Melissa D McCradden
- Bioethics Department, The Hospital for Sick Children, Toronto, Ontario, Canada
- James A Anderson
- Bioethics Department, The Hospital for Sick Children, Toronto, Ontario, Canada
- Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Joint Centre for Bioethics, University of Toronto, Toronto, Ontario, Canada
- Mjaye Mazwi
- Department of Critical Care Medicine, The Hospital for Sick Children, Toronto, Ontario, Canada
- Anna Goldenberg
- Vector Institute, Toronto, Ontario, Canada
- Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- CIFAR, Toronto, Ontario, Canada
| | - Randi Zlotnik Shaul
- Bioethics Department, The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Paediatrics, University of Toronto, Toronto, ON, Canada
- Child Health Evaluative Sciences, The Hospital for Sick Children, Peter Gilgan Centre for Research, Toronto, Ontario, Canada
| |
Collapse
|
31
|
Abstract
Diagnostic processes typically rely on traditional, laborious methods that are prone to human error, resulting in frequent misdiagnosis of diseases. Computational approaches are being increasingly used for more precise diagnosis of clinical pathology, diagnosis of genetic and microbial diseases, and analysis of clinical chemistry data. These approaches are progressively used to improve the reliability of testing, resulting in reduced diagnostic errors. Artificial intelligence (AI)-based computational approaches mostly rely on training sets obtained from patient data stored in clinical databases. However, the use of AI is associated with several ethical issues, including patient privacy and data ownership. The capacity of AI-based mathematical models to interpret complex clinical data frequently leads to data bias and the reporting of erroneous results based on patient data. To improve the reliability of computational approaches in clinical diagnostics, strategies to reduce data bias and to analyze real-life patient data need to be further refined.
Affiliation(s)
- Mohammed A Alaidarous
- Department of Medical Laboratory Sciences, College of Applied Medical Sciences, Majmaah University, Majmaah, Kingdom of Saudi Arabia
|
32
|
Abstract
Artificial intelligence is rapidly transforming the landscape of medicine. Specifically, algorithms powered by deep learning are already gaining increasingly wide adoption in fields such as radiology, pathology, and preventive medicine. Forensic psychiatry is a complex and intricate specialty that seeks to balance the disparate approaches of psychiatric science, which strives to explain human behavior deterministically, and the law, which emphasizes free choice and moral responsibility. This balancing, a central task of the forensic psychiatrist, is necessarily fraught with ambiguity. Such a complex task may intuitively seem impenetrable to artificial intelligence. This article first aims to challenge this assumption and then seeks to address the unique concerns posed by the adoption of artificial intelligence in violence risk assessment and prediction. The relevant ethics concerns are analyzed within the framework of traditional bioethics principles. Finally, recommendations for practitioners, ethicists, and others are offered as a starting point for further discussion.
Affiliation(s)
- Richard G Cockerill
- Dr. Cockerill is a Fellow in Forensic Psychiatry, UCLA-Semel Institute for Neuroscience and Behavior, Los Angeles, California.
|
33
|
Abstract
Artificial intelligence (AI) is among the fastest developing areas of advanced technology in medicine. The most important quality of AI, which makes it different from other advanced technology products, is its ability to improve its original program and decision-making algorithms via deep learning. This difference is the reason that AI technology stands out from the ethical issues of other advanced technology artifacts. The ethical issues of AI technology span a wide spectrum, from the privacy and confidentiality of personal data to the ethical status and value of AI entities, depending on their capability for deep learning and the scope of the domains in which they operate. Developing ethical norms and guidelines for the planning, development, production, and usage of AI technology has become an important issue in overcoming these problems. In this respect, three outstanding documents have been produced: (1) the Montréal Declaration for Responsible Development of Artificial Intelligence; (2) the Ethics Guidelines for Trustworthy AI; and (3) the Asilomar Artificial Intelligence Principles. In this study, these three documents will be analyzed with respect to the ethical principles and values they involve, their perspectives for approaching ethical issues, and their prospects for ethical reasoning when one or more of these values and principles are in conflict. Then, the sufficiency of these guidelines for addressing current or prospective ethical issues emerging from the existence of AI technology in medicine will be evaluated. The discussion will be pursued in terms of the ambiguity of interlocutors and the efficiency of working out ethical dilemmas occurring in practical life.
Affiliation(s)
- Banu Buruk
- Department of History of Medicine and Ethics, TOBB ETU University Medical School, Ankara, Turkey
- Perihan Elif Ekmekci
- Department of History of Medicine and Ethics, TOBB ETU University Medical School, Ankara, Turkey
- Berna Arda
- Department of History of Medicine and Ethics, Ankara University Medical School, Ankara, Turkey
|
34
|
Affiliation(s)
- John D McGreevey
- University of Pennsylvania Health System, Perelman School of Medicine, Section of Hospital Medicine, Division of General Internal Medicine, Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia
- C William Hanson
- Perelman School of Medicine, University of Pennsylvania, University of Pennsylvania Health System, Philadelphia
- School of Engineering and Applied Science, University of Pennsylvania, Philadelphia
- Ross Koppel
- Perelman School of Medicine, University of Pennsylvania, University of Pennsylvania Health System, Philadelphia
- Department of Medical Informatics, Jacobs School of Medicine, University at Buffalo, State University of New York, Buffalo
|
35
|
Abstract
Artificial intelligence surveillance can be used to diagnose individual cases, track the spread of Covid-19, and help provide care. The use of AI for surveillance purposes (such as detecting new Covid-19 cases and gathering data from healthy and ill individuals) in a pandemic raises multiple concerns, ranging from privacy to discrimination to access to care. Fortunately, several frameworks exist that can help guide stakeholders, especially physicians but also AI developers and public health officials, as they navigate these treacherous shoals. While these frameworks were not explicitly designed for AI surveillance during a pandemic, they can be adapted to help address concerns regarding privacy, human rights, due process and equality. At a time when the rapid implementation of all available tools is critical to ending a pandemic, physicians, public health officials, and technology companies should understand the criteria for the ethical implementation of AI surveillance.
|
36
|
Campbell CG, Ting DSW, Keane PA, Foster PJ. The potential application of artificial intelligence for diagnosis and management of glaucoma in adults. Br Med Bull 2020; 134:21-33. [PMID: 32518944] [DOI: 10.1093/bmb/ldaa012]
Abstract
BACKGROUND: Glaucoma is the most frequent cause of irreversible blindness worldwide. There is no cure, but early detection and treatment can slow progression and prevent loss of vision. It has been suggested that artificial intelligence (AI) has potential application for the detection and management of glaucoma.
SOURCES OF DATA: This literature review is based on articles published in peer-reviewed journals.
AREAS OF AGREEMENT: There have been significant advances in both AI and the imaging techniques able to identify the early signs of glaucomatous damage. Machine and deep learning algorithms show capabilities equivalent to those of human experts, if not superior.
AREAS OF CONTROVERSY: There are concerns that increased reliance on AI may lead to deskilling of clinicians.
GROWING POINTS: AI has the potential to be used in virtual review clinics, telemedicine and as a training tool for junior doctors. Unsupervised AI techniques offer the potential of uncovering currently unrecognized patterns of disease. If this promise is fulfilled, AI may then be of use in challenging cases or where a second opinion is desirable.
AREAS TIMELY FOR DEVELOPING RESEARCH: There is a need to determine the external validity of deep learning algorithms and to better understand how the 'black box' paradigm reaches its results.
Affiliation(s)
- Cara G Campbell
- UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Daniel S W Ting
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- Pearse A Keane
- UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- National Institute for Health Research Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, 2/12 Wolfson Building and UCL Institute of Ophthalmology, 11-43 Bath Street, London EC1V 9EL, UK
- Paul J Foster
- UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- National Institute for Health Research Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, 2/12 Wolfson Building and UCL Institute of Ophthalmology, 11-43 Bath Street, London EC1V 9EL, UK
|
37
|
Abstract
Ethics can be interesting and fascinatingly compelling because of the subtle nature of its solutions in ambiguous situations. Articles on ethical issues and college courses on ethics rarely present answers to the questions that are posed. That is because ethical responses are highly situational and depend heavily on commonly accepted, but not codified, beliefs and attitudes.
|
38
|
Abstract
Artificial intelligence has dramatically changed the world as we know it, but is yet to fully embrace 'hot' cognition, i.e., the way an intelligent being's thinking is affected by their emotional state. Artificial intelligence encompassing hot cognition will not only usher in enhanced machine-human interactions, but will also promote a much needed ethical approach. Theory of Mind, the ability of the human mind to attribute mental states to others, is a key component of hot cognition. To endow machines with (limited) Theory of Mind capabilities, computer scientists will need to work closely with psychiatrists, psychologists and neuroscientists. They will need to develop new models, but also to formally define what problems need to be solved and how the results should be assessed.
Affiliation(s)
- F. Cuzzolin
- School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford, UK
- A. Morelli
- Università degli Studi Suor Orsola Benincasa, Naples, Italy
- B. Cîrstea
- School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford, UK
- B. J. Sahakian
- Department of Psychiatry, University of Cambridge, Cambridge, UK
|
39
|
Ho CWL, Ali J, Caals K. Ensuring trustworthy use of artificial intelligence and big data analytics in health insurance. Bull World Health Organ 2020; 98:263-269. [PMID: 32284650] [PMCID: PMC7133481] [DOI: 10.2471/blt.19.234732]
Abstract
Technological advances in big data (large amounts of highly varied data from many different sources that may be processed rapidly), data sciences and artificial intelligence can improve health-system functions and promote personalized care and public good. However, these technologies will not replace the fundamental components of the health system, such as ethical leadership and governance, or avoid the need for a robust ethical and regulatory environment. In this paper, we discuss what a robust ethical and regulatory environment might look like for big data analytics in health insurance, and describe examples of safeguards and participatory mechanisms that should be established. First, a clear and effective data governance framework is critical. Legal standards need to be enacted and insurers should be encouraged and given incentives to adopt a human-centred approach in the design and use of big data analytics and artificial intelligence. Second, a clear and accountable process is necessary to explain what information can be used and how it can be used. Third, people whose data may be used should be empowered through their active involvement in determining how their personal data may be managed and governed. Fourth, insurers and governance bodies, including regulators and policy-makers, need to work together to ensure that the big data analytics based on artificial intelligence that are developed are transparent and accurate. Unless an enabling ethical environment is in place, the use of such analytics will likely contribute to the proliferation of unconnected data systems, worsen existing inequalities, and erode trustworthiness and trust.
Affiliation(s)
- Calvin W L Ho
- Faculty of Law, Cheng Yu Tung Tower, Centennial Campus, The University of Hong Kong, Pokfulam, Hong Kong Special Administrative Region, China
- Joseph Ali
- Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, United States of America
- Karel Caals
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
|
40
|
Vollmer S, Mateen BA, Bohner G, Király FJ, Ghani R, Jonsson P, Cumbers S, Jonas A, McAllister KSL, Myles P, Granger D, Birse M, Branson R, Moons KGM, Collins GS, Ioannidis JPA, Holmes C, Hemingway H. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ 2020; 368:l6927. [PMID: 32198138] [DOI: 10.1136/bmj.l6927]
Affiliation(s)
- Sebastian Vollmer
- Alan Turing Institute, Kings Cross, London, UK
- Departments of Mathematics and Statistics, University of Warwick, Coventry, UK
- Bilal A Mateen
- Alan Turing Institute, Kings Cross, London, UK
- Warwick Medical School, University of Warwick, Coventry, UK
- Kings College Hospital, Denmark Hill, London, UK
- Gergo Bohner
- Alan Turing Institute, Kings Cross, London, UK
- Departments of Mathematics and Statistics, University of Warwick, Coventry, UK
- Franz J Király
- Alan Turing Institute, Kings Cross, London, UK
- Department of Statistical Science, University College London, London, UK
- Pall Jonsson
- Science Policy and Research, National Institute for Health and Care Excellence, Manchester, UK
- Sarah Cumbers
- Health and Social Care Directorate, National Institute for Health and Care Excellence, London, UK
- Adrian Jonas
- Data and Analytics Group, National Institute for Health and Care Excellence, London, UK
- Puja Myles
- Clinical Practice Research Datalink, Medicines and Healthcare products Regulatory Agency, London, UK
- David Granger
- Medicines and Healthcare products Regulatory Agency, London, UK
- Mark Birse
- Medicines and Healthcare products Regulatory Agency, London, UK
- Richard Branson
- Medicines and Healthcare products Regulatory Agency, London, UK
- Karel G M Moons
- Julius Centre for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, Netherlands
- Gary S Collins
- UK EQUATOR Centre, Centre for Statistics in Medicine, NDORMS, University of Oxford, Oxford, UK
- John P A Ioannidis
- Meta-Research Innovation Centre at Stanford, Stanford University, Stanford, CA, USA
- Chris Holmes
- Alan Turing Institute, Kings Cross, London, UK
- Department of Statistics, University of Oxford, Oxford OX1 3LB, UK
- Harry Hemingway
- Health Data Research UK London, University College London, London, UK
- Institute of Health Informatics, University College London, London, UK
- National Institute for Health Research, University College London Hospitals Biomedical Research Centre, University College London, London, UK
|
41
|
Affiliation(s)
- Michael E Matheny
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee
- Geriatric Research Education & Clinical Care Service, Tennessee Valley Healthcare System VA, Nashville
|
42
|
Abstract
This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is "computable" depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. The first type consists of so-called rookie mistakes, which could be addressed by providing these people with the necessary ethical knowledge. The second, more difficult methodological issue concerns areas of peer disagreement in ethics, where no easy solutions are currently available. This paper examines several existing approaches to highlight the ethical pitfalls and challenges involved. Familiarity with these and similar problems can help programmers to avoid pitfalls and build better moral machines. The paper concludes that ethical decisions regarding moral robots should be based on avoiding what is immoral (i.e. prohibiting certain immoral actions) in combination with a pluralistic ethical method of solving moral problems, rather than relying on a particular ethical approach, so as to avoid a normative bias.
Affiliation(s)
- John-Stewart Gordon
- Department of Philosophy and Social Critique, Faculty of Political Science and Diplomacy, Vytautas Magnus University, V. Putvinskio g. 23 (R 403), 44243, Kaunas, Lithuania
- Research Cluster for Applied Ethics, Faculty of Law, Vytautas Magnus University, V. Putvinskio g. 23 (R 403), 44243, Kaunas, Lithuania
|
43
|
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, Oxford OX1 3JS, UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, Oxford OX1 3JS, UK; Alan Turing Institute, London, UK
|
44
|
Portacolone E, Halpern J, Luxenberg J, Harrison KL, Covinsky KE. Ethical Issues Raised by the Introduction of Artificial Companions to Older Adults with Cognitive Impairment: A Call for Interdisciplinary Collaborations. J Alzheimers Dis 2020; 76:445-455. [PMID: 32250295] [PMCID: PMC7437496] [DOI: 10.3233/jad-190952]
Abstract
Due to the high costs of providing long-term care to older adults with cognitive impairment, artificial companions are increasingly considered a cost-efficient way to provide support. Artificial companions can comfort, entertain, and inform, and even induce a sense of being in a close relationship. Sensors and algorithms are increasingly leading to applications that exude a life-like feel. We focus on a case study of an artificial companion for people with cognitive impairment. This companion is an avatar on an electronic tablet that is displayed as a dog or a cat. Whereas artificial intelligence guides most artificial companions, this application also relies on technicians "behind" the on-screen avatar who, via surveillance, interact with users. This case is notable because it particularly illustrates the tension between the endless opportunities offered by technology and the ethical issues stemming from limited regulation. When the case is reviewed through the lens of biomedical ethics, the introduction of this technology to users with cognitive impairment raises concerns about deception, monitoring and tracking, informed consent, and social isolation. We provide a detailed description of the case, review the main ethical issues and present two theoretical frameworks, the "human-driven technology" platform and the emancipatory gerontology framework, to inform the design of future applications.
Affiliation(s)
- Elena Portacolone
- Institute for Health & Aging, University of California San Francisco, San Francisco, CA, USA
- Jodi Halpern
- School of Public Health, University of California Berkeley, Berkeley, CA, USA
- Krista L. Harrison
- Division of Geriatric Medicine, University of California San Francisco, San Francisco, CA, USA
- Philip R. Lee Institute for Health Policy Studies, University of California San Francisco, San Francisco, CA, USA
- Kenneth E. Covinsky
- Division of Geriatric Medicine, University of California San Francisco, San Francisco, CA, USA
|
45
|
McCradden MD, Baba A, Saha A, Ahmad S, Boparai K, Fadaiefard P, Cusimano MD. Ethical concerns around use of artificial intelligence in health care research from the perspective of patients with meningioma, caregivers and health care providers: a qualitative study. CMAJ Open 2020; 8:E90-E95. [PMID: 32071143] [PMCID: PMC7028163] [DOI: 10.9778/cmajo.20190151]
Abstract
BACKGROUND: As artificial intelligence (AI) approaches in research increase and AI becomes more integrated into medicine, there is a need to understand perspectives from members of the Canadian public and medical community. The aim of this project was to investigate current perspectives on ethical issues surrounding AI in health care.
METHODS: In this qualitative study, adult patients with meningioma and their caregivers were recruited consecutively (August 2018-February 2019) from a neurosurgical clinic in Toronto. Health care providers caring for these patients were recruited through snowball sampling. Based on a nonsystematic literature search, we constructed 3 vignettes that sought participants' views on hypothetical issues surrounding potential AI applications in health care. The vignettes were presented to participants in interviews, which lasted 15-45 minutes. Responses were transcribed and coded for concepts, frequency of response types and larger concepts emerging from the interview.
RESULTS: We interviewed 30 participants: 18 patients, 7 caregivers and 5 health care providers. For each question, a variable number of responses were recorded. The majority of participants endorsed nonconsented use of health data but advocated for disclosure and transparency. Few patients and caregivers felt that allocation of health resources should be done via computerized output, and a majority stated that it was inappropriate to delegate such decisions to a computer. Almost all participants felt that selling health data should be prohibited, and a minority stated that less privacy is acceptable for the goal of improving health. Certain caveats were identified, including the desire for deidentification of data and use within trusted institutions.
INTERPRETATION: In this preliminary study, patients and caregivers reported a mixture of hopefulness and concern around the use of AI in health care research, whereas providers were generally more skeptical. These findings provide a point of departure for institutions adopting health AI solutions to consider the ethical implications of this work by understanding stakeholders' perspectives.
Affiliation(s)
- Melissa D McCradden
- Division of Neurosurgery (McCradden, Baba, Saha, Boparai, Fadaiefard, Cusimano), St. Michael's Hospital, Unity Health Toronto; Dalla Lana School of Public Health (Cusimano), University of Toronto, Toronto, Ont.
- Ami Baba
- Division of Neurosurgery (McCradden, Baba, Saha, Boparai, Fadaiefard, Cusimano), St. Michael's Hospital, Unity Health Toronto; Dalla Lana School of Public Health (Cusimano), University of Toronto, Toronto, Ont.
- Ashirbani Saha
- Division of Neurosurgery (McCradden, Baba, Saha, Boparai, Fadaiefard, Cusimano), St. Michael's Hospital, Unity Health Toronto; Dalla Lana School of Public Health (Cusimano), University of Toronto, Toronto, Ont.
- Sidra Ahmad
- Division of Neurosurgery (McCradden, Baba, Saha, Boparai, Fadaiefard, Cusimano), St. Michael's Hospital, Unity Health Toronto; Dalla Lana School of Public Health (Cusimano), University of Toronto, Toronto, Ont.
- Kanwar Boparai
- Division of Neurosurgery (McCradden, Baba, Saha, Boparai, Fadaiefard, Cusimano), St. Michael's Hospital, Unity Health Toronto; Dalla Lana School of Public Health (Cusimano), University of Toronto, Toronto, Ont.
- Pantea Fadaiefard
- Division of Neurosurgery (McCradden, Baba, Saha, Boparai, Fadaiefard, Cusimano), St. Michael's Hospital, Unity Health Toronto; Dalla Lana School of Public Health (Cusimano), University of Toronto, Toronto, Ont.
- Michael D Cusimano
- Division of Neurosurgery (McCradden, Baba, Saha, Boparai, Fadaiefard, Cusimano), St. Michael's Hospital, Unity Health Toronto; Dalla Lana School of Public Health (Cusimano), University of Toronto, Toronto, Ont.
|
46
|
Abstract
Although decision-making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access "the knowledge within the machine." Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision-making to black box systems as contravening the profound moral responsibilities of clinicians. I argue, however, that opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious-as it often is in medicine-the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy.
|
47
|
Johnson SLJ. AI, Machine Learning, and Ethics in Health Care. J Leg Med 2019; 39:427-441. [PMID: 31940250] [DOI: 10.1080/01947648.2019.1690604]
|
48
|
Gruson D. ["The ethical risks associated with artificial intelligence must be identified and regulated"]. Soins 2019; 64:48-50. [PMID: 31542122] [DOI: 10.1016/j.soin.2019.06.011]
Abstract
Artificial intelligence and its applications in healthcare inevitably raise ethical questions. The 'human guarantee' is at the heart of the discussions. Interview with Cynthia Fleury-Perkins, member of the French national advisory ethics committee and holder of the Humanities and Health Chair of the Conservatoire national des arts et métiers.
Affiliation(s)
- David Gruson
- Chaire santé de Sciences-Po, 13, rue de l'Université, 75007 Paris, France
|
49
|
Abstract
The digital revolution is disrupting the ways in which health research is conducted, and subsequently, changing healthcare. Direct-to-consumer wellness products and mobile apps, pervasive sensor technologies and access to social network data offer exciting opportunities for researchers to passively observe and/or track patients 'in the wild' and 24/7. The volume of granular personal health data gathered using these technologies is unprecedented, and is increasingly leveraged to inform personalized health promotion and disease treatment interventions. The use of artificial intelligence in the health sector is also increasing. Although rich with potential, the digital health ecosystem presents new ethical challenges for those making decisions about the selection, testing, implementation and evaluation of technologies for use in healthcare. As the 'Wild West' of digital health research unfolds, it is important to recognize who is involved, and identify how each party can and should take responsibility to advance the ethical practices of this work. While not a comprehensive review, we describe the landscape, identify gaps to be addressed, and offer recommendations as to how stakeholders can and should take responsibility to advance socially responsible digital health research.
Affiliation(s)
- Camille Nebeker
- Department of Family Medicine and Public Health, School of Medicine, University of California San Diego, La Jolla, CA, 92093, USA
- Research Center for Optimal Digital Ethics in Health, Qualcomm Institute and School of Medicine, University of California San Diego, La Jolla, CA, 92093, USA
- John Torous
- Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Ave, Boston, MA, 02215, USA
|
50
|
Car J, Sheikh A, Wicks P, Williams MS. Beyond the hype of big data and artificial intelligence: building foundations for knowledge and wisdom. BMC Med 2019; 17:143. [PMID: 31311603] [PMCID: PMC6636050] [DOI: 10.1186/s12916-019-1382-x]
Abstract
Big data, coupled with the use of advanced analytical approaches, such as artificial intelligence (AI), have the potential to improve medical outcomes and population health. Data that are routinely generated from, for example, electronic medical records and smart devices have become progressively easier and cheaper to collect, process, and analyze. In recent decades, this has prompted a substantial increase in biomedical research efforts outside traditional clinical trial settings. Despite the apparent enthusiasm of researchers, funders, and the media, evidence is scarce for successful implementation of products, algorithms, and services arising that make a real difference to clinical care. This article collection provides concrete examples of how "big data" can be used to advance healthcare and discusses some of the limitations and challenges encountered with this type of research. It primarily focuses on real-world data, such as electronic medical records and genomic medicine, considers new developments in AI and digital health, and discusses ethical considerations and issues related to data sharing. Overall, we remain positive that big data studies and associated new technologies will continue to guide novel, exciting research that will ultimately improve healthcare and medicine-but we are also realistic that concerns remain about privacy, equity, security, and benefit to all.
Affiliation(s)
- Josip Car
- Centre for Population Health Sciences (CePHaS), Lee Kong Chian School of Medicine, Nanyang Technological University Singapore, Clinical Sciences Building, 11 Mandalay Road, Singapore, 308232, Singapore
- Aziz Sheikh
- The Usher Institute, The University of Edinburgh, Edinburgh, EH8 9DX, Scotland, UK
- Paul Wicks
- PatientsLikeMe, 160 Second Street, Cambridge, MA, 02142, USA
- Marc S Williams
- Genomic Medicine Institute, Geisinger, 100 North Academy Avenue, Danville, PA, 17822, USA
|