1
Ho A, Bavli I, Mahal R, McKeown MJ. Multi-Level Ethical Considerations of Artificial Intelligence Health Monitoring for People Living with Parkinson's Disease. AJOB Empir Bioeth 2024; 15:178-191. PMID: 37889210. DOI: 10.1080/23294515.2023.2274582.
Abstract
Artificial intelligence (AI) has garnered tremendous attention in health care, and many hope that AI can enhance our health system's ability to care for people with chronic and degenerative conditions, including Parkinson's Disease (PD). This paper reports the themes and lessons derived from a qualitative study with people living with PD, family caregivers, and health care providers regarding the ethical dimensions of using AI to monitor, assess, and predict PD symptoms and progression. Thematic analysis identified ethical concerns at four intersecting levels: personal, interpersonal, professional/institutional, and societal. Reflecting on the potential benefits of predictive algorithms that can continuously collect and process longitudinal data, participants expressed a desire for more timely, ongoing, and accurate information that could enhance management of day-to-day fluctuations and facilitate clinical and personal care as their disease progresses. Nonetheless, they voiced concerns about intersecting ethical questions around evolving illness identities, familial and professional care relationships, privacy, and data ownership/governance. This multi-level analysis provides a helpful way to understand the ethics of using AI to monitor and manage PD and other chronic/degenerative conditions.
Affiliation(s)
- Anita Ho
- Centre for Applied Ethics, School of Population and Public Health, University of British Columbia, Vancouver, Canada
- Itai Bavli
- Centre for Applied Ethics, School of Population and Public Health, University of British Columbia, Vancouver, Canada
- Ravneet Mahal
- Pacific Parkinson's Research Centre, University of British Columbia, Vancouver, Canada
- Martin J McKeown
- Pacific Parkinson's Research Centre, University of British Columbia, Vancouver, Canada
2
Khan SD, Hoodbhoy Z, Raja MHR, Kim JY, Hogg HDJ, Manji AAA, Gulamali F, Hasan A, Shaikh A, Tajuddin S, Khan NS, Patel MR, Balu S, Samad Z, Sendak MP. Frameworks for procurement, integration, monitoring, and evaluation of artificial intelligence tools in clinical settings: A systematic review. PLOS Digit Health 2024; 3:e0000514. PMID: 38809946. PMCID: PMC11135672. DOI: 10.1371/journal.pdig.0000514.
Abstract
Research on the applications of artificial intelligence (AI) tools in medicine has increased exponentially over the last few years, but implementation in clinical practice has not seen a commensurate increase, with a lack of consensus on how to implement and maintain such tools. This systematic review aims to summarize frameworks focusing on procuring, implementing, monitoring, and evaluating AI tools in clinical practice. A comprehensive literature search following PRISMA guidelines was performed on the MEDLINE, Wiley Cochrane, Scopus, and EBSCO databases to identify and include articles recommending practices, frameworks, or guidelines for AI procurement, integration, monitoring, and evaluation. From the included articles, data regarding study aim, use of a framework, rationale of the framework, and details regarding AI implementation involving procurement, integration, monitoring, and evaluation were extracted. The extracted details were then mapped onto the domains of the Donabedian Plan, Do, Study, Act cycle. The search yielded 17,537 unique articles, of which 47 were evaluated for inclusion based on their full texts and 25 were included in the review. Common themes included transparency, feasibility of operation within existing workflows, integration into existing workflows, validation of the tool using predefined performance indicators, and improving the algorithm and/or adjusting the tool to improve performance. Among the four domains (Plan, Do, Study, Act), the most common was Plan (84%, n = 21), followed by Study (60%, n = 15), Do (52%, n = 13), and Act (24%, n = 6). Among 172 authors, only 1 (0.6%) was from a low-income country (LIC) and 2 (1.2%) were from lower-middle-income countries (LMICs). Healthcare professionals cite the implementation of AI tools within clinical settings as challenging owing to low levels of evidence focusing on integration in the Do and Act domains. The current healthcare AI landscape calls for increased data sharing and knowledge translation to facilitate common goals and reap maximum clinical benefit.
Affiliation(s)
- Sarim Dawar Khan
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Department of Paediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Jee Young Kim
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Henry David Jeffry Hogg
- Population Health Science Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Afshan Anwar Ali Manji
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Freya Gulamali
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Alifia Hasan
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Asim Shaikh
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Salma Tajuddin
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Nida Saddaf Khan
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Manesh R. Patel
- Duke Clinical Research Institute, Duke University School of Medicine, Durham, North Carolina, United States
- Division of Cardiology, Duke University School of Medicine, Durham, North Carolina, United States
- Suresh Balu
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
- Zainab Samad
- CITRIC Health Data Science Centre, Department of Medicine, Aga Khan University, Karachi, Pakistan
- Department of Medicine, Aga Khan University, Karachi, Pakistan
- Mark P. Sendak
- Duke Institute for Health Innovation, Duke University School of Medicine, Durham, North Carolina, United States
3
Maris MT, Koçar A, Willems DL, Pols J, Tan HL, Lindinger GL, Bak MAR. Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives. BMC Med Ethics 2024; 25:42. PMID: 38575931. PMCID: PMC10996273. DOI: 10.1186/s12910-024-01042-y.
Abstract
BACKGROUND The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, while the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project, we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD). AIM To explore the perspectives of patients on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD). METHODS Semi-structured, future scenario-based interviews were conducted among patients in Germany (n = 9) and the Netherlands (n = 15) who had an ICD and/or a heart condition with increased risk of SCD. We used the principles of the European Commission's Ethics Guidelines for Trustworthy AI to structure the interviews. RESULTS Six themes arose from the interviews: the ability of AI to rectify human doctors' limitations; the objectivity of data; whether AI can serve as a second opinion; AI explainability and patient trust; the importance of the 'human touch'; and the personalization of care. Overall, our results reveal a strong desire among patients for more personalized and patient-centered care in the context of ICD implantation. Participants in our study express significant concerns about the further loss of the 'human touch' in healthcare when AI is introduced in clinical settings. They believe that this aspect of care is currently inadequately recognized in clinical practice. Participants attribute to doctors the responsibility of evaluating AI recommendations for clinical relevance and aligning them with patients' individual contexts and values, in consultation with the patient. CONCLUSION The 'human touch' that patients exclusively ascribe to human medical practitioners extends beyond sympathy and kindness and has clinical relevance in medical decision-making. Because this cannot be replaced by AI, we suggest that normative research into the 'right to a human doctor' is needed. Furthermore, policies on patient-centered AI integration in clinical practice should encompass the ethics of everyday practice rather than only principle-based ethics. We suggest that an empirical ethics approach grounded in ethnographic research is exceptionally well-suited to pave the way forward.
Affiliation(s)
- Menno T Maris
- Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Ayca Koçar
- Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Dick L Willems
- Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Jeannette Pols
- Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Department of Anthropology, University of Amsterdam, Amsterdam, The Netherlands
- Hanno L Tan
- Department of Clinical and Experimental Cardiology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Netherlands Heart Institute, Utrecht, The Netherlands
- Georg L Lindinger
- Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Marieke A R Bak
- Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
4
Moy S, Irannejad M, Manning SJ, Farahani M, Ahmed Y, Gao E, Prabhune R, Lorenz S, Mirza R, Klinger C. Patient Perspectives on the Use of Artificial Intelligence in Health Care: A Scoping Review. J Patient Cent Res Rev 2024; 11:51-62. PMID: 38596349. PMCID: PMC11000703. DOI: 10.17294/2330-0698.2029.
Abstract
Purpose Artificial intelligence (AI) technology is being rapidly adopted into many different branches of medicine. Although research has started to highlight the impact of AI on health care, research focused on patient perspectives of AI remains scarce. This scoping review aimed to explore the literature on adult patients' perspectives on the use of an array of AI technologies in the health care setting, to inform design and deployment. Methods This scoping review followed Arksey and O'Malley's framework and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (PRISMA-ScR). To evaluate patient perspectives, we conducted a comprehensive literature search using eight interdisciplinary electronic databases, including grey literature. Articles published from 2015 to 2022 that focused on patient views regarding AI technology in health care were included. Thematic analysis was performed on the extracted articles. Results Of the 10,571 imported studies, 37 articles were included and extracted. From the 33 peer-reviewed and 4 grey literature articles, the following themes on AI emerged: (i) Patient attitudes, (ii) Influences on patient attitudes, (iii) Considerations for design, and (iv) Considerations for use. Conclusions Patients are key stakeholders essential to the uptake of AI in health care. The findings indicate that patients' needs and expectations are not fully considered in the application of AI in health care. Therefore, there is a need for patient voices in the development of AI in health care.
Affiliation(s)
- Sally Moy
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Mona Irannejad
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Mehrdad Farahani
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Yomna Ahmed
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ellis Gao
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Radhika Prabhune
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Suzan Lorenz
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Raza Mirza
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Christopher Klinger
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- National Initiative for the Care of the Elderly, Toronto, Canada
5
Grauman Å, Ancillotti M, Veldwijk J, Mascalzoni D. Precision cancer medicine and the doctor-patient relationship: a systematic review and narrative synthesis. BMC Med Inform Decis Mak 2023; 23:286. PMID: 38098034. PMCID: PMC10722840. DOI: 10.1186/s12911-023-02395-x.
Abstract
BACKGROUND The implementation of precision medicine is likely to have a huge impact on clinical cancer care, while the doctor-patient relationship is a crucial aspect of cancer care that needs to be preserved. This systematic review aimed to map out perceptions and concerns regarding how the implementation of precision medicine will impact the doctor-patient relationship in cancer care, so that threats to the doctor-patient relationship can be addressed. METHODS Electronic databases (PubMed, Scopus, Web of Science, Social Science Premium Collection) were searched for articles published from January 2010 to December 2021, including qualitative, quantitative, and theoretical methods. Two reviewers completed title and abstract screening, full-text screening, and data extraction. Findings were summarized and explained using narrative synthesis. RESULTS Four themes were generated from the included articles (n = 35). Providing information addresses issues of information transmission and needs, and of complex concepts such as genetics and uncertainty. Making decisions in a trustful relationship addresses opacity issues, the role of trust, and physicians' attitudes toward the role of precision medicine tools in decision-making. Managing negative reactions of non-eligible patients addresses patients' unmet expectations of precision medicine. Conflicting roles in the blurry line between clinic and research addresses issues stemming from physicians' double role as doctors and researchers. CONCLUSIONS Many findings have previously been addressed in doctor-patient communication and clinical genetics. However, precision medicine adds complexity to these fields and further emphasizes the importance of clear communication on specific themes, such as the distinction between genomic and gene expression and patients' expectations about access, eligibility, effectiveness, and side effects of targeted therapies.
Affiliation(s)
- Å Grauman
- Centre for Research Ethics and Bioethics, Uppsala University, Box 564, Uppsala, SE-751 22, Sweden
- M Ancillotti
- Centre for Research Ethics and Bioethics, Uppsala University, Box 564, Uppsala, SE-751 22, Sweden
- J Veldwijk
- Erasmus School of Health Policy & Management, Erasmus University Rotterdam, Rotterdam, the Netherlands
- Erasmus Choice Modelling Centre, Erasmus University Rotterdam, Rotterdam, the Netherlands
- D Mascalzoni
- Centre for Research Ethics and Bioethics, Uppsala University, Box 564, Uppsala, SE-751 22, Sweden
- Erasmus Choice Modelling Centre, Erasmus University Rotterdam, Rotterdam, the Netherlands
6
Li LT, Haley LC, Boyd AK, Bernstam EV. Technical/Algorithm, Stakeholder, and Society (TASS) barriers to the application of artificial intelligence in medicine: A systematic review. J Biomed Inform 2023; 147:104531. PMID: 37884177. DOI: 10.1016/j.jbi.2023.104531.
Abstract
INTRODUCTION The use of artificial intelligence (AI), particularly machine learning and predictive analytics, has shown great promise in health care. Despite this strong potential, AI has seen limited use in health care settings. In this systematic review, we aim to determine the main barriers to successful implementation of AI in healthcare and discuss potential ways to overcome these challenges. METHODS We conducted a literature search in PubMed (1/1/2001-1/1/2023). The search was restricted to publications in the English language and human study subjects. We excluded articles that did not discuss AI, machine learning, predictive analytics, and barriers to the use of these techniques in health care. Using grounded theory methodology, we abstracted concepts to identify major barriers to AI use in medicine. RESULTS We identified a total of 2,382 articles. After reviewing the 306 included papers, we developed 19 major themes, which we categorized into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). These themes included: Lack of Explainability, Need for Validation Protocols, Need for Standards for Interoperability, Need for Reporting Guidelines, Need for Standardization of Performance Metrics, Lack of Plan for Updating Algorithm, Job Loss, Skills Loss, Workflow Challenges, Loss of Patient Autonomy and Consent, Disturbing the Patient-Clinician Relationship, Lack of Trust in AI, Logistical Challenges, Lack of Strategic Plan, Lack of Cost-Effectiveness Analysis and Proof of Efficacy, Privacy, Liability, Bias and Social Justice, and Education. CONCLUSION We identified 19 major barriers to the use of AI in healthcare and categorized them into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). Future studies should expand on barriers in pediatric care and focus on developing clearly defined protocols to overcome these barriers.
Affiliation(s)
- Linda T Li
- Department of Surgery, Division of Pediatric Surgery, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, United States; McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States
- Lauren C Haley
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Alexandra K Boyd
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Elmer V Bernstam
- McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States; McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
7
Kim JP, Ryan K, Kasun M, Hogg J, Dunn LB, Roberts LW. Physicians' and Machine Learning Researchers' Perspectives on Ethical Issues in the Early Development of Clinical Machine Learning Tools: Qualitative Interview Study. JMIR AI 2023; 2:e47449. PMID: 38875536. PMCID: PMC11041441. DOI: 10.2196/47449.
Abstract
BACKGROUND Innovative tools leveraging artificial intelligence (AI) and machine learning (ML) are rapidly being developed for medicine, with new applications emerging in prediction, diagnosis, and treatment across a range of illnesses, patient populations, and clinical procedures. One barrier to successful innovation is the scarcity of research in the current literature seeking and analyzing the views of AI or ML researchers and physicians to support ethical guidance. OBJECTIVE This study aims to describe, using a qualitative approach, the landscape of ethical issues that AI or ML researchers and physicians with professional exposure to AI or ML tools observe or anticipate in the development and use of AI and ML in medicine. METHODS Semistructured interviews were used to facilitate in-depth, open-ended discussion, and a purposeful sampling technique was used to identify and recruit participants. We conducted 21 semistructured interviews with a purposeful sample of AI and ML researchers (n=10) and physicians (n=11). We asked interviewees about their views regarding ethical considerations related to the adoption of AI and ML in medicine. Interviews were transcribed and deidentified by members of our research team. Data analysis was guided by the principles of qualitative content analysis. This approach, in which transcribed data are broken down into descriptive units that are named and sorted based on their content, allows codes to emerge inductively from the data set. RESULTS Notably, both researchers and physicians articulated concerns regarding how AI and ML innovations are shaped in their early development (ie, the problem formulation stage). Considerations encompassed the assessment of research priorities and motivations, clarity and centeredness of clinical needs, professional and demographic diversity of research teams, and interdisciplinary knowledge generation and collaboration. Phase-1 ethical issues identified by interviewees were notably interdisciplinary in nature and invited questions regarding how to align priorities and values across disciplines and ensure clinical value throughout the development and implementation of medical AI and ML. Relatedly, interviewees suggested interdisciplinary solutions to these issues, for example, more resources to support knowledge generation and collaboration between developers and physicians, engagement with a broader range of stakeholders, and efforts to increase diversity in research broadly and within individual teams. CONCLUSIONS These qualitative findings help elucidate several ethical challenges anticipated or encountered in AI and ML for health care. Our study is unique in that its use of open-ended questions allowed interviewees to explore their sentiments and perspectives without overreliance on implicit assumptions about what AI and ML currently are or are not. This analysis, however, does not include the perspectives of other relevant stakeholder groups, such as patients, ethicists, industry researchers or representatives, or other health care professionals beyond physicians. Additional qualitative and quantitative research is needed to reproduce and build on these findings.
Affiliation(s)
- Jane Paik Kim
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Palo Alto, CA, United States
- Katie Ryan
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Palo Alto, CA, United States
- Max Kasun
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Palo Alto, CA, United States
- Justin Hogg
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Palo Alto, CA, United States
- Laura B Dunn
- Department of Psychiatry, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Laura Weiss Roberts
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Palo Alto, CA, United States
8
Gould DJ, Dowsey MM, Glanville-Hearst M, Spelman T, Bailey JA, Choong PFM, Bunzli S. Patients' Views on AI for Risk Prediction in Shared Decision-Making for Knee Replacement Surgery: Qualitative Interview Study. J Med Internet Res 2023; 25:e43632. PMID: 37721797. PMCID: PMC10546266. DOI: 10.2196/43632.
Abstract
BACKGROUND The use of artificial intelligence (AI) in decision-making around knee replacement surgery is increasing, and this technology holds promise to improve the prediction of patient outcomes. Ambiguity surrounds the definition of AI, and there are mixed views on its application in clinical settings. OBJECTIVE In this study, we aimed to explore the understanding and attitudes of patients who underwent knee replacement surgery regarding AI in the context of risk prediction for shared clinical decision-making. METHODS This qualitative study involved patients who underwent knee replacement surgery at a tertiary referral center for joint replacement surgery. The participants were selected based on their age and sex. Semistructured interviews explored the participants' understanding of AI and their opinions on its use in shared clinical decision-making. Data collection and reflexive thematic analyses were conducted concurrently. Recruitment continued until thematic saturation was achieved. RESULTS Thematic saturation was achieved with 19 interviews and confirmed with 1 additional interview, resulting in 20 participants being interviewed (female participants: n=11, 55%; male participants: n=9, 45%; median age: 66 years). A total of 11 (55%) participants had a substantial postoperative complication. Three themes captured the participants' understanding of AI and their perceptions of its use in shared clinical decision-making. The theme Expectations captured the participants' views of themselves as individuals with the right to self-determination as they sought therapeutic solutions tailored to their circumstances, needs, and desires, including whether to use AI at all. The theme Empowerment highlighted the potential of AI to enable patients to develop realistic expectations and equip them with personalized risk information to discuss in shared decision-making conversations with the surgeon. The theme Partnership captured the importance of symbiosis between AI and clinicians because AI has varied levels of interpretability and understanding of human emotions and empathy. CONCLUSIONS Patients who underwent knee replacement surgery in this study had varied levels of familiarity with AI and diverse conceptualizations of its definitions and capabilities. Educating patients about AI through nontechnical explanations and illustrative scenarios could help inform their decision to use it for risk prediction in the shared decision-making process with their surgeon. These findings could be used in the process of developing a questionnaire to ascertain the views of patients undergoing knee replacement surgery on the acceptability of AI in shared clinical decision-making. Future work could investigate the accuracy of this patient group's understanding of AI, beyond their familiarity with it, and how this influences their acceptance of its use. Surgeons may play a key role in finding a place for AI in the clinical setting as the uptake of this technology in health care continues to grow.
Affiliation(s)
- Daniel J Gould
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Michelle M Dowsey
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Tim Spelman
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- James A Bailey
- School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
- Peter F M Choong
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Samantha Bunzli
- School of Health Sciences and Social Work, Griffith University, Brisbane, Australia
9
McCradden MD. Ethics, First. Am J Bioeth 2023; 23:55-56. PMID: 37647467. DOI: 10.1080/15265161.2023.2237459.
10
Bouhouita-Guermech S, Gogognon P, Bélisle-Pipon JC. Specific challenges posed by artificial intelligence in research ethics. Front Artif Intell 2023; 6:1149082. PMID: 37483869. PMCID: PMC10358356. DOI: 10.3389/frai.2023.1149082.
Abstract
Background The twenty-first century is often defined as the era of Artificial Intelligence (AI), which raises many questions regarding its impact on society. AI is already significantly changing many practices in different fields, and research ethics (RE) is no exception. Many challenges arise, including responsibility, privacy, and transparency. Research ethics boards (REBs) have been established to ensure that ethical practices are adequately followed during research projects. This scoping review aims to bring out the challenges of AI in research ethics and to investigate whether REBs are equipped to evaluate them. Methods Three electronic databases were selected to collect peer-reviewed articles that fit the inclusion criteria (English or French, published between 2016 and 2021, containing AI, RE, and REB). Two investigators independently reviewed each article, screening with Covidence and then coding with NVivo. Results From a total of 657 articles to review, we were left with a final sample of 28 relevant papers for our scoping review. The selected literature described AI in research ethics (i.e., views on current guidelines, key ethical concepts and approaches, key issues of the current state of AI-specific RE guidelines) and REBs regarding AI (i.e., their roles, scope and approaches, key practices and processes, limitations and challenges, stakeholder perceptions). However, the literature often described REBs' ethical assessment practices for AI research projects as lacking knowledge and tools. Conclusion Ethical reflection is taking a step forward, while the adaptation of normative guidelines to the reality of AI still lags behind. This impacts REBs and most stakeholders involved with AI. Indeed, REBs are not equipped to adequately evaluate AI research ethics and require standard guidelines to help them do so.
Affiliation(s)
- Jean-Christophe Bélisle-Pipon: School of Public Health, Université de Montréal, Montréal, QC, Canada; Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada

11
Koechli C, Zwahlen DR, Schucht P, Windisch P. Radiomics and machine learning for predicting the consistency of benign tumors of the central nervous system: A systematic review. Eur J Radiol 2023; 164:110866. [PMID: 37207398] [DOI: 10.1016/j.ejrad.2023.110866]
Abstract
PURPOSE Predicting the consistency of benign central nervous system (CNS) tumors prior to surgery helps to improve surgical outcomes. This review summarizes and analyzes the literature on using radiomics and/or machine learning (ML) for consistency prediction. METHOD The Medical Literature Analysis and Retrieval System Online (MEDLINE) database was screened for studies published in English from January 1st, 2000. Data were extracted according to the PRISMA guidelines, and the quality of the studies was assessed in compliance with the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2). RESULTS Eight publications were included, focusing on pituitary macroadenomas (n = 5), pituitary adenomas (n = 1), and meningiomas (n = 2), using retrospective (n = 6), prospective (n = 1), and unknown (n = 1) study designs with a total of 763 patients for consistency prediction. The studies reported an area under the curve (AUC) of 0.71-0.99 for their respective best-performing models. Four of the articles validated their models internally, whereas none validated their models externally. Two articles stated that data were available on request; the remaining publications lacked information on data availability. CONCLUSIONS Research on consistency prediction of CNS tumors using radiomics and different ML techniques is still at an early stage. Best-practice procedures for radiomics and ML need to be followed more rigorously to facilitate comparison between publications and, accordingly, possible implementation into clinical practice in the future.
Affiliation(s)
- Carole Koechli: Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland; Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Daniel R Zwahlen: Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland
- Philippe Schucht: Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Paul Windisch: Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland

12
Thai K, Tsiandoulas KH, Stephenson EA, Menna-Dack D, Zlotnik Shaul R, Anderson JA, Shinewald AR, Ampofo A, McCradden MD. Perspectives of Youths on the Ethical Use of Artificial Intelligence in Health Care Research and Clinical Care. JAMA Netw Open 2023; 6:e2310659. [PMID: 37126349] [PMCID: PMC10152306] [DOI: 10.1001/jamanetworkopen.2023.10659]
Abstract
Importance Understanding the views and values of patients is of substantial importance to developing the ethical parameters of artificial intelligence (AI) use in medicine. Thus far, there has been limited study of the views of children and youths, whose perspectives can contribute meaningfully to the integration of AI in medicine. Objective To explore the moral attitudes and views of children and youths regarding research and clinical care involving health AI at the point of care. Design, Setting, and Participants This qualitative study recruited participants younger than 18 years during a 1-year period (October 2021 to March 2022) at a large urban pediatric hospital. A total of 44 individuals who were receiving or had previously received care at a hospital or rehabilitation clinic contacted the research team, but 15 were found to be ineligible. Of the 29 who consented to participate, 1 was lost to follow-up, resulting in 28 participants who completed the interview. Exposures Participants were interviewed using vignettes on 3 main themes: (1) health data research, (2) clinical AI trials, and (3) clinical use of AI. Main Outcomes and Measures Thematic description of values surrounding health data research, interventional AI research, and clinical use of AI. Results The 28 participants included 6 children (ages, 10-12 years) and 22 youths (ages, 13-17 years) (16 female, 10 male, and 3 trans/nonbinary/gender diverse). Mean (SD) age was 15 (2) years. Participants were highly engaged and quite knowledgeable about AI. They expressed a positive view of research intended to help others and had strong feelings about the uses of their health data for AI. Participants expressed appreciation for the vulnerability of potential participants in interventional AI trials and reinforced the importance of respect for their preferences regardless of their decisional capacity. A strong theme for the prospective use of clinical AI was the desire to maintain bedside interaction between the patient and their physician. Conclusions and Relevance In this study, children and youths reported generally positive views of AI, expressing strong interest and advocacy for their involvement in AI research and the inclusion of their voices in shared decision-making with AI in clinical care. These findings suggest the need for more engagement of children and youths in health care AI research and integration.
Affiliation(s)
- Kelly Thai: Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada; Genetics & Genome Biology, Peter Gilgan Centre for Research & Learning, Toronto, Ontario, Canada
- Kate H Tsiandoulas: Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Elizabeth A Stephenson: Labatt Family Heart Centre, The Hospital for Sick Children, Toronto, Ontario, Canada; Department of Paediatrics, University of Toronto, Toronto, Ontario, Canada
- Dolly Menna-Dack: Holland Bloorview Kids Rehabilitation Hospital, Toronto, Ontario, Canada
- Randi Zlotnik Shaul: Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada; Department of Paediatrics, University of Toronto, Toronto, Ontario, Canada
- James A Anderson: Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada; Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Melissa D McCradden: Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada; Genetics & Genome Biology, Peter Gilgan Centre for Research & Learning, Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada

13
Kelly BS, Kirwan A, Quinn MS, Kelly AM, Mathur P, Lawlor A, Killeen RP. The ethical matrix as a method for involving people living with disease and the wider public (PPI) in near-term artificial intelligence research. Radiography (Lond) 2023; 29 Suppl 1:S103-S111. [PMID: 37062673] [DOI: 10.1016/j.radi.2023.03.009]
Abstract
INTRODUCTION The rapid pace of research on artificial intelligence (AI) in medicine carries associated risks for near-term AI, and ethical considerations of the use of AI in medicine remain a subject of much debate. Concurrently, the involvement of people living with disease and the public (PPI) in research is becoming mandatory in the EU and UK. The goal of this research was to elucidate the values important to the relevant stakeholders (people with multiple sclerosis (MS), radiologists, neurologists, registered healthcare practitioners, and computer scientists) concerning AI in radiology, and to synthesize these in an ethical matrix. METHODS An ethical matrix workshop was co-designed with a patient expert. The workshop yielded a survey which was disseminated to the professional societies of the relevant stakeholders. Quantitative data were analysed using the Pingouin 0.53 Python package. Qualitative data were examined with word frequency analysis and analysed for themes using grounded theory with a patient expert. RESULTS 184 participants were recruited (54, 60, 17, 12, and 41 per stakeholder group, respectively). There were significant (p < 0.00001) differences in age, gender, and ethnicity between groups. Key themes emerging from our results were the importance of fast and accurate results, of explanations over model performance, and the significance of maintaining personal connections and choice. These themes were used to construct the ethical matrix. CONCLUSION The ethical matrix is a useful tool for PPI and stakeholder engagement, with particular advantages for near-term AI in the pandemic era. IMPLICATIONS FOR PRACTICE We have produced an ethical matrix that allows for the inclusion of stakeholder opinion in medical AI research design.
Affiliation(s)
- B S Kelly: School of Medicine, UCD, Belfield, Dublin 4, Ireland; Department of Radiology, St Vincent's University Hospital, Dublin 4, Ireland; School of Computer Science and Insight Centre, UCD Belfield, Dublin 4, Ireland
- A Kirwan: Multiple Sclerosis Ireland National Office, 80 Northumberland Road, Dublin 4, Ireland
- M S Quinn: School of Computer Science and Insight Centre, UCD Belfield, Dublin 4, Ireland
- A M Kelly: School of Education, Trinity College Dublin, Dublin 2, Ireland
- P Mathur: Department of Radiology, St Vincent's University Hospital, Dublin 4, Ireland
- A Lawlor: Department of Radiology, St Vincent's University Hospital, Dublin 4, Ireland
- R P Killeen: School of Medicine, UCD, Belfield, Dublin 4, Ireland

14
Jeyakumar T, Younus S, Zhang M, Clare M, Charow R, Karsan I, Dhalla A, Al-Mouaswas D, Scandiffio J, Aling J, Salhia M, Lalani N, Overholt S, Wiljer D. Preparing for an Artificial Intelligence-Enabled Future: Patient Perspectives on Engagement and Health Care Professional Training for Adopting Artificial Intelligence Technologies in Health Care Settings. JMIR AI 2023; 2:e40973. [PMID: 38875561] [PMCID: PMC11041489] [DOI: 10.2196/40973]
Abstract
BACKGROUND As new technologies emerge, there is a significant shift in the way care is delivered on a global scale. Artificial intelligence (AI) technologies have been rapidly adopted to optimize patient outcomes, reduce health system costs, improve workflow efficiency, and enhance population health. Despite this widespread adoption, the literature on patient engagement and patient perspectives on how AI will affect clinical care is scarce. Minimal patient engagement can limit the optimization of these novel technologies and contribute to their suboptimal use in care settings. OBJECTIVE We aimed to explore patients' views on what skills they believe health care professionals should have in preparation for an AI-enabled future, and how patients can be better engaged when AI technologies are adopted and deployed in health care settings. METHODS Semistructured interviews were conducted from August 2020 to December 2021 with 12 individuals who had been patients in a Canadian health care setting. Interviews were conducted until thematic saturation occurred. A thematic analysis approach outlined by Braun and Clarke was used to inductively analyze the data and identify overarching themes. RESULTS Among the 12 patients interviewed, 8 (67%) were from urban settings and 4 (33%) were from rural settings. Half of the participants (n=6, 50%) were very comfortable with technology, and a majority (n=7, 58%) were somewhat familiar with AI. In total, 3 themes emerged: cultivating patients' trust, fostering patient engagement, and establishing data governance and validation of AI technologies. CONCLUSIONS With the rapid surge of AI solutions, there is a critical need to understand patient values in advancing the quality of care and contributing to an equitable health system. Our study demonstrated that health care professionals play a synergetic role in the future of AI and digital technologies. Patient engagement is vital in addressing underlying health inequities and fostering an optimal care experience. Future research is warranted to understand and capture the diverse perspectives of patients with various racial, ethnic, and socioeconomic backgrounds.
Affiliation(s)
- Megan Clare: Michener Institute of Education, University Health Network, Toronto, ON, Canada
- Rebecca Charow: University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Inaara Karsan: University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Dalia Al-Mouaswas: Michener Institute of Education, University Health Network, Toronto, ON, Canada
- Justin Aling: Patient Partner Program, University Health Network, Toronto, ON, Canada
- Mohammad Salhia: Michener Institute of Education, University Health Network, Toronto, ON, Canada
- Scott Overholt: Patient Partner Program, University Health Network, Toronto, ON, Canada
- David Wiljer: University Health Network, Toronto, ON, Canada; Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada; Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Office of Education, Centre for Addiction and Mental Health, Toronto, ON, Canada

15
Macri R, Roberts SL. The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making. Curr Oncol 2023; 30:2178-2186. [PMID: 36826129] [PMCID: PMC9955933] [DOI: 10.3390/curroncol30020168]
Abstract
Clinical applications of artificial intelligence (AI) in healthcare, including in the field of oncology, have the potential to advance diagnosis and treatment. The literature suggests that patient values should be considered in decision making when using AI in clinical care; however, there is a lack of practical guidance for clinicians on how to approach these conversations and incorporate patient values into clinical decision making. We provide a practical, values-based guide for clinicians to assist in critical reflection and the incorporation of patient values into shared decision making when deciding to use AI in clinical care. Values that are relevant to patients, identified in the literature, include trust, privacy and confidentiality, non-maleficence, safety, accountability, beneficence, autonomy, transparency, compassion, equity, justice, and fairness. The guide offers questions for clinicians to consider when adopting the potential use of AI in their practice; explores illness understanding between the patient and clinician; encourages open dialogue of patient values; reviews all clinically appropriate options; and makes a shared decision of what option best meets the patient's values. The guide can be used for diverse clinical applications of AI.
Affiliation(s)
- Rosanna Macri: Department of Bioethics, Sinai Health, Toronto, ON M5G 1X5, Canada; Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON M5T 1P8, Canada; Department of Radiation Oncology, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5T 1P5, Canada
- Shannon L. Roberts: Project-Specific Bioethics Research Volunteer Student, Hennick Bridgepoint Hospital, Sinai Health, Toronto, ON M4M 2B5, Canada

16
Wu C, Xu H, Bai D, Chen X, Gao J, Jiang X. Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis. BMJ Open 2023; 13:e066322. [PMID: 36599634] [PMCID: PMC9815015] [DOI: 10.1136/bmjopen-2022-066322]
Abstract
OBJECTIVES Medical artificial intelligence (AI) has been widely applied in the clinical field due to its convenience and innovation. However, several policy and regulatory issues, such as credibility, sharing of responsibility, and ethics, have raised concerns about the use of AI. It is therefore necessary to understand the general public's views on medical AI. Here, a meta-synthesis was conducted to analyse and summarise the public's understanding of the application of AI in the healthcare field, and to provide recommendations for the future use and management of AI in medical practice. DESIGN This was a meta-synthesis of qualitative studies. METHOD A search was performed on the following databases to identify studies published in English and Chinese: MEDLINE, CINAHL, Web of Science, Cochrane Library, Embase, PsycINFO, CNKI, Wanfang, and VIP. The search was conducted from database inception to 25 December 2021. The JBI meta-aggregation approach was used to summarise findings from qualitative studies, focusing on the public's perception of the application of AI in healthcare. RESULTS Of the 5128 studies screened, 12 met the inclusion criteria and were incorporated into the analysis. Three synthesised findings formed the basis of our conclusions: advantages of medical AI from the public's perspective, ethical and legal concerns about medical AI from the public's perspective, and public suggestions on the application of AI in the medical field. CONCLUSION Results showed that the public acknowledges the unique advantages and convenience of medical AI. At the same time, several concerns about the application of medical AI were observed, most involving ethical and legal issues. The standardised application and reasonable supervision of medical AI are key to ensuring its effective utilisation. Based on the public's perspective, this analysis provides insights and suggestions for health managers on how to implement and apply medical AI smoothly, while ensuring safety in healthcare practice. PROSPERO REGISTRATION NUMBER CRD42022315033.
Affiliation(s)
- Chenxi Wu: West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China; School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Huiqiong Xu: West China School of Nursing, Sichuan University/Abdominal Oncology Ward, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Dingxi Bai: School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xinyu Chen: School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Jing Gao: School of Nursing, Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Xiaolian Jiang: West China School of Nursing/West China Hospital, Sichuan University, Chengdu, Sichuan, China

17
Partnering with children and youth to advance artificial intelligence in healthcare. Pediatr Res 2023; 93:284-286. [PMID: 35681090] [DOI: 10.1038/s41390-022-02139-z]
18
Tang L, Li J, Fantus S. Medical artificial intelligence ethics: A systematic review of empirical studies. Digit Health 2023; 9:20552076231186064. [PMID: 37434728] [PMCID: PMC10331228] [DOI: 10.1177/20552076231186064]
Abstract
Background Artificial intelligence (AI) technologies are transforming medicine and healthcare. Scholars and practitioners have debated the philosophical, ethical, legal, and regulatory implications of medical AI, and empirical research on stakeholders' knowledge, attitudes, and practices has started to emerge. This study is a systematic review of published empirical studies of medical AI ethics, with the goal of mapping the main approaches, findings, and limitations of the scholarship to inform future practice considerations. Methods We searched seven databases for published peer-reviewed empirical studies on medical AI ethics and evaluated them in terms of the types of technologies studied, geographic locations, stakeholders involved, research methods used, ethical principles studied, and major findings. Findings Thirty-six studies were included (published 2013-2022). They typically belonged to one of three topics: exploratory studies of stakeholder knowledge of and attitudes toward medical AI, theory-building studies testing hypotheses about factors contributing to stakeholders' acceptance of medical AI, and studies identifying and correcting bias in medical AI. Interpretation There is a disconnect between the high-level ethical principles and guidelines developed by ethicists and empirical research on the topic, and a need to embed ethicists, in tandem with AI developers, clinicians, patients, and scholars of innovation and technology adoption, in the study of medical AI ethics.
Affiliation(s)
- Lu Tang: Department of Communication and Journalism, Texas A&M University, College Station, TX, USA
- Jinxu Li: Department of Communication and Journalism, Texas A&M University, College Station, TX, USA
- Sophia Fantus: School of Social Work, University of Texas at Arlington, Arlington, TX, USA

19
Feng Y, Leung AA, Lu X, Liang Z, Quan H, Walker RL. Personalized prediction of incident hospitalization for cardiovascular disease in patients with hypertension using machine learning. BMC Med Res Methodol 2022; 22:325. [PMID: 36528631] [PMCID: PMC9758895] [DOI: 10.1186/s12874-022-01814-3]
Abstract
BACKGROUND Prognostic information for patients with hypertension is largely based on population averages. The purpose of this study was to compare the performance of four machine learning approaches for personalized prediction of incident hospitalization for cardiovascular disease among newly diagnosed hypertensive patients. METHODS Using province-wide linked administrative health data in Alberta, we analyzed a cohort of 259,873 newly diagnosed hypertensive patients from 2009 to 2015 who collectively had 11,863 incident hospitalizations for heart failure, myocardial infarction, and stroke. Linear multi-task logistic regression, neural multi-task logistic regression, random survival forest, and Cox proportional hazards models were used to determine the number of event-free survivors at each time point and to construct individual event-free survival probability curves. Predictive performance was evaluated by root mean squared error, mean absolute error, concordance index, and the Brier score. RESULTS The random survival forest model had the lowest root mean squared error (33.94) and the lowest mean absolute error (28.37). The neural multi-task logistic regression model had the highest concordance index (0.8149) and the lowest Brier score (0.0242). Overall, the machine learning methods provided similar discrimination and calibration in the personalized prediction of hospitalization for cardiovascular events in patients with hypertension. CONCLUSIONS This is the first personalized survival prediction for cardiovascular disease among hypertensive patients using administrative data. The four models tested in this analysis exhibited similar discrimination and calibration ability.
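The concordance index reported by this study measures how often a model ranks patients correctly by risk. As a small, self-contained illustration (not code from the study), Harrell's C-index can be computed over all comparable pairs: for each pair where one subject's event precedes the other's observation time, we check whether the model assigned that subject the higher risk score. The data below are toy values.

```python
# Toy sketch of Harrell's concordance index (C-index), the discrimination
# metric reported in the study. 0.5 = random ranking, 1.0 = perfect ranking.
def concordance_index(times, events, risk_scores):
    """Fraction of comparable pairs where the subject who experienced
    the event earlier also received the higher risk score."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair (i, j) is comparable only if subject i had an
            # observed event strictly before subject j's time.
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1      # correctly ranked pair
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5    # ties count as half
    return concordant / comparable

# Perfectly ranked toy cohort: shorter survival <-> higher risk score.
times = [2, 4, 6, 8]
events = [1, 1, 1, 0]          # 1 = event observed, 0 = censored
risks = [0.9, 0.7, 0.5, 0.1]
print(concordance_index(times, events, risks))  # -> 1.0
```

Production analyses would typically use a vetted implementation (e.g. from a survival analysis library) rather than this O(n²) loop, but the pairwise logic is the same.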
Affiliation(s)
- Yuanchao Feng: Centre for Health Informatics, Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; Libin Cardiovascular Institute, University of Calgary, Calgary, AB, Canada
- Alexander A. Leung: Centre for Health Informatics, Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; Libin Cardiovascular Institute, University of Calgary, Calgary, AB, Canada; Department of Medicine, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Xuewen Lu: Department of Mathematics and Statistics, University of Calgary, Calgary, AB, Canada
- Zhiying Liang: Centre for Health Informatics, Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; Libin Cardiovascular Institute, University of Calgary, Calgary, AB, Canada
- Hude Quan: Centre for Health Informatics, Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; Libin Cardiovascular Institute, University of Calgary, Calgary, AB, Canada; O'Brien Institute for Public Health and Alberta Health Services, 3280 Hospital Drive NW, Calgary, AB T2N 4Z6, Canada
- Robin L. Walker: Centre for Health Informatics, Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; O'Brien Institute for Public Health and Alberta Health Services, 3280 Hospital Drive NW, Calgary, AB T2N 4Z6, Canada

20
Cumyn A, Ménard JF, Barton A, Dault R, Lévesque F, Ethier JF. Patients and Members of the Public's Wishes Regarding Transparency in the Context of Secondary Use of Health Data: A Scoping Review. J Med Internet Res 2023; 25:e45002. [PMID: 37052967] [PMCID: PMC10141314] [DOI: 10.2196/45002]
Abstract
BACKGROUND Secondary use of health data has unequaled potential to improve health system governance, knowledge, and clinical care. Transparency regarding this secondary use is frequently cited as necessary to address deficits in trust and conditional support and to increase patient awareness. OBJECTIVE We aimed to review the current published literature to identify different stakeholders' perspectives and recommendations on what information patients and members of the public want to learn about the secondary use of health data for research purposes, and how and in which situations. METHODS Following PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines, we conducted a scoping review using the Medline, CINAHL, PsycINFO, Scopus, Cochrane Library, and PubMed databases to locate a broad range of studies published in English or French up to November 2022. We included articles reporting a stakeholder's perspective or recommendations on what information patients and members of the public want to learn about the secondary use of health data for research purposes and how or in which situations. Data were collected and analyzed with an iterative thematic approach using NVivo. RESULTS Overall, 178 articles were included in this scoping review. The type of information can be divided into generic and specific content. Generic content includes information on governance and regulatory frameworks, technical aspects, and scientific aims. Specific content includes updates on the use of one's data, return of results from individual tests, information on global results, information on data sharing, and how to access one's data. Recommendations on how to communicate this information focused on frequency, use of various supports, formats, and wording. Methods for communication generally favored broad approaches, such as nationwide publicity campaigns and mainstream and social media for generic content, and mixed approaches for specific content, including websites, patient portals, and face-to-face encounters. Content should be tailored to the individual as much as possible with regard to length, avoidance of technical terms, cultural competence, and level of detail. Finally, the review outlined 4 major situations where communication was deemed necessary: before a new use of data, when new test results become available, when global research results are released, and in the event of a breach of confidentiality. CONCLUSIONS This review highlights how different types of information and approaches to communication may serve as the basis for achieving greater transparency. Governing bodies could use the results to elaborate or evaluate strategies to educate on the potential benefits; to provide some knowledge of and control over data use as a form of reciprocity; and as a condition to engage citizens and build and maintain trust. Future work is needed to assess which strategies achieve the greatest outreach while striking a balance between meeting information needs and use of resources.
Collapse
Affiliation(s)
- Annabelle Cumyn
- Département de médecine, Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
| | - Jean-Frédéric Ménard
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Faculté de droit, Université de Sherbrooke, Sherbrooke, QC, Canada
| | - Adrien Barton
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institut de recherche en informatique de Toulouse, Toulouse, France
- Roxanne Dault
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Frédérique Lévesque
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Jean-François Ethier
- Département de médecine, Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
- Groupe de recherche interdisciplinaire en informatique de la santé, Faculté des sciences/Faculté de médecine et des sciences de la santé, Université de Sherbrooke, Sherbrooke, QC, Canada
21
Scheibner J, Ienca M, Vayena E. Health data privacy through homomorphic encryption and distributed ledger computing: an ethical-legal qualitative expert assessment study. BMC Med Ethics 2022; 23:121. PMID: 36451210; PMCID: PMC9713155; DOI: 10.1186/s12910-022-00852-2.
Abstract
BACKGROUND Increasingly, hospitals and research institutes are developing technical solutions for sharing patient data in a privacy-preserving manner. Two of these technical solutions are homomorphic encryption and distributed ledger technology. Homomorphic encryption allows computations to be performed on data without this data ever being decrypted. Therefore, homomorphic encryption represents a potential solution for conducting feasibility studies on cohorts of sensitive patient data stored in distributed locations. Distributed ledger technology provides a permanent record of all transfers and processing of patient data, allowing data custodians to audit access. A significant portion of the current literature has examined how these technologies might comply with data protection and research ethics frameworks. In the Swiss context, these instruments include the Federal Act on Data Protection and the Human Research Act. There are also institutional frameworks that govern the processing of health-related and genetic data at different universities and hospitals. Given Switzerland's geographical proximity to European Union (EU) member states, the General Data Protection Regulation (GDPR) may impose additional obligations. METHODS To conduct this assessment, we carried out a series of qualitative interviews with key stakeholders at Swiss hospitals and research institutions. These included legal and clinical data management staff, as well as clinical and research ethics experts. These interviews were carried out with two series of vignettes that focused on data discovery using homomorphic encryption and data erasure from a distributed ledger platform. RESULTS For our first set of vignettes, interviewees were prepared to allow data discovery requests if patients had provided general consent or ethics committee approval, depending on the types of data made available.
Our interviewees highlighted the importance of protecting against the risk of reidentification given different types of data. For our second set, there was disagreement amongst interviewees on whether they would delete patient data locally, or delete data linked to a ledger with cryptographic hashes. Our interviewees were also willing to delete data locally or on the ledger, subject to local legislation. CONCLUSION Our findings can help guide the deployment of these technologies, as well as determine ethics and legal requirements for such technologies.
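The additively homomorphic property described in this abstract (computing on data that is never decrypted) can be sketched with a toy Paillier cryptosystem. This is an illustrative sketch only, not the scheme or parameters used in the study; the primes below are deliberately tiny and insecure.

```python
# Toy Paillier cryptosystem: multiplying ciphertexts yields an
# encryption of the SUM of the plaintexts, so an aggregator can
# total values it cannot read. Tiny parameters, illustration only.
import math
import random

p, q = 293, 433            # insecure demo primes
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)         # modular inverse

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic addition without ever decrypting the operands:
total = (encrypt(20) * encrypt(22)) % n2
assert decrypt(total) == 42
```

Real deployments rely on vetted libraries and far larger keys; the point here is only the mechanism that lets a data custodian return a computed result, such as a cohort count for a feasibility study, without exposing individual records.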
Affiliation(s)
- James Scheibner
- Health Ethics and Policy Laboratory, Department of Health Sciences and Technology (D-HEST), ETH Zürich, Zurich, Switzerland; College of Business, Government and Law, Flinders University, Adelaide, Australia
- Marcello Ienca
- Health Ethics and Policy Laboratory, Department of Health Sciences and Technology (D-HEST), ETH Zürich, Zurich, Switzerland; College of Humanities, EPFL, Lausanne, Switzerland
- Effy Vayena
- Health Ethics and Policy Laboratory, Department of Health Sciences and Technology (D-HEST), ETH Zürich, Zurich, Switzerland; Department of Health Sciences and Technology, ETH Zürich, Zurich, Switzerland
22
Hantel A, Clancy DD, Kehl KL, Marron JM, Van Allen EM, Abel GA. A Process Framework for Ethically Deploying Artificial Intelligence in Oncology. J Clin Oncol 2022; 40:3907-3911. PMID: 35849792; PMCID: PMC9746763; DOI: 10.1200/jco.22.01113.
23
Professional expectations and patient expectations concerning the development of Artificial Intelligence (AI) for the early diagnosis of Pulmonary Hypertension (PH). J Responsible Technol 2022; 12. PMID: 36568032; PMCID: PMC9767405; DOI: 10.1016/j.jrt.2022.100052.
Abstract
The expectations of professionals working on the development of healthcare Artificial Intelligence (AI) technologies and of the patients who will be affected by them have received limited attention. This paper reports on a Foresight Workshop with professionals involved with pulmonary hypertension (PH) and a Focus Group with members of a PH patient group, held to discuss expectations of AI development and implementation. We show that while professionals and patients had similar expectations of AI with respect to the priority of early diagnosis, data risks of privacy and reuse, and responsibility, other expectations differed. One important point of difference was in the attitude toward using AI to flag other potential health problems (in addition to PH). A second difference was in the expectations regarding how much clinical professionals should know about the role of AI in diagnosis. These findings help prepare for the future by providing a frank appraisal of the complexities of AI development and of the anxieties of key stakeholders.
24
Weinert L, Klass M, Schneider G, Heinze O. Exploring Stakeholder Requirements to Enable Research and Development of AI Algorithms in a Hospital-Based Generic Infrastructure: Results of a Multi-Step Mixed-Methods Study. JMIR Form Res 2022; 7:e43958. PMID: 37071450; PMCID: PMC10155093; DOI: 10.2196/43958.
Abstract
BACKGROUND Legal, controlled, and regulated access to high-quality data from academic hospitals currently poses a barrier to the development and testing of new artificial intelligence (AI) algorithms. To overcome this barrier, the German Federal Ministry of Health supports the "pAItient" (Protected Artificial Intelligence Innovation Environment for Patient Oriented Digital Health Solutions for developing, testing and evidence-based evaluation of clinical value) project, with the goal to establish an AI Innovation Environment at the Heidelberg University Hospital, Germany. It is designed as a proof-of-concept extension to the preexisting Medical Data Integration Center. OBJECTIVE The first part of the pAItient project aims to explore stakeholders' requirements for developing AI in partnership with an academic hospital and granting AI experts access to anonymized personal health data. METHODS We designed a multistep mixed methods approach. First, researchers and employees from stakeholder organizations were invited to participate in semistructured interviews. In the following step, questionnaires were developed based on the participants' answers and distributed among the stakeholders' organizations. In addition, patients and physicians were interviewed. RESULTS The identified requirements covered a wide range and were sometimes conflicting. Relevant patient requirements included adequate provision of necessary information for data use, a clear medical objective of the research and development activities, trustworthiness of the organization collecting the patient data, and assurance that data could not be reidentified. Requirements of AI researchers and developers encompassed contact with clinical users, an acceptable user interface (UI) for shared data platforms, a stable connection to the planned infrastructure, relevant use cases, and assistance in dealing with data privacy regulations.
In the next step, a requirements model was developed that depicts the identified requirements in different layers. This model will be used to communicate stakeholder requirements within the pAItient project consortium. CONCLUSIONS The study led to the identification of necessary requirements for the development, testing, and validation of AI applications within a hospital-based generic infrastructure. A requirements model was developed, which will inform the next steps in the development of an AI innovation environment at our institution. Results from our study replicate previous findings from other contexts and will add to the emerging discussion on the use of routine medical data for the development of AI applications. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) RR2-10.2196/42208.
Affiliation(s)
- Lina Weinert
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Section for Translational Health Economics, Department for Conservative Dentistry, Heidelberg University Hospital, Heidelberg, Germany
- Maximilian Klass
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Gerd Schneider
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Oliver Heinze
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
25
Anderson JA, McCradden MD, Stephenson EA. Response to Open Peer Commentaries: On Social Harms, Big Tech, and Institutional Accountability. Am J Bioeth 2022; 22:W6-W8. PMID: 35593914; DOI: 10.1080/15265161.2022.2075977.
Affiliation(s)
- Melissa D McCradden
- The Hospital for Sick Children
- Peter Gilgan Centre for Research and Learning
- Dalla Lana School of Public Health
26
Machine Learning for the Detection and Segmentation of Benign Tumors of the Central Nervous System: A Systematic Review. Cancers (Basel) 2022; 14:2676. PMID: 35681655; PMCID: PMC9179850; DOI: 10.3390/cancers14112676.
Abstract
Simple Summary: Machine learning in radiology of the central nervous system has seen many interesting publications in the past few years. Since the focus has largely been on malignant tumors such as brain metastases and high-grade gliomas, we conducted a systematic review on benign tumors to summarize what has been published and where there might be gaps in the research. We found several studies that report good results, but the descriptions of methodologies could be improved to enable better comparisons and assessment of biases. Abstract: Objectives: To summarize the available literature on using machine learning (ML) for the detection and segmentation of benign tumors of the central nervous system (CNS) and to assess the adherence of published ML/diagnostic accuracy studies to best practice. Methods: The MEDLINE database was searched for the use of ML in patients with any benign tumor of the CNS, and the records were screened according to PRISMA guidelines. Results: Eleven retrospective studies focusing on meningioma (n = 4), vestibular schwannoma (n = 4), pituitary adenoma (n = 2) and spinal schwannoma (n = 1) were included. The majority of studies attempted segmentation. Links to repositories containing code were provided in two manuscripts, and no manuscripts shared imaging data. Only one study used an external test set, which raises the question of whether some of the good performances that have been reported were caused by overfitting and may not generalize to data from other institutions. Conclusions: Using ML for detecting and segmenting benign brain tumors is still in its infancy. Stronger adherence to ML best practices could facilitate easier comparisons between studies and contribute to the development of models that are more likely to one day be used in clinical practice.
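The review's finding that only one study used an external test set can be illustrated with a deliberately naive sketch (synthetic one-dimensional data and a hand-rolled 1-nearest-neighbour classifier; nothing here is from the reviewed studies): a model can score perfectly on the cohort it was built from yet degrade on an external cohort whose distribution is shifted.

```python
# Illustrates why internal-only evaluation can overstate performance:
# a 1-nearest-neighbour "model" is perfect on its own training data
# but degrades on an external cohort from a shifted distribution.
import random

random.seed(0)  # deterministic synthetic data

def make_cohort(n, shift=0.0):
    """Generate (feature, label) pairs; `shift` models a site difference."""
    cohort = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(label + shift, 1.0)
        cohort.append((x, label))
    return cohort

def predict(train, x):
    # Label of the training point closest to x (1-nearest neighbour).
    return min(train, key=lambda t: abs(t[0] - x))[1]

def accuracy(train, cohort):
    return sum(predict(train, x) == y for x, y in cohort) / len(cohort)

internal = make_cohort(200)
external = make_cohort(200, shift=0.7)  # distribution shift between sites

internal_acc = accuracy(internal, internal)  # evaluated on its own data
external_acc = accuracy(internal, external)  # evaluated externally
assert internal_acc == 1.0
assert external_acc < internal_acc
```

The perfect internal score is an artifact of testing on the training data; only the external cohort reveals how the model behaves elsewhere, which is why the review treats external validation as a marker of trustworthy reporting.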
27
McCradden MD, Anderson JA, Stephenson EA, Drysdale E, Erdman L, Goldenberg A, Zlotnik Shaul R. A Research Ethics Framework for the Clinical Translation of Healthcare Machine Learning. Am J Bioeth 2022; 22:8-22. PMID: 35048782; DOI: 10.1080/15265161.2021.2013977.
Abstract
The application of artificial intelligence and machine learning (ML) technologies in healthcare has immense potential to improve the care of patients. While there are some emerging practices surrounding responsible ML as well as regulatory frameworks, the traditional role of research ethics oversight has been relatively unexplored regarding its relevance for clinical ML. In this paper, we provide a comprehensive research ethics framework that can apply to the systematic inquiry of ML research across its development cycle. The pathway consists of three stages: (1) exploratory, hypothesis-generating data access; (2) silent period evaluation; (3) prospective clinical evaluation. We connect each stage to its literature and ethical justification and suggest adaptations to traditional paradigms to suit ML while maintaining ethical rigor and the protection of individuals. This pathway can accommodate a multitude of research designs from observational to controlled trials, and the stages can apply individually to a variety of ML applications.
Affiliation(s)
- Melissa D McCradden
- Department of Bioethics, The Hospital for Sick Children
- Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning
- Division of Clinical & Public Health, Dalla Lana School of Public Health
- James A Anderson
- Department of Bioethics, The Hospital for Sick Children
- Institute of Health Policy, Management and Evaluation, University of Toronto
- Elizabeth A Stephenson
- Labatt Family Heart Centre, The Hospital for Sick Children
- Department of Pediatrics, The Hospital for Sick Children
- Erik Drysdale
- Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning
- Lauren Erdman
- Genetics and Genome Biology, The Hospital for Sick Children, Peter Gilgan Centre for Research and Learning
- Vector Institute
- Department of Computer Science, University of Toronto
- Anna Goldenberg
- Department of Bioethics, The Hospital for Sick Children
- Vector Institute
- Department of Computer Science, University of Toronto
- CIFAR
- Randi Zlotnik Shaul
- Department of Bioethics, The Hospital for Sick Children
- Department of Pediatrics, The Hospital for Sick Children
- Child Health Evaluative Sciences, The Hospital for Sick Children
28
Romero RA, Young SD. Public perceptions and implementation considerations on the use of artificial intelligence in health. J Eval Clin Pract 2022; 28:75-78. PMID: 33977613; DOI: 10.1111/jep.13580.
Affiliation(s)
- Romina A Romero
- Department of Emergency Medicine, University of California, Irvine, Irvine, CA, USA
- Sean D Young
- Department of Emergency Medicine, University of California, Irvine, Irvine, CA, USA; University of California Institute for Prediction Technology, Department of Informatics, University of California, Irvine, Irvine, CA, USA
29
Chew HSJ, Achananuparp P. Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review. J Med Internet Res 2022; 24:e32939. PMID: 35029538; PMCID: PMC8800095; DOI: 10.2196/32939.
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to improve the efficiency and effectiveness of health care service delivery. However, perceptions of and needs surrounding such systems remain elusive, hindering efforts to promote AI adoption in health care. OBJECTIVE This study aims to provide an overview of the perceptions and needs of AI to increase its adoption in health care. METHODS A systematic scoping review was conducted according to the 5-stage framework by Arksey and O'Malley. Nine databases (ACM Library, CINAHL, Cochrane Central, Embase, IEEE Xplore, PsycINFO, PubMed, Scopus, and Web of Science) were searched for articles describing the perceptions and needs of AI in health care, published from inception until June 21, 2021. Articles that were not specific to AI, were not research studies, or were not written in English were omitted. RESULTS Of the 3666 articles retrieved, 26 (0.71%) were eligible and included in this review. The mean age of the participants ranged from 30 to 72.6 years, the proportion of men ranged from 0% to 73.4%, and the sample sizes for primary studies ranged from 11 to 2780. The perceptions and needs of various populations in the use of AI were identified for general, primary, and community health care; chronic disease self-management and self-diagnosis; mental health; and diagnostic procedures. The use of AI was perceived positively because of its availability, ease of use, and potential to improve efficiency and reduce the cost of health care service delivery. However, concerns were raised regarding the lack of trust in data privacy protections, patient safety, technological maturity, and the possibility of full automation.
Suggestions for improving the adoption of AI in health care were highlighted: enhancing personalization and customizability; enhancing empathy and personification of AI-enabled chatbots and avatars; enhancing user experience, design, and interconnectedness with other devices; and educating the public on AI capabilities. Several corresponding mitigation strategies were also identified in this study. CONCLUSIONS The perceptions and needs of AI in its use in health care are crucial in improving its adoption by various stakeholders. Future studies and implementations should consider the points highlighted in this study to enhance the acceptability and adoption of AI in health care. This would facilitate an increase in the effectiveness and efficiency of health care service delivery to improve patient outcomes and satisfaction.
Affiliation(s)
- Han Shi Jocelyn Chew
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Palakorn Achananuparp
- Living Analytics Research Centre, Singapore Management University, Singapore, Singapore
30
Sounderajah V, Normahani P, Aggarwal R, Jayakumar S, Markar SR, Ashrafian H, Darzi A. Reporting Standards and Quality Assessment Tools in Artificial Intelligence–Centered Healthcare Research. Artif Intell Med 2022. DOI: 10.1007/978-3-030-64573-1_34.
31
Scott IA, Carter SM, Coiera E. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform 2021; 28:e100450. PMID: 34887331; PMCID: PMC8663096; DOI: 10.1136/bmjhci-2021-100450.
Abstract
Objectives Different stakeholders may hold varying attitudes towards artificial intelligence (AI) applications in healthcare, which may constrain their acceptance if AI developers fail to take them into account. We set out to ascertain evidence of the attitudes of clinicians, consumers, managers, researchers, regulators and industry towards AI applications in healthcare. Methods We undertook an exploratory analysis of articles whose titles or abstracts contained the terms ‘artificial intelligence’ or ‘AI’ and ‘medical’ or ‘healthcare’ and ‘attitudes’, ‘perceptions’, ‘opinions’, ‘views’, ‘expectations’. Using a snowballing strategy, we searched PubMed and Google Scholar for articles published 1 January 2010 through 31 May 2021. We selected articles relating to non-robotic clinician-facing AI applications used to support healthcare-related tasks or decision-making. Results Across 27 studies, attitudes towards AI applications in healthcare, in general, were positive, more so for those with direct experience of AI, but provided certain safeguards were met. AI applications which automated data interpretation and synthesis were regarded more favourably by clinicians and consumers than those that directly influenced clinical decisions or potentially impacted clinician–patient relationships. Privacy breaches and personal liability for AI-related error worried clinicians, while loss of clinician oversight and inability to fully share in decision-making worried consumers. Both clinicians and consumers wanted AI-generated advice to be trustworthy, while industry groups emphasised AI benefits and wanted more data, funding and regulatory certainty. Discussion Certain expectations of AI applications were common to many stakeholder groups from which a set of dependencies can be defined. Conclusion Stakeholders differ in some but not all of their attitudes towards AI. 
Those developing and implementing applications should consider policies and processes that bridge attitudinal disconnects between different stakeholders.
Affiliation(s)
- Ian A Scott
- Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Woolloongabba, Queensland, Australia; School of Clinical Medicine, University of Queensland, Brisbane, Queensland, Australia
- Stacy M Carter
- Australian Centre for Health Engagement Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Enrico Coiera
- Centre for Clinical Informatics, Macquarie University, Sydney, New South Wales, Australia
32
Seibert K, Domhoff D, Bruch D, Schulte-Althoff M, Fürstenau D, Biessmann F, Wolf-Ostermann K. Application Scenarios for Artificial Intelligence in Nursing Care: Rapid Review. J Med Internet Res 2021; 23:e26522. PMID: 34847057; PMCID: PMC8669587; DOI: 10.2196/26522.
Abstract
Background Artificial intelligence (AI) holds the promise of supporting nurses' clinical decision-making in complex care situations or conducting tasks that are remote from direct patient interaction, such as documentation processes. There has been an increase in the research and development of AI applications for nursing care, but there is a persistent lack of an extensive overview covering the evidence base for promising application scenarios. Objective This study synthesizes the literature on application scenarios for AI in nursing care settings and highlights adjacent aspects in the ethical, legal, and social discourse surrounding the application of AI in nursing care. Methods Following a rapid review design, PubMed, CINAHL, Association for Computing Machinery Digital Library, Institute of Electrical and Electronics Engineers Xplore, Digital Bibliography & Library Project, and Association for Information Systems Library, as well as the libraries of leading AI conferences, were searched in June 2020. Publications of original quantitative and qualitative research, systematic reviews, discussion papers, and essays on the ethical, legal, and social implications published in English were included. Eligible studies were analyzed on the basis of predetermined selection criteria. Results The titles and abstracts of 7016 publications and 704 full texts were screened, and 292 publications were included. Hospitals were the most prominent study setting, followed by independent living at home; fewer application scenarios were identified for nursing homes or home care. Most studies used machine learning algorithms, whereas expert or hybrid systems appeared in less than every 10th publication. The main application contexts were image and signal processing for tracking, monitoring, or classifying activity and health, followed by care coordination and communication, as well as fall detection.
Few studies have reported the effects of AI applications on clinical or organizational outcomes, and data gathered outside laboratory conditions are particularly lacking. Beyond technological requirements, more overarching topics such as data privacy, safety, and technology acceptance were also reported. Ethical, legal, and social implications reflect the discourse on technology use in health care but have mostly not been discussed in meaningful detail. Conclusions The results highlight the potential for the application of AI systems in different nursing care settings. Considering the lack of findings on the effectiveness and application of AI systems in real-world scenarios, future research should reflect a more nursing care-specific perspective on objectives, outcomes, and benefits. Crucially, advancing the technological-societal discourse surrounding the ethical and legal implications of AI applications in nursing care is a necessary next step. Further, we outline the need for greater participation among all of the stakeholders involved.
Affiliation(s)
- Kathrin Seibert
- Institute of Public Health and Nursing Research, High Profile Area Health Sciences, University of Bremen, Bremen, Germany
- Dominik Domhoff
- Institute of Public Health and Nursing Research, High Profile Area Health Sciences, University of Bremen, Bremen, Germany
- Dominik Bruch
- Auf- und Umbruch im Gesundheitswesen UG, Bonn, Germany
- Matthias Schulte-Althoff
- School of Business and Economics, Department of Information Systems, Freie Universität Berlin, Einstein Center Digital Future, Berlin, Germany
- Daniel Fürstenau
- Department of Digitalization, Copenhagen Business School, Frederiksberg, Denmark; Institute of Medical Informatics, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Felix Biessmann
- Faculty VI - Informatics and Media, Beuth University of Applied Sciences, Einstein Center Digital Future, Berlin, Germany
- Karin Wolf-Ostermann
- Institute of Public Health and Nursing Research, High Profile Area Health Sciences, University of Bremen, Bremen, Germany
33
Vervoort D, Tam DY, Wijeysundera HC. Health Technology Assessment for Cardiovascular Digital Health Technologies and Artificial Intelligence: Why Is It Different? Can J Cardiol 2021; 38:259-266. PMID: 34461229; DOI: 10.1016/j.cjca.2021.08.015.
Abstract
Innovations in health care are growing exponentially, resulting in improved quality of and access to care, as well as rising societal costs of care and variable reimbursement. In recent years, digital health technologies and artificial intelligence have become of increasing interest in cardiovascular medicine owing to their unique ability to empower patients and to use increasing quantities of data for moving toward personalised and precision medicine. Health technology assessment agencies evaluate the money spent on a health care intervention or technology to attain a given clinical impact and make recommendations for reimbursement considerations. However, there is a scarcity of economic evaluations of cardiovascular digital health technologies and artificial intelligence. The current health technology assessment framework is not equipped to address the unique, dynamic, and unpredictable value considerations of these technologies, highlighting the need for a better approach to the health technology assessment process for digital health technologies and artificial intelligence. In this review, we compare digital health technologies and artificial intelligence with traditional health care technologies, review existing health technology assessment frameworks, and discuss challenges and opportunities related to health technology assessment of cardiovascular digital health technologies and artificial intelligence. Specifically, we argue that health technology assessments for digital health technologies and artificial intelligence applications must allow for a much shorter device life cycle, given the rapid and potentially continuously iterative nature of this technology, and thus an evidence base that may be less mature than that of traditional health technologies and interventions.
Affiliation(s)
- Dominique Vervoort
- Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Derrick Y Tam
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada; Schulich Heart Program, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
- Harindra C Wijeysundera
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada; Schulich Heart Program, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada.
| |
Collapse
|
34
|
Aggarwal R, Farag S, Martin G, Ashrafian H, Darzi A. Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey. J Med Internet Res 2021; 23:e26162. [PMID: 34236994 PMCID: PMC8430862 DOI: 10.2196/26162] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 04/04/2021] [Accepted: 07/05/2021] [Indexed: 12/25/2022] Open
Abstract
Background Considerable research is being conducted into how artificial intelligence (AI) can be effectively applied to health care. However, successful implementation of AI requires large amounts of health data for training and testing algorithms. As such, there is a need to understand the perspectives and viewpoints of patients regarding the use of their health data in AI research. Objective We surveyed a large sample of patients to identify current awareness of health data research and to obtain their views on sharing data for AI research purposes and on the use of AI technology on health care data. Methods A cross-sectional survey of patients was conducted at a large multisite teaching hospital in the United Kingdom. Data were collected on patient and public views about sharing health data for research and the use of AI on health data. Results A total of 408 participants completed the survey. Respondents had generally low levels of prior knowledge about AI. Most were comfortable sharing health data with the National Health Service (NHS) (318/408, 77.9%) or universities (268/408, 65.7%), but far fewer with commercial organizations such as technology companies (108/408, 26.4%). The majority endorsed AI research on health care data (357/408, 87.4%) and health care imaging (353/408, 86.4%) in a university setting, provided that concerns about privacy, reidentification of anonymized health care data, and consent processes were addressed. Conclusions There were significant variations in patient perceptions, levels of support, and understanding of health data research and AI. Greater public engagement and debate are necessary to ensure the acceptability of AI research and its successful integration into clinical practice in the future.
Collapse
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
| | - Soma Farag
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
| | - Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
| | - Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
| | - Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, United Kingdom
| |
Collapse
|
35
|
Chiang S, Picard RW, Chiong W, Moss R, Worrell GA, Rao VR, Goldenholz DM. Guidelines for Conducting Ethical Artificial Intelligence Research in Neurology: A Systematic Approach for Clinicians and Researchers. Neurology 2021; 97:632-640. [PMID: 34315785 PMCID: PMC8480407 DOI: 10.1212/wnl.0000000000012570] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Accepted: 07/08/2021] [Indexed: 11/15/2022] Open
Abstract
Pre-emptive recognition of the ethical implications of study design and algorithm choices in artificial intelligence (AI) research is an important but challenging process. AI applications have begun to transition from a promising future to clinical reality in neurology. Because the clinical management of neurologic disease often involves discrete, unpredictable, and highly consequential events linked to multimodal data streams over long timescales, forthcoming advances in AI have great potential to transform care for patients. However, critical ethical questions have been raised by the implementation of the first AI applications in clinical practice. Clearly, AI will have far-reaching potential to promote, but also to endanger, ethical clinical practice. This article employs an anticipatory ethics approach to scrutinize how researchers in neurology can methodically identify the ethical ramifications of design choices early in the research and development process, with the goal of pre-empting unintended consequences that may violate principles of ethical clinical care. First, we discuss a systematic framework by which researchers can identify the ethical ramifications of various study design and algorithm choices. Second, using epilepsy as a paradigmatic example, we discuss anticipatory clinical scenarios that illustrate unintended ethical consequences and evaluate the failure points in each scenario. Third, we provide practical recommendations for understanding and addressing ethical ramifications early in the methods development stage. Awareness of the ethical implications of study design and algorithm choices that may unintentionally enter AI is crucial to ensuring that the incorporation of AI into neurology care leads to patient benefit rather than harm.
Collapse
Affiliation(s)
- Sharon Chiang
- Department of Neurology and Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA
| | - Rosalind W Picard
- Empatica Inc., Boston, MA and The Media Lab, Massachusetts Institute of Technology, Cambridge, MA
| | - Winston Chiong
- Department of Neurology and Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA
| | | | | | - Vikram R Rao
- Department of Neurology and Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA
| | | |
Collapse
|
36
|
Ramessur R, Raja L, Kilduff CLS, Kang S, Li JPO, Thomas PBM, Sim DA. Impact and Challenges of Integrating Artificial Intelligence and Telemedicine into Clinical Ophthalmology. Asia Pac J Ophthalmol (Phila) 2021; 10:317-327. [PMID: 34383722 DOI: 10.1097/apo.0000000000000406] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
Aging populations and the worsening burden of chronic, treatable disease are increasingly creating a global shortfall in ophthalmic care provision. Remote and automated systems carry the promise of expanding the scale and potential of health care interventions and of reducing strain on health care services through safe, personalized, efficient, and cost-effective services. However, significant challenges remain. Forward planning in service design is paramount to safeguard patient safety, trust in digital services, and data privacy, and to address medico-legal implications and digital exclusion. We explore the impact and challenges facing patients and clinicians in integrating AI and telemedicine into ophthalmic care, and how these may influence its direction.
Collapse
Affiliation(s)
- Rishi Ramessur
- Royal Free Hospital, Royal Free London NHS Foundation Trust, London, United Kingdom
| | - Laxmi Raja
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Caroline L S Kilduff
- Central Middlesex Hospital, London North West University Healthcare NHS Trust, London, United Kingdom
| | - Swan Kang
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Ji-Peng Olivia Li
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Peter B M Thomas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Dawn A Sim
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| |
Collapse
|
37
|
Sounderajah V, Normahani P, Aggarwal R, Jayakumar S, Markar SR, Ashrafian H, Darzi A. Reporting Standards and Quality Assessment Tools in Artificial Intelligence Centered Healthcare Research. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_34-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
38
|
Madariaga A, Kasherman L, Karakasis K, Degendorfer P, Heesters AM, Xu W, Husain S, Oza AM. Optimizing clinical research procedures in public health emergencies. Med Res Rev 2020; 41:725-738. [PMID: 33174617 DOI: 10.1002/med.21749] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2020] [Revised: 10/14/2020] [Accepted: 10/22/2020] [Indexed: 01/30/2023]
Abstract
Public Health Emergencies of International Concern, such as the coronavirus disease 2019 pandemic, have a devastating impact at both the individual and societal levels, and there is an urgent need to learn, understand, and bridge the therapeutic gap at a time of extreme stress on patients, health care systems, and staff. Well-designed, controlled clinical trials play a crucial role in the discovery of novel diagnostic and management strategies; however, these catastrophic circumstances pose unique challenges to initiating research studies at institutional, national, and international levels, highlighting the importance of a coordinated, collaborative approach. This review discusses key elements to consider when developing clinical trials within a Public Health Emergency setting.
Collapse
Affiliation(s)
- Ainhoa Madariaga
- Division of Medical Oncology & Hematology, Princess Margaret Cancer Centre, University of Toronto, Toronto, Ontario, Canada
| | - Lawrence Kasherman
- Division of Medical Oncology & Hematology, Princess Margaret Cancer Centre, University of Toronto, Toronto, Ontario, Canada
| | - Katherine Karakasis
- Division of Medical Oncology & Hematology, Princess Margaret Cancer Centre, University of Toronto, Toronto, Ontario, Canada
| | - Pamela Degendorfer
- Division of Medical Oncology & Hematology, Princess Margaret Cancer Centre, University of Toronto, Toronto, Ontario, Canada
| | - Ann M Heesters
- Bioethics Program and The Institute for Education Research, University Health Network, University of Toronto, Toronto, Ontario, Canada
| | - Wei Xu
- Division of Biostatistics, Princess Margaret Cancer Centre, University of Toronto, Toronto, Ontario, Canada
| | - Shahid Husain
- Division of Infectious Disease, Department of Medicine, University Health Network, University of Toronto, Toronto, Ontario, Canada
| | - Amit M Oza
- Division of Medical Oncology & Hematology, Princess Margaret Cancer Centre, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
39
|
McCradden MD, Sarker T, Paprica PA. Conditionally positive: a qualitative study of public perceptions about using health data for artificial intelligence research. BMJ Open 2020; 10:e039798. [PMID: 33115901 PMCID: PMC7594363 DOI: 10.1136/bmjopen-2020-039798] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/26/2020] [Revised: 08/05/2020] [Accepted: 10/08/2020] [Indexed: 01/10/2023] Open
Abstract
OBJECTIVES Given widespread interest in applying artificial intelligence (AI) to health data to improve patient care and health system efficiency, there is a need to understand the perspectives of the general public regarding the use of health data in AI research. DESIGN A qualitative study involving six focus groups with members of the public. Participants discussed their views about AI in general, then were asked to share their thoughts about three realistic health AI research scenarios. Data were analysed using qualitative description thematic analysis. SETTINGS Two cities in Ontario, Canada: Sudbury (400 km north of Toronto) and Mississauga (part of the Greater Toronto Area). PARTICIPANTS Forty-one purposively sampled members of the public (21M:20F, 25-65 years, median age 40). RESULTS Participants had low levels of prior knowledge of AI and mixed, mostly negative, perceptions of AI in general. Most endorsed using data for health AI research when there is strong potential for public benefit, providing that concerns about privacy, commercial motives and other risks were addressed. Inductive thematic analysis identified AI-specific hopes (eg, potential for faster and more accurate analyses, ability to use more data), fears (eg, loss of human touch, skill depreciation from over-reliance on machines) and conditions (eg, human verification of computer-aided decisions, transparency). There were mixed views about whether data subject consent is required for health AI research, with most participants wanting to know if, how and by whom their data were used. Though it was not an objective of the study, realistic health AI scenarios were found to have an educational effect. CONCLUSIONS Notwithstanding concerns and limited knowledge about AI in general, most members of the general public in six focus groups in Ontario, Canada perceived benefits from health AI and conditionally supported the use of health data for AI research.
Collapse
Affiliation(s)
- Melissa D McCradden
- Department of Bioethics, Hospital for Sick Children, Toronto, Ontario, Canada
| | - Tasmie Sarker
- Health Team, Vector Institute, Toronto, Ontario, Canada
| | - P Alison Paprica
- Health Team, Vector Institute, Toronto, Ontario, Canada
- Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
40
|
Chen J, See KC. Artificial Intelligence for COVID-19: Rapid Review. J Med Internet Res 2020; 22:e21476. [PMID: 32946413 PMCID: PMC7595751 DOI: 10.2196/21476] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 07/25/2020] [Accepted: 09/15/2020] [Indexed: 12/18/2022] Open
Abstract
BACKGROUND COVID-19 was first reported in December 2019 and has since evolved into a pandemic. OBJECTIVE To address this global health crisis, artificial intelligence (AI) has been deployed at various levels of the health care system. However, AI has both potential benefits and limitations. We therefore conducted a review of AI applications for COVID-19. METHODS We performed an extensive search of the PubMed and EMBASE databases for COVID-19-related English-language studies published between December 1, 2019, and March 31, 2020. We supplemented the database search with reference list checks. A thematic analysis and narrative review of AI applications for COVID-19 were conducted. RESULTS In total, 11 papers were included for review. AI was applied to COVID-19 in four areas: diagnosis, public health, clinical decision making, and therapeutics. We identified several limitations, including insufficient data, omission of multimodal methods of AI-based assessment, delay in realization of benefits, poor internal/external validation, inability to be used by laypersons, inability to be used in resource-poor settings, presence of ethical pitfalls, and presence of legal barriers. AI could potentially be explored in four other areas: surveillance, combination with big data, operation of other core clinical services, and management of patients with COVID-19. CONCLUSIONS In view of the continuing increase in the number of cases, and given that multiple waves of infections may occur, there is a need for effective methods to help control the COVID-19 pandemic. Despite its shortcomings, AI holds the potential to greatly augment existing human efforts, which may otherwise be overwhelmed by high patient numbers.
Collapse
Affiliation(s)
- Jiayang Chen
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Kay Choong See
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Division of Respiratory & Critical Care Medicine, Department of Medicine, National University Hospital, Singapore, Singapore
| |
Collapse
|