1. Franco D’Souza R, Mathew M, Mishra V, Surapaneni KM. Twelve tips for addressing ethical concerns in the implementation of artificial intelligence in medical education. Med Educ Online 2024; 29:2330250. PMID: 38566608; PMCID: PMC10993743; DOI: 10.1080/10872981.2024.2330250.
Abstract
Artificial Intelligence (AI) holds immense potential for revolutionizing medical education and healthcare. Despite its proven benefits, the full integration of AI faces hurdles, with ethical concerns standing out as a key obstacle. Thus, educators should be equipped to address the ethical issues that arise and ensure the seamless integration and sustainability of AI-based interventions. This article presents twelve essential tips for addressing the major ethical concerns in the use of AI in medical education. These include emphasizing transparency, addressing bias, validating content, prioritizing data protection, obtaining informed consent, fostering collaboration, training educators, empowering students, regularly monitoring, establishing accountability, adhering to standard guidelines, and forming an ethics committee to address the issues that arise in the implementation of AI. By adhering to these tips, medical educators and other stakeholders can foster a responsible and ethical integration of AI in medical education, ensuring its long-term success and positive impact.
Affiliation(s)
- Russell Franco D’Souza
- Department of Education, UNESCO Chair in Bioethics, Melbourne, Australia
- Department of Organisational Psychological Medicine, International Institute of Organisational Psychological Medicine, Melbourne, Australia
- Mary Mathew
- Department of Pathology, Kasturba Medical College, Manipal, Manipal Academy of Higher Education (MAHE), Manipal, India
- Vedprakash Mishra
- School of Higher Education and Research, Datta Meghe Institute of Higher Education and Research (Deemed to be University), Nagpur, India
- Krishna Mohan Surapaneni
- Department of Biochemistry, Panimalar Medical College Hospital & Research Institute, Chennai, India
- Department of Medical Education, Panimalar Medical College Hospital & Research Institute, Chennai, India
2. Chen S, Lobo BC. Regulatory and Implementation Considerations for Artificial Intelligence. Otolaryngol Clin North Am 2024; 57:871-886. PMID: 38839554; DOI: 10.1016/j.otc.2024.04.007.
Abstract
Successful artificial intelligence (AI) implementation is predicated on the trust of clinicians and patients, and is achieved through a culture of responsible use, focusing on regulations, standards, and education. Otolaryngologists can overcome barriers in AI implementation by promoting data standardization through professional societies, engaging in institutional efforts to integrate AI, and developing otolaryngology-specific AI education for both trainees and practitioners.
Affiliation(s)
- Si Chen
- Department of Otolaryngology - Head and Neck Surgery, University of Florida College of Medicine, 1345 Center Drive, PO Box 100264, Gainesville, FL 32610, USA
- Brian C Lobo
- Department of Otolaryngology - Head and Neck Surgery, University of Florida College of Medicine, 1345 Center Drive, PO Box 100264, Gainesville, FL 32610, USA
3. Mooghali M, Stroud AM, Yoo DW, Barry BA, Grimshaw AA, Ross JS, Zhu X, Miller JE. Trustworthy and ethical AI-enabled cardiovascular care: a rapid review. BMC Med Inform Decis Mak 2024; 24:247. PMID: 39232725; DOI: 10.1186/s12911-024-02653-6.
Abstract
BACKGROUND Artificial intelligence (AI) is increasingly used for prevention, diagnosis, monitoring, and treatment of cardiovascular diseases. Despite the potential for AI to improve care, ethical concerns and mistrust in AI-enabled healthcare exist among the public and medical community. Given the rapid and transformative recent growth of AI in cardiovascular care, to inform practice guidelines and regulatory policies that facilitate ethical and trustworthy use of AI in medicine, we conducted a literature review to identify key ethical and trust barriers and facilitators from patients' and healthcare providers' perspectives when using AI in cardiovascular care. METHODS In this rapid literature review, we searched six bibliographic databases to identify publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care from patients', caregivers', or healthcare providers' perspectives. The search was completed on May 24, 2022 and was not limited by date or study design. RESULTS After reviewing 7,925 papers from six databases and 3,603 papers identified through citation chasing, 145 articles were included. Key ethical concerns included privacy, security, or confidentiality issues (n = 59, 40.7%); risk of healthcare inequity or disparity (n = 36, 24.8%); risk of patient harm (n = 24, 16.6%); accountability and responsibility concerns (n = 19, 13.1%); problematic informed consent and potential loss of patient autonomy (n = 17, 11.7%); and issues related to data ownership (n = 11, 7.6%). 
Major trust barriers included data privacy and security concerns, potential risk of patient harm, perceived lack of transparency about AI-enabled medical devices, concerns about AI replacing human aspects of care, concerns about prioritizing profits over patients' interests, and lack of robust evidence related to the accuracy and limitations of AI-based medical devices. Ethical and trust facilitators included ensuring data privacy and data validation, conducting clinical trials in diverse cohorts, providing appropriate training and resources to patients and healthcare providers and improving their engagement in different phases of AI implementation, and establishing further regulatory oversights. CONCLUSION This review revealed key ethical concerns and barriers and facilitators of trust in AI-enabled medical devices from patients' and healthcare providers' perspectives. Successful integration of AI into cardiovascular care necessitates implementation of mitigation strategies. These strategies should focus on enhanced regulatory oversight on the use of patient data and promoting transparency around the use of AI in patient care.
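As a quick consistency check (ours, not part of the original review), the percentages reported above can be reproduced from the stated denominator of 145 included articles; the category labels below are shortened paraphrases of the review's wording:

```python
# Verify that each reported percentage equals n / 145, rounded to one decimal.
counts = {
    "privacy/security/confidentiality": (59, 40.7),
    "healthcare inequity or disparity": (36, 24.8),
    "risk of patient harm": (24, 16.6),
    "accountability and responsibility": (19, 13.1),
    "informed consent / patient autonomy": (17, 11.7),
    "data ownership": (11, 7.6),
}
TOTAL = 145  # articles included in the review

for concern, (n, reported_pct) in counts.items():
    computed = round(100 * n / TOTAL, 1)
    assert computed == reported_pct, (concern, computed, reported_pct)

print("all reported percentages are consistent with N = 145")
```

All six figures check out, so the review's counts and percentages are internally consistent.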
Affiliation(s)
- Maryam Mooghali
- Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- Yale Center for Outcomes Research and Evaluation (CORE), 195 Church Street, New Haven, CT, 06510, USA
- Austin M Stroud
- Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN, USA
- Dong Whi Yoo
- School of Information, Kent State University, Kent, OH, USA
- Barbara A Barry
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Division of Health Care Delivery Research, Mayo Clinic, Rochester, MN, USA
- Alyssa A Grimshaw
- Harvey Cushing/John Hay Whitney Medical Library, Yale University, New Haven, CT, USA
- Joseph S Ross
- Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- Department of Health Policy and Management, Yale School of Public Health, New Haven, CT, USA
- Xuan Zhu
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Jennifer E Miller
- Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
4. Arnaout A, Gill P, Virani A, Flatt A, Prodan-Balla N, Byres D, Stowe M, Saremi A, Coss M, Tatto M, Tuason M, Malovec S, Virani S. Shaping the future of healthcare in British Columbia: Establishing provincial clinical governance for responsible deployment of artificial intelligence tools. Healthc Manage Forum 2024; 37:320-328. PMID: 39030752; DOI: 10.1177/08404704241264819.
Abstract
As healthcare embraces the transformative potential of Artificial Intelligence (AI), it is imperative to safeguard patient and provider safety, equity, and trust in the healthcare system. This article outlines the approach taken by the British Columbia (BC) Provincial Health Services Authority (PHSA) to establish clinical governance for the responsible deployment of AI tools in healthcare. Leveraging its province-wide mandate and expertise, PHSA establishes the infrastructure and processes to proactively and systematically intake, assess, prioritize, and evaluate AI tools. PHSA proposes a coordinated approach in AI tool deployment in collaboration with regional health authorities to prevent duplication of efforts and ensure equitable access to existing and emerging AI tools across the province of BC, incorporating principles of anti-Indigenous racism, cultural safety, and humility. The proposed governance structure underscores the identification of clinical needs, proactive ethics review, rigorous risk assessment, data validation, transparent communication, provider training, and ongoing evaluation to ensure success.
Affiliation(s)
- Angel Arnaout
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- Prabjot Gill
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- Alice Virani
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- Alexandra Flatt
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- David Byres
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- Megan Stowe
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- Alireza Saremi
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- Michael Coss
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- Michael Tatto
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- May Tuason
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- Shannon Malovec
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
- Sean Virani
- Provincial Health Services Authority, Vancouver, British Columbia, Canada
5. Pool J, Indulska M, Sadiq S. Large language models and generative AI in telehealth: a responsible use lens. J Am Med Inform Assoc 2024; 31:2125-2136. PMID: 38441296; PMCID: PMC11339524; DOI: 10.1093/jamia/ocae035.
Abstract
OBJECTIVE This scoping review aims to assess the current research landscape of the application and use of large language models (LLMs) and generative Artificial Intelligence (AI), through tools such as ChatGPT in telehealth. Additionally, the review seeks to identify key areas for future research, with a particular focus on AI ethics considerations for responsible use and ensuring trustworthy AI. MATERIALS AND METHODS Following the scoping review methodological framework, a search strategy was conducted across 6 databases. To structure our review, we employed AI ethics guidelines and principles, constructing a concept matrix for investigating the responsible use of AI in telehealth. Using the concept matrix in our review enabled the identification of gaps in the literature and informed future research directions. RESULTS Twenty studies were included in the review. Among the included studies, 5 were empirical, and 15 were reviews and perspectives focusing on different telehealth applications and healthcare contexts. Benefit and reliability concepts were frequently discussed in these studies. Privacy, security, and accountability were peripheral themes, with transparency, explainability, human agency, and contestability lacking conceptual or empirical exploration. CONCLUSION The findings emphasized the potential of LLMs, especially ChatGPT, in telehealth. They provide insights into understanding the use of LLMs, enhancing telehealth services, and taking ethical considerations into account. By proposing three future research directions with a focus on responsible use, this review further contributes to the advancement of this emerging phenomenon of healthcare AI.
Affiliation(s)
- Javad Pool
- ARC Industrial Transformation Training Centre for Information Resilience (CIRES), The University of Queensland, Brisbane 4072, Australia
- School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane 4072, Australia
- Marta Indulska
- ARC Industrial Transformation Training Centre for Information Resilience (CIRES), The University of Queensland, Brisbane 4072, Australia
- Business School, The University of Queensland, Brisbane 4072, Australia
- Shazia Sadiq
- ARC Industrial Transformation Training Centre for Information Resilience (CIRES), The University of Queensland, Brisbane 4072, Australia
- School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane 4072, Australia
6. Sriharan A, Sekercioglu N, Mitchell C, Senkaiahliyan S, Hertelendy A, Porter T, Banaszak-Holl J. Leadership for AI Transformation in Health Care Organization: Scoping Review. J Med Internet Res 2024; 26:e54556. PMID: 39009038; PMCID: PMC11358667; DOI: 10.2196/54556.
Abstract
BACKGROUND The leaders of health care organizations are grappling with rising expenses and surging demands for health services. In response, they are increasingly embracing artificial intelligence (AI) technologies to improve patient care delivery, alleviate operational burdens, and efficiently improve health care safety and quality. OBJECTIVE In this paper, we map the current literature and synthesize insights on the role of leadership in driving AI transformation within health care organizations. METHODS We conducted a comprehensive search across several databases, including MEDLINE (via Ovid), PsycINFO (via Ovid), CINAHL (via EBSCO), Business Source Premier (via EBSCO), and Canadian Business & Current Affairs (via ProQuest), spanning articles published from 2015 to June 2023 discussing AI transformation within the health care sector. Specifically, we focused on empirical studies with a particular emphasis on leadership. We used an inductive, thematic analysis approach to qualitatively map the evidence. The findings were reported in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analysis extension for Scoping Reviews) guidelines. RESULTS A comprehensive review of 2813 unique abstracts led to the retrieval of 97 full-text articles, with 22 included for detailed assessment. Our literature mapping reveals that successful AI integration within healthcare organizations requires leadership engagement across technological, strategic, operational, and organizational domains. Leaders must demonstrate a blend of technical expertise, adaptive strategies, and strong interpersonal skills to navigate the dynamic healthcare landscape shaped by complex regulatory, technological, and organizational factors. CONCLUSIONS Leading AI transformation in healthcare requires a multidimensional approach, with leadership across technological, strategic, operational, and organizational domains.
Organizations should implement a comprehensive leadership development strategy, including targeted training and cross-functional collaboration, to equip leaders with the skills needed for AI integration. Additionally, when upskilling or recruiting AI talent, priority should be given to individuals with a strong mix of technical expertise, adaptive capacity, and interpersonal acumen, enabling them to navigate the unique complexities of the healthcare environment.
Affiliation(s)
- Abi Sriharan
- Krembil Centre for Health Management and Leadership, Schulich School of Business, York University, Toronto, ON, Canada
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Nigar Sekercioglu
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Cheryl Mitchell
- Gustavson School of Business, University of Victoria, Victoria, BC, Canada
- Senthujan Senkaiahliyan
- Institute for Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Attila Hertelendy
- College of Business, Florida International University, Miami, FL, United States
- Tracy Porter
- Department of Management, Cleveland State University, Cleveland, OH, United States
- Jane Banaszak-Holl
- Department of Health Services Administration, School of Health Professions, University of Alabama at Birmingham, Birmingham, AL, United States
7. Ayana G, Dese K, Daba Nemomssa H, Habtamu B, Mellado B, Badu K, Yamba E, Faye SL, Ondua M, Nsagha D, Nkweteyim D, Kong JD. Decolonizing global AI governance: assessment of the state of decolonized AI governance in Sub-Saharan Africa. R Soc Open Sci 2024; 11:231994. PMID: 39113766; PMCID: PMC11303018; DOI: 10.1098/rsos.231994.
Abstract
Global artificial intelligence (AI) governance must prioritize equity, embrace a decolonial mindset, and provide the Global South countries the authority to spearhead solution creation. Decolonization is crucial for dismantling Western-centric cognitive frameworks and mitigating biases. Integrating a decolonial approach to AI governance involves recognizing persistent colonial repercussions, leading to biases in AI solutions and disparities in AI access based on gender, race, geography, income and societal factors. This paradigm shift necessitates deliberate efforts to deconstruct imperial structures governing knowledge production, perpetuating global unequal resource access and biases. This research evaluates Sub-Saharan African progress in AI governance decolonization, focusing on indicators like AI governance institutions, national strategies, sovereignty prioritization, data protection regulations, and adherence to local data usage requirements. Results show limited progress, with only Rwanda notably responsive to decolonization among the ten countries evaluated; 80% are 'decolonization-aware', and one is 'decolonization-blind'. The paper provides a detailed analysis of each nation, offering recommendations for fostering decolonization, including stakeholder involvement, addressing inequalities, promoting ethical AI, supporting local innovation, building regional partnerships, capacity building, public awareness, and inclusive governance. This paper contributes to elucidating the challenges and opportunities associated with decolonization in SSA countries, thereby enriching the ongoing discourse on global AI governance.
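A trivial arithmetic check (ours, not the authors') that the stated breakdown of the ten evaluated countries is internally consistent:

```python
# The review evaluated 10 Sub-Saharan African countries: one notably
# responsive (Rwanda), "80%" decolonization-aware, and one decolonization-blind.
total_countries = 10
responsive = 1
aware = round(0.80 * total_countries)  # 80% of 10 -> 8 countries
blind = 1

# The three categories should account for every evaluated country.
assert responsive + aware + blind == total_countries
print("breakdown:", responsive, "responsive,", aware, "aware,", blind, "blind")
```

So the 80% figure corresponds to 8 of the 10 countries, with the remaining two being Rwanda and the one 'decolonization-blind' nation.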
Affiliation(s)
- Gelan Ayana
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Hundessa Daba Nemomssa
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Bontu Habtamu
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Bruce Mellado
- The University of the Witwatersrand, Private Bag 3, Johannesburg, Wits 2050, South Africa
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Kingsley Badu
- Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Edmund Yamba
- Kwame Nkrumah University of Science and Technology (KNUST), Kumasi, Ghana
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Sylvain Landry Faye
- Cheikh Anta Diop University, Avenue Cheikh Anta Diop, Dakar, Senegal
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Moise Ondua
- The University of Ngaoundere, PO Box 454, Ngaoundere, Adamawa Province, Cameroon
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Dickson Nsagha
- The University of Buea, PO Box 63, Buea, South West Province, Cameroon
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Denis Nkweteyim
- The University of Buea, PO Box 63, Buea, South West Province, Cameroon
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
- Jude Dzevela Kong
- Artificial Intelligence & Mathematical Modeling Lab (AIMM Lab), Dalla Lana School of Public Health, University of Toronto, 155 College St Room 500, Toronto, ON M5T 3M7, Canada
- Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP)
- Africa-Canada Artificial Intelligence & Data Innovation Consortium (ACADIC)
8. Lee W, Kim T, Kim H, Kim Y. Controlled Migration of Lithium Cations by Diamine Bridges in Water-Processable Polymer-Based Solid-State Electrolyte Memory Layers for Organic Synaptic Transistors. Adv Mater 2024:e2403645. PMID: 39011779; DOI: 10.1002/adma.202403645.
Abstract
Synaptic transistors require sufficient retention (memory) performances of current signals to exactly mimic biological synapses. Ion migration has been proposed to achieve high retention characteristics but less attention has been paid to polymer-based solid-state electrolytes (SSEs) for organic synaptic transistors (OSTRs). Here, OSTRs with water-processable polymer-based SSEs, featuring ion migration-controllable molecular bridges, which are prepared by reactions of poly(4-styrenesulfonic acid) (PSSA), diethylenetriamine (DETA), and lithium hydroxide (LiOH) are demonstrated. The ion conductivity of PSSA:LiOH:DETA (1:0.4:X, PLiD) films is remarkably changed by the molar ratio (X) of DETA, which is attributed to the extended distances between the PSSA chains by the DETA bridges. The devices with the PLiD layers deliver noticeably changed hysteresis reaching an optimum at X = 0.2, leading to the longest retention of current signals upon single/double pulses. The long-term potentiation test confirms that the present OSTRs can gradually build up the postsynaptic current by gate pulses of -2 V, while the long-term depression can be adjusted by varying the depression gate pulses (≈0.2-1.2 V). The artificial neural network simulations disclose that the present OSTRs with the ion migration-controlled PLiD layers can perform synaptic processes with an accuracy of ≈96%.
Affiliation(s)
- Woongki Lee
- Organic Nanoelectronics Laboratory and KNU Institute for Nanophotonics Applications (KINPA), Department of Chemical Engineering, Kyungpook National University, Daegu, 41566, Republic of Korea
- Department of Chemistry and Centre for Processable Electronics, Imperial College London, London, W12 0BZ, UK
- Taehoon Kim
- Organic Nanoelectronics Laboratory and KNU Institute for Nanophotonics Applications (KINPA), Department of Chemical Engineering, Kyungpook National University, Daegu, 41566, Republic of Korea
- Hwajeong Kim
- Organic Nanoelectronics Laboratory and KNU Institute for Nanophotonics Applications (KINPA), Department of Chemical Engineering, Kyungpook National University, Daegu, 41566, Republic of Korea
- Priority Research Center, Research Institute of Environmental Science & Technology, Kyungpook National University, Daegu, 41566, Republic of Korea
- Youngkyoo Kim
- Organic Nanoelectronics Laboratory and KNU Institute for Nanophotonics Applications (KINPA), Department of Chemical Engineering, Kyungpook National University, Daegu, 41566, Republic of Korea
9. Ooi K. Using Artificial Intelligence in Patient Care-Some Considerations for Doctors and Medical Regulators. Asian Bioeth Rev 2024; 16:483-499. PMID: 39022377; PMCID: PMC11250739; DOI: 10.1007/s41649-024-00291-8.
Abstract
This paper discusses the key role medical regulators have in setting standards for doctors who use artificial intelligence (AI) in patient care. Given their mandate to protect public health and safety, it is incumbent on regulators to guide the profession on emerging and vexed areas of practice such as AI. However, formulating effective and robust guidance in a novel field is challenging particularly as regulators are navigating unfamiliar territory. As such, regulators themselves will need to understand what AI is and to grapple with its ethical and practical challenges when doctors use AI in their care of patients. This paper will also argue that effective regulation of AI extends beyond devising guidance for the profession. It includes keeping abreast of developments in AI-based technology and considering the implications for regulation and the practice of medicine. On that note, medical regulators should encourage the profession to evaluate how AI may exacerbate existing issues in medicine and create unintended consequences so that doctors (and patients) are realistic about AI's potential and pitfalls when it is used in health care delivery.
Affiliation(s)
- Kanny Ooi
- Medical Council of New Zealand, Wellington, New Zealand
10. Bouhouita-Guermech S, Haidar H. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context. Asian Bioeth Rev 2024; 16:315-344. PMID: 39022380; PMCID: PMC11250714; DOI: 10.1007/s41649-024-00292-7.
Abstract
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists between January 2017 and January 2022 for terms related to "responsibility" and "AI in healthcare", and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion to contribute to developing frameworks regarding the type of responsibility (ethical/moral/professional, legal, and causal) of various stakeholders involved in the AI lifecycle.
Affiliation(s)
- Hazar Haidar
- Ethics Programs, Department of Letters and Humanities, University of Quebec at Rimouski, Rimouski, Québec, Canada
11. Wilkinson LS, Dunbar JK, Lip G. Clinical Integration of Artificial Intelligence for Breast Imaging. Radiol Clin North Am 2024; 62:703-716. PMID: 38777544; DOI: 10.1016/j.rcl.2023.12.006.
Abstract
This article describes an approach to planning and implementing artificial intelligence products in a breast screening service. It highlights the importance of an in-depth understanding of the end-to-end workflow and effective project planning by a multidisciplinary team. It discusses the need for monitoring to ensure that performance is stable and meets expectations, as well as the potential for inadvertently generating inequality. New cross-discipline roles and expertise will be needed to enhance service delivery.
Affiliation(s)
- Louise S Wilkinson
- Oxford Breast Imaging Centre, Churchill Hospital, Old Road, Headington, Oxford OX3 7LE, UK
- J Kevin Dunbar
- Regional Head of Screening Quality Assurance Service (SQAS) - South, NHS England, England, UK
- Gerald Lip
- North East Scotland Breast Screening Service, Aberdeen Royal Infirmary, Foresterhill Road, Aberdeen AB25 2XF, UK
12
Graham Y, Spencer AE, Velez GE, Herbell K. Engaging Youth Voice and Family Partnerships to Improve Children's Mental Health Outcomes. Child Adolesc Psychiatr Clin N Am 2024; 33:343-354. [PMID: 38823808 DOI: 10.1016/j.chc.2024.02.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 06/03/2024]
Abstract
Promoting active participation of families and youth in mental health systems of care is the cornerstone of creating a more inclusive, effective, and responsive care network. This article focuses on the inclusion of parent and youth voice in transforming our mental health care system to promote increased engagement at all levels of service delivery. Youth and parent peer support delivery models, digital innovation, and technology not only empower the individuals involved, but also have the potential to enhance the overall efficacy of the mental health care system.
Affiliation(s)
- Yolanda Graham
- Morehouse School of Medicine, Devereux Advanced Behavioral Health, 444 Devereux Drive, Villanova, PA 19085, USA
- Andrea E Spencer
- Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, 225 East Chicago Avenue, Chicago, IL 60611, USA
- German E Velez
- New York-Presbyterian Hospital, Weill Cornell Medical College/Columbia University College of Physicians and Surgeons, 525 E. 68th Street, Box 140, New York, NY 10065, USA
- Kayla Herbell
- Martha S. Pitzer Center for Women, Children, and Youth, The Ohio State University, 1577 Neil Avenue, Columbus, OH 43210, USA
13
Wieben AM, Alreshidi BG, Douthit BJ, Sileo M, Vyas P, Steege L, Gilmore-Bykovskyi A. Nurses' perceptions of the design, implementation, and adoption of machine learning clinical decision support: A descriptive qualitative study. J Nurs Scholarsh 2024. [PMID: 38898636 DOI: 10.1111/jnu.13001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2024] [Revised: 05/06/2024] [Accepted: 06/07/2024] [Indexed: 06/21/2024]
Abstract
INTRODUCTION The purpose of this study was to explore nurses' perspectives on machine learning clinical decision support (ML CDS) design, development, implementation, and adoption. DESIGN Qualitative descriptive study. METHODS Nurses (n = 17) participated in semi-structured interviews. Data were transcribed, coded, and analyzed using thematic analysis methods as described by Braun and Clarke. RESULTS Four major themes and 14 sub-themes highlight nurses' perspectives on autonomy in decision-making, the influence of prior experience in shaping their preferences for use of novel CDS tools, the need for clarity on why ML CDS is useful in improving practice and outcomes, and their desire to have nursing integrated in the design and implementation of these tools. CONCLUSION This study provided insights into nurse perceptions regarding the utility and usability of ML CDS; the influence of previous experiences with technology and CDS; change management strategies needed at the time of ML CDS implementation; the importance of nurse-perceived engagement in the development process; nurse information needs at the time of ML CDS deployment; and the perceived impact of ML CDS on nurses' decision-making autonomy. CLINICAL RELEVANCE This study contributes to the body of knowledge about the use of AI and machine learning (ML) in nursing practice. Through insights drawn from nurses' perspectives, these findings can inform the successful design and adoption of ML CDS.
Affiliation(s)
- Ann M Wieben
- University of Wisconsin-Madison School of Nursing, Madison, Wisconsin, USA
- Bader G Alreshidi
- Department of Medical Surgical Nursing, University of Hail College of Nursing, Hail, Saudi Arabia
- Brian J Douthit
- United States Department of Veterans Affairs, Department of Biomedical Informatics, Vanderbilt University, Nashville, Tennessee, USA
- Marisa Sileo
- Boston Children's Hospital, Boston, Massachusetts, USA
- Linsey Steege
- University of Wisconsin-Madison School of Nursing, Madison, Wisconsin, USA
- Andrea Gilmore-Bykovskyi
- BerbeeWalsh Department of Emergency Medicine, University of Wisconsin-Madison School of Medicine & Public Health, Madison, Wisconsin, USA
14
Yang DP, Tang XG, Sun QJ, Chen JY, Jiang YP, Zhang D, Dong HF. Emerging ferroelectric materials ScAlN: applications and prospects in memristors. MATERIALS HORIZONS 2024; 11:2802-2819. [PMID: 38525789 DOI: 10.1039/d3mh01942j] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/26/2024]
Abstract
Research has found that doping AlN with rare earth elements produces a large number of electrons and holes at the material's surface, giving the material the characteristics of spontaneous polarization. This new type of ferroelectric material represents a breakthrough in the application of nitride materials in the field of integrated devices. In this paper, the application prospects and development trends of the ferroelectric material ScAlN in memristors are reviewed. First, the various fabrication processes and structures of current ScAlN thin films are described in detail to explore their applications in synaptic devices. Second, a series of electrical properties of ScAlN films, such as the current switching ratio and long-term cycling endurance, were tested to determine whether their electrical properties could meet the basic requirements for memristor device materials. Finally, current research on ScAlN thin films in synaptic simulation is summarized, and the working state of ScAlN thin films as synaptic devices is examined. The results show that the ScAlN ferroelectric material has high residual polarization, no wake-up effect, excellent stability, and obvious STDP behavior, indicating that the modified material has broad application prospects in the research and development of memristors.
Affiliation(s)
- Dong-Ping Yang
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China
- Xin-Gui Tang
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China
- Qi-Jun Sun
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China
- Jia-Ying Chen
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China
- Yan-Ping Jiang
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China
- Dan Zhang
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China
- Hua-Feng Dong
- School of Physics and Optoelectric Engineering, Guangdong University of Technology, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China
15
Mahesh N, Devishamani CS, Raghu K, Mahalingam M, Bysani P, Chakravarthy AV, Raman R. Advancing healthcare: the role and impact of AI and foundation models. Am J Transl Res 2024; 16:2166-2179. [PMID: 39006256 PMCID: PMC11236664 DOI: 10.62347/wqwv9220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2024] [Accepted: 05/06/2024] [Indexed: 07/16/2024]
Abstract
BACKGROUND The integration of artificial intelligence (AI) into the healthcare domain is a monumental shift with profound implications for diagnostics, medical interventions, and the overall structure of healthcare systems. PURPOSE This study explores the transformative journey of foundation AI models in healthcare, shedding light on the challenges, ethical considerations, and vast potential they hold for improving patient outcomes and system efficiency. Notably, in this investigation we observe a relatively slow adoption of AI within the public sector of healthcare. The evolution of AI in healthcare is unparalleled, especially its prowess in revolutionizing diagnostic processes. RESULTS This research showcases how these foundation models can unravel hidden patterns within complex medical datasets. The impact of AI reverberates through medical interventions, encompassing pathology, imaging, genomics, and personalized healthcare, positioning AI as a cornerstone in the quest for precision medicine. The paper delves into the applications of generative AI models in critical facets of healthcare, including decision support, medical imaging, and the prediction of protein structures. The study meticulously evaluates various AI approaches, such as transfer learning, recurrent neural networks (RNNs), and autoencoders, and their roles in the healthcare landscape. A pioneering concept introduced in this exploration is that of General Medical AI (GMAI), advocating for the development of reusable and flexible AI models. CONCLUSION The review discusses how AI can revolutionize healthcare, stressing the significance of transparency, fairness, and accountability in AI applications with regard to patient data privacy and biases. By tackling these issues and suggesting a governance structure, the article adds to the conversation about AI integration in healthcare environments.
Affiliation(s)
- Nandhini Mahesh
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
- Chitralekha S Devishamani
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
- Keerthana Raghu
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
- Maanasi Mahalingam
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
- Pragathi Bysani
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
- Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Medical Research Foundation Chennai, Tamil Nadu, India
16
Movahed M, Bilderback S. Evaluating the readiness of healthcare administration students to utilize AI for sustainable leadership: a survey study. J Health Organ Manag 2024; ahead-of-print. [PMID: 38858220 DOI: 10.1108/jhom-12-2023-0385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/12/2024]
Abstract
PURPOSE This paper explores how healthcare administration students perceive the integration of Artificial Intelligence (AI) in healthcare leadership, mainly focusing on the sustainability aspects involved. It aims to identify gaps in current educational curricula and suggests enhancements to better prepare future healthcare professionals for the evolving demands of AI-driven healthcare environments. DESIGN/METHODOLOGY/APPROACH This study utilized a cross-sectional survey design to understand healthcare administration students' perceptions regarding integrating AI in healthcare leadership. An online questionnaire, developed from an extensive literature review covering fundamental AI knowledge and its role in sustainable leadership, was distributed to students majoring and minoring in healthcare administration. This methodological approach garnered participation from 62 students, providing insights and perspectives crucial for the study's objectives. FINDINGS The research revealed that while a significant majority of healthcare administration students (70%) recognize the potential of AI in fostering sustainable leadership in healthcare, only 30% feel adequately prepared to work in AI-integrated environments. Additionally, students were interested in learning more about AI applications in healthcare and the role of AI in sustainable leadership, underscoring the need for comprehensive AI-focused education in their curriculum. RESEARCH LIMITATIONS/IMPLICATIONS The research is limited by its focus on a single academic institution, which may not fully represent the diversity of perspectives in healthcare administration. PRACTICAL IMPLICATIONS This study highlights the need for healthcare administration curricula to incorporate AI education, aligning theoretical knowledge with practical applications, to effectively prepare future professionals for the evolving demands of AI-integrated healthcare environments. ORIGINALITY/VALUE This research paper presents insights into healthcare administration students' readiness and perspectives toward AI integration in healthcare leadership, filling a critical gap in understanding the educational needs in the evolving landscape of AI-driven healthcare.
Affiliation(s)
- Mohammad Movahed
- Department of Economics, Finance, and Healthcare Administration, Valdosta State University, Valdosta, Georgia, USA
17
Kayarian F, Patel D, O'Brien JR, Schraft EK, Gottlieb M. Artificial intelligence and point-of-care ultrasound: Benefits, limitations, and implications for the future. Am J Emerg Med 2024; 80:119-122. [PMID: 38555712 DOI: 10.1016/j.ajem.2024.03.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2024] [Accepted: 03/23/2024] [Indexed: 04/02/2024] Open
Abstract
The utilization of artificial intelligence (AI) in medical imaging has become a rapidly growing field as a means to address contemporary demands and challenges of healthcare. Among the emerging applications of AI is point-of-care ultrasound (POCUS), in which the combination of these two technologies has garnered recent attention in research and clinical settings. In this Controversies paper, we will discuss the benefits, limitations, and future considerations of AI in POCUS for patients, clinicians, and healthcare systems.
Affiliation(s)
- Daven Patel
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- James R O'Brien
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Evelyn K Schraft
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Michael Gottlieb
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
18
Ong JCL, Chang SYH, William W, Butte AJ, Shah NH, Chew LST, Liu N, Doshi-Velez F, Lu W, Savulescu J, Ting DSW. Ethical and regulatory challenges of large language models in medicine. Lancet Digit Health 2024; 6:e428-e432. [PMID: 38658283 DOI: 10.1016/s2589-7500(24)00061-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2023] [Revised: 03/08/2024] [Accepted: 03/12/2024] [Indexed: 04/26/2024]
Abstract
With the rapid growth of interest in and use of large language models (LLMs) across various industries, we are facing some crucial and profound ethical concerns, especially in the medical field. The unique technical architecture and purported emergent abilities of LLMs differentiate them substantially from other artificial intelligence (AI) models and natural language processing techniques used, necessitating a nuanced understanding of LLM ethics. In this Viewpoint, we highlight ethical concerns stemming from the perspectives of users, developers, and regulators, notably focusing on data privacy and rights of use, data provenance, intellectual property contamination, and broad applications and plasticity of LLMs. A comprehensive framework and mitigating strategies will be imperative for the responsible integration of LLMs into medical practice, ensuring alignment with ethical principles and safeguarding against potential societal risks.
Affiliation(s)
- Jasmine Chiat Ling Ong
- Division of Pharmacy, Singapore General Hospital, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- Shelley Yin-Hsi Chang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan; College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Wasswa William
- Department of Biomedical Sciences and Engineering, Mbarara University of Science and Technology, Mbarara, Uganda
- Atul J Butte
- Bakar Computational Health Sciences Institute, and Department of Pediatrics, University of California, San Francisco, San Francisco, CA, USA; Center for Data-Driven Insights and Innovation, University of California Health, Oakland, CA, USA
- Nigam H Shah
- Stanford Health Care, Palo Alto, CA, USA; Department of Medicine, and Clinical Excellence Research Center, School of Medicine, Stanford University, Stanford, CA, USA
- Lita Sui Tjien Chew
- Department of Pharmacy, National University of Singapore, Singapore; Singapore Health Services, Pharmacy and Therapeutics Council Office, Singapore; Department of Pharmacy, National Cancer Centre Singapore, Singapore
- Nan Liu
- Duke-NUS Medical School, National University of Singapore, Singapore
- Finale Doshi-Velez
- Harvard Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Wei Lu
- StatNLP Research Group, Singapore University of Technology and Design, Singapore
- Julian Savulescu
- Murdoch Children's Research Institute, Melbourne, VIC, Australia; Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK
- Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore, Singapore; Artificial Intelligence and Digital Innovation, Singapore Eye Research Institute, Singapore National Eye Center, Singapore Health Service, Singapore; Byers Eye Institute, Stanford University, Palo Alto, CA, USA
19
Scott IA, van der Vegt A, Lane P, McPhail S, Magrabi F. Achieving large-scale clinician adoption of AI-enabled decision support. BMJ Health Care Inform 2024; 31:e100971. [PMID: 38816209 PMCID: PMC11141172 DOI: 10.1136/bmjhci-2023-100971] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2023] [Accepted: 05/15/2024] [Indexed: 06/01/2024] Open
Abstract
Computerised decision support (CDS) tools enabled by artificial intelligence (AI) seek to enhance accuracy and efficiency of clinician decision-making at the point of care. Statistical models developed using machine learning (ML) underpin most current tools. However, despite thousands of models and hundreds of regulator-approved tools internationally, large-scale uptake into routine clinical practice has proved elusive. While underdeveloped system readiness and investment in AI/ML within Australia and perhaps other countries are impediments, clinician ambivalence towards adopting these tools at scale could be a major inhibitor. We propose a set of principles and several strategic enablers for obtaining broad clinician acceptance of AI/ML-enabled CDS tools.
Affiliation(s)
- Ian A Scott
- Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Brisbane, Queensland, Australia
- Centre for Health Services Research, The University of Queensland Faculty of Medicine and Biomedical Sciences, Brisbane, Queensland, Australia
- Anton van der Vegt
- Digital Health Centre, The University of Queensland Faculty of Medicine and Biomedical Sciences, Herston, Queensland, Australia
- Paul Lane
- Safety, Quality and Innovation, The Prince Charles Hospital, Brisbane, Queensland, Australia
- Steven McPhail
- Australian Centre for Health Services Innovation, Queensland University of Technology Faculty of Health, Brisbane, Queensland, Australia
- Farah Magrabi
- Macquarie University, Sydney, New South Wales, Australia
20
Gordon ER, Trager MH, Kontos D, Weng C, Geskin LJ, Dugdale LS, Samie FH. Ethical considerations for artificial intelligence in dermatology: a scoping review. Br J Dermatol 2024; 190:789-797. [PMID: 38330217 DOI: 10.1093/bjd/ljae040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Revised: 12/26/2023] [Accepted: 01/23/2024] [Indexed: 02/10/2024]
Abstract
The field of dermatology is experiencing the rapid deployment of artificial intelligence (AI), from mobile applications (apps) for skin cancer detection to large language models like ChatGPT that can answer generalist or specialist questions about skin diagnoses. With these new applications, ethical concerns have emerged. In this scoping review, we aimed to identify the applications of AI to the field of dermatology and to understand their ethical implications. We used a multifaceted search approach, searching PubMed, MEDLINE, Cochrane Library and Google Scholar for primary literature, following the PRISMA Extension for Scoping Reviews guidance. Our advanced query included terms related to dermatology, AI and ethical considerations. Our search yielded 202 papers. After initial screening, 68 studies were included. Thirty-two were related to clinical image analysis and raised ethical concerns for misdiagnosis, data security, privacy violations and replacement of dermatologist jobs. Seventeen discussed limited skin of colour representation in datasets leading to potential misdiagnosis in the general population. Nine articles about teledermatology raised ethical concerns, including the exacerbation of health disparities, lack of standardized regulations, informed consent for AI use and privacy challenges. Seven addressed inaccuracies in the responses of large language models. Seven examined attitudes toward and trust in AI, with most patients requesting supplemental assessment by a physician to ensure reliability and accountability. Benefits of AI integration into clinical practice include increased patient access, improved clinical decision-making, efficiency and many others. However, safeguards must be put in place to ensure the ethical application of AI.
Affiliation(s)
- Emily R Gordon
- Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA
- Megan H Trager
- Columbia University Irving Medical Center, Departments of Dermatology
- Despina Kontos
- University of Pennsylvania, Perelman School of Medicine, Department of Radiology, Philadelphia, PA, USA
- Larisa J Geskin
- Columbia University Irving Medical Center, Departments of Dermatology
- Lydia S Dugdale
- Columbia University Vagelos College of Physicians and Surgeons, Department of Medicine, Center for Clinical Medical Ethics, New York, NY, USA
- Faramarz H Samie
- Columbia University Irving Medical Center, Departments of Dermatology
21
Leslie K, Myles S, Alraja AA, Chiu P, Schiller CJ, Nelson S, Adams TL. Professional regulation in the digital era: A qualitative case study of three professions in Ontario, Canada. PLoS One 2024; 19:e0303192. [PMID: 38728239 PMCID: PMC11086820 DOI: 10.1371/journal.pone.0303192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2023] [Accepted: 04/20/2024] [Indexed: 05/12/2024] Open
Abstract
Technology is transforming service delivery and practice in many regulated professions, altering required skills, scopes of practice, and the organization of professional work. Professional regulators face considerable pressure to facilitate technology-enabled work while adapting to digital changes in their practices and procedures. However, our understanding of how regulators are responding to technology-driven risks and the impact of technology on regulatory policy is limited. To examine the impact of technology and digitalization on regulation, we conducted an exploratory case study of the regulatory bodies for nursing, law, and social work in Ontario, Canada. Data were collected over two phases. First, we collected documents from the regulators' websites and regulatory consortiums. Second, we conducted key informant interviews with two representatives from each regulator. Data were thematically analyzed to explore the impact of technological change on regulatory activities and policies and to compare how regulatory structure and field shape this impact. Five themes were identified in our analysis: balancing efficiency potential with risks of certain technological advances; the potential for improving regulation through data analytics; considering how to regulate a technologically competent workforce; recalibrating pandemic emergency measures involving technology; and contemplating the future of technology on regulatory policy and practice. Regulators face ongoing challenges with providing equity-based approaches to regulating virtual practice, ensuring practitioners are technologically competent, and leveraging regulatory data to inform decision-making. Policymakers and regulators across Canada and internationally should prioritize risk-balanced policies, guidelines, and practice standards to support professional practice in the digital era.
Affiliation(s)
- Kathleen Leslie
- Faculty of Health Disciplines, Athabasca University, Athabasca, Alberta, Canada
- Sophia Myles
- Faculty of Health Disciplines, Athabasca University, Athabasca, Alberta, Canada
- School of Sociological and Anthropological Studies, University of Ottawa, Ottawa, Ontario, Canada
- Abeer A. Alraja
- Faculty of Health Disciplines, Athabasca University, Athabasca, Alberta, Canada
- Patrick Chiu
- Faculty of Nursing, University of Alberta, Edmonton, Alberta, Canada
- Catharine J. Schiller
- School of Nursing, University of Northern British Columbia, Prince George, British Columbia, Canada
- Sioban Nelson
- Faculty of Nursing, University of Toronto, Toronto, Ontario, Canada
- Tracey L. Adams
- Department of Sociology, Western University, London, Ontario, Canada
22
Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif Intell Med 2024; 151:102861. [PMID: 38555850 DOI: 10.1016/j.artmed.2024.102861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2023] [Revised: 03/19/2024] [Accepted: 03/25/2024] [Indexed: 04/02/2024]
Abstract
Healthcare organizations have realized that Artificial intelligence (AI) can provide a competitive edge through personalized patient experiences, improved patient outcomes, early diagnosis, augmented clinician capabilities, enhanced operational efficiencies, or improved medical service accessibility. However, deploying AI-driven tools in the healthcare ecosystem could be challenging. This paper categorizes AI applications in healthcare and comprehensively examines the challenges associated with deploying AI in medical practices at scale. As AI continues to make strides in healthcare, its integration presents various challenges, including production timelines, trust generation, privacy concerns, algorithmic biases, and data scarcity. The paper highlights that flawed business models and wrong workflows in healthcare practices cannot be rectified merely by deploying AI-driven tools. Healthcare organizations should re-evaluate root problems such as misaligned financial incentives (e.g., fee-for-service models), dysfunctional medical workflows (e.g., high rates of patient readmissions), poor care coordination between different providers, fragmented electronic health records systems, and inadequate patient education and engagement models in tandem with AI adoption. This study also explores the need for a cultural shift in viewing AI not as a threat but as an enabler that can enhance healthcare delivery and create new employment opportunities while emphasizing the importance of addressing underlying operational issues. The necessity of investments beyond finance is discussed, emphasizing the importance of human capital, continuous learning, and a supportive environment for AI integration. The paper also highlights the crucial role of clear regulations in building trust, ensuring safety, and guiding the ethical use of AI, calling for coherent frameworks addressing transparency, model accuracy, data quality control, liability, and ethics. 
Furthermore, this paper underscores the importance of advancing AI literacy within academia to prepare future healthcare professionals for an AI-driven landscape. Through careful navigation and proactive measures addressing these challenges, the healthcare community can harness AI's transformative power responsibly and effectively, revolutionizing healthcare delivery and patient care. The paper concludes with a vision and strategic suggestions for the future of healthcare with AI, emphasizing thoughtful, responsible, and innovative engagement as the pathway to realizing its full potential to unlock immense benefits for healthcare organizations, physicians, nurses, and patients while proactively mitigating risks.
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University (FIU), Modesto A. Maidique Campus, 11200 S.W. 8th St, RB 261B, Miami, FL 33199, United States
23
Kim MJ, Admane S, Chang YK, Shih KSK, Reddy A, Tang M, Cruz MDL, Taylor TP, Bruera E, Hui D. Chatbot Performance in Defining and Differentiating Palliative Care, Supportive Care, Hospice Care. J Pain Symptom Manage 2024; 67:e381-e391. [PMID: 38219964 DOI: 10.1016/j.jpainsymman.2024.01.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/02/2023] [Revised: 12/22/2023] [Accepted: 01/03/2024] [Indexed: 01/16/2024]
Abstract
CONTEXT Artificial intelligence (AI) chatbot platforms are increasingly used by patients as sources of information. However, there is limited data on the performance of these platforms, especially regarding palliative care terms. OBJECTIVES We evaluated the accuracy, comprehensiveness, reliability, and readability of three AI platforms in defining and differentiating "palliative care," "supportive care," and "hospice care." METHODS We asked ChatGPT, Microsoft Bing Chat, and Google Bard to define and differentiate "palliative care," "supportive care," and "hospice care" and to provide three references. Outputs were randomized and assessed by six blinded palliative care physicians using 0-10 scales (10 = best) for accuracy, comprehensiveness, and reliability. Readability was assessed using Flesch-Kincaid Grade Level and Flesch Reading Ease scores. RESULTS The mean (SD) accuracy scores for ChatGPT, Bard, and Bing Chat were 9.1 (1.3), 8.7 (1.5), and 8.2 (1.7), respectively; for comprehensiveness, the scores for the three platforms were 8.7 (1.5), 8.1 (1.9), and 5.6 (2.0), respectively; for reliability, the scores were 6.3 (2.5), 3.2 (3.1), and 7.1 (2.4), respectively. Despite generally high accuracy, we identified some major errors (e.g., Bard stated that supportive care had "the goal of prolonging life or even achieving a cure"). We found several major omissions, particularly with Bing Chat (e.g., no mention of interdisciplinary teams in palliative care or hospice care). References were often unreliable. Readability scores did not meet recommended levels for patient educational materials. CONCLUSION We identified important concerns regarding the accuracy, comprehensiveness, reliability, and readability of outputs from AI platforms. Further research is needed to improve their performance.
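The Flesch readability metrics used in this study are simple closed-form formulas over word, sentence, and syllable counts, so they are easy to reproduce. The sketch below is an illustrative minimal implementation, not the study's actual instrument: it uses a naive vowel-group syllable heuristic, whereas validated tools rely on dictionary lookups, so scores will differ slightly from published values.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count vowel groups, subtract a trailing silent "e".
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)          # mean words per sentence
    spw = syllables / len(words)               # mean syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw   # higher = easier to read
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # approximate US grade level
    return round(fre, 1), round(fkgl, 1)

# Short, monosyllabic text scores as very easy / early grade level.
fre, fkgl = readability("The cat sat on the mat. It was warm.")
```

Patient education materials are commonly recommended to target roughly a sixth-grade reading level, which is the benchmark the abstract's "recommended levels" refers to.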
Affiliation(s)
- Min Ji Kim, Sonal Admane, Yuchieh Kathryn Chang, Akhila Reddy, Michael Tang, Eduardo Bruera, David Hui: Department of Palliative Care, Rehabilitation, and Integrative Medicine, University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Maxine De La Cruz: Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Terry Pham Taylor: Department of Hospital Medicine, University of Texas MD Anderson Cancer Center, Houston, Texas, USA
24
Duggal I, Tripathi T. Ethical principles in dental healthcare: Relevance in the current technological era of artificial intelligence. J Oral Biol Craniofac Res 2024; 14:317-321. [PMID: 38645705] [PMCID: PMC11031811] [DOI: 10.1016/j.jobcr.2024.04.003] [Received: 12/06/2023] [Revised: 04/03/2024] [Accepted: 04/07/2024] [Indexed: 04/23/2024]
Abstract
In the current technological era, dental practitioners face various ethical challenges, highlighting the importance of bioethics in this healthcare discipline. The rise of artificial intelligence has recently sparked debate regarding the privacy of patient data. While these advancements may offer innovative treatment options, their long-term effects may not be fully understood, raising questions about the responsible implementation of such methods. Conscientious and ethical AI use in dentistry therefore requires that patients be notified about how their data are used and about the involvement of AI-based decision-making. This paper explores the key bioethical considerations in dental healthcare, with a focus on evidence-based AI development and use. The framework of ethical principles and guidelines provided here would foster trust between clinicians and patients while promoting the highest standards of care.
Affiliation(s)
- Isha Duggal, Tulika Tripathi: Department of Orthodontics and Dentofacial Orthopedics, Maulana Azad Institute of Dental Sciences, New Delhi, 110002, India
25
Wang W, Wang Y, Chen L, Ma R, Zhang M. Justice at the Forefront: Cultivating felt accountability towards Artificial Intelligence among healthcare professionals. Soc Sci Med 2024; 347:116717. [PMID: 38518481] [DOI: 10.1016/j.socscimed.2024.116717] [Received: 09/25/2023] [Revised: 02/10/2024] [Accepted: 02/20/2024] [Indexed: 03/24/2024]
Abstract
The advent of AI has ushered in a new era of patient care, but with it emerges a contentious debate surrounding accountability for algorithmic medical decisions. Within this discourse, a spectrum of views prevails, ranging from placing accountability on AI solution providers to laying it squarely on the shoulders of healthcare professionals. In response to this debate, this study, grounded in the mutualistic partner choice (MPC) model of the evolution of morality, seeks to establish a configurational framework for cultivating felt accountability towards AI among healthcare professionals. This framework underscores two pivotal conditions, AI ethics enactment and trusting belief in AI, and considers the influence of organizational complexity on the implementation of this framework. Drawing on a fuzzy-set qualitative comparative analysis (fsQCA) of a sample of 401 healthcare professionals, this study reveals that a) focusing on justice and autonomy in AI ethics enactment, along with building trusting belief in AI reliability and functionality, reinforces healthcare professionals' sense of felt accountability towards AI; b) fostering felt accountability towards AI necessitates establishing trust in AI functionality in high-complexity hospitals; and c) prioritizing justice in AI ethics enactment and trust in AI reliability is essential in low-complexity hospitals.
Affiliation(s)
- Weisha Wang: Research Center for Smarter Supply Chain, Business School, Soochow University, 50 Donghuan Road, Suzhou, 215006, China
- Yichuan Wang: Sheffield University Management School, University of Sheffield, Conduit Rd, Sheffield, S10 1FL, United Kingdom
- Long Chen: Brunel University London, United Kingdom
- Rui Ma: Greenwich Business School, University of Greenwich, United Kingdom
- Minhao Zhang: University of Bristol School of Management, University of Bristol, United Kingdom
26
Chadaga K, Prabhu S, Sampathila N, Chadaga R, Bhat D, Sharma AK, Swathi KS. SADXAI: Predicting social anxiety disorder using multiple interpretable artificial intelligence techniques. SLAS Technol 2024; 29:100129. [PMID: 38508237] [DOI: 10.1016/j.slast.2024.100129] [Received: 01/11/2024] [Accepted: 03/17/2024] [Indexed: 03/22/2024]
Abstract
Social anxiety disorder (SAD), also known as social phobia, is a psychological condition in which a person has a persistent and overwhelming fear of being negatively judged or observed by other individuals. This fear can affect them at work, in relationships, and in other social activities. An intricate combination of environmental and biological factors underlies the onset of this mental condition. SAD is diagnosed using criteria from the "Diagnostic and Statistical Manual of Mental Disorders" (DSM-5), based on several physical, emotional, and demographic symptoms. Artificial intelligence has been a boon for medicine and is regularly used to diagnose various health conditions and diseases. Hence, this study used demographic, emotional, and physical symptoms and multiple machine learning (ML) techniques to diagnose SAD. A thorough descriptive and statistical analysis was conducted before using the classifiers. Among all the models, AdaBoost and logistic regression obtained the highest accuracy of 88% each. Four eXplainable artificial intelligence (XAI) techniques were utilized to make the predictions interpretable, transparent, and understandable. According to the XAI analyses, the "Liebowitz Social Anxiety Scale questionnaire" and "the fear of speaking in public" are the most critical attributes in the diagnosis of SAD. This clinical decision support system framework could be utilized in suitable settings such as schools, hospitals, and workplaces to identify SAD.
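The abstract does not name the four XAI techniques used. As a generic illustration of one simple model-agnostic interpretability idea, the sketch below (a hypothetical toy model and features, not the paper's) estimates permutation importance: how much accuracy drops when one feature's values are shuffled while the others stay fixed.

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when feature j's column is shuffled;
    larger drops mean the model relies on that feature more."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(base - accuracy(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy setup: the "classifier" thresholds feature 0 (say, an anxiety-scale
# score) and ignores feature 1 (noise), so only feature 0 should matter.
scores = list(range(10))
noise = [3, 8, 1, 9, 4, 7, 2, 6, 0, 5]
X = [[s, n] for s, n in zip(scores, noise)]
y = [int(s > 5) for s in scores]
model = lambda row: int(row[0] > 5)
imp = permutation_importance(model, X, y)
```

Shuffling the ignored noise feature leaves accuracy unchanged (importance 0), while shuffling the score feature degrades it, matching the intuition behind attributions like "the Liebowitz scale is the most critical attribute."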
Affiliation(s)
- Krishnaraj Chadaga, Srikanth Prabhu: Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Niranjana Sampathila, Devadas Bhat: Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Rajagopala Chadaga: Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Akhilesh Kumar Sharma: Department of Data Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
- K S Swathi: Prasanna School of Public Health, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
27
Reddy S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implement Sci 2024; 19:27. [PMID: 38491544] [PMCID: PMC10941464] [DOI: 10.1186/s13012-024-01357-9] [Received: 08/17/2023] [Accepted: 03/06/2024] [Indexed: 03/18/2024]
Abstract
BACKGROUND: Artificial intelligence (AI), particularly generative AI, has emerged as a transformative tool in healthcare, with the potential to revolutionize clinical decision-making and improve health outcomes. Generative AI, capable of generating new data such as text and images, holds promise for enhancing patient care, revolutionizing disease diagnosis, and expanding treatment options. However, the utility and impact of generative AI in healthcare remain poorly understood, with concerns around ethical and medico-legal implications, integration into healthcare service delivery, and workforce utilisation. There is also no clear pathway to implement and integrate generative AI in healthcare delivery.
METHODS: This article provides a comprehensive overview of the use of generative AI in healthcare, focusing on the utility of the technology and its translational application, and highlighting the need for careful planning, execution, and management of expectations when adopting generative AI in clinical medicine. Key considerations include data privacy, security, and the irreplaceable role of clinicians' expertise. Frameworks like the technology acceptance model (TAM) and the Non-Adoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) model are considered to promote responsible integration. These frameworks allow anticipating and proactively addressing barriers to adoption, facilitating stakeholder participation, and responsibly transitioning care systems to harness generative AI's potential.
RESULTS: Generative AI has the potential to transform healthcare through automated systems, enhanced clinical decision-making, and democratization of expertise, with diagnostic support tools providing timely, personalized suggestions. Generative AI applications across billing, diagnosis, treatment, and research can also make healthcare delivery more efficient, equitable, and effective. However, integration of generative AI necessitates meticulous change management and risk mitigation strategies. Technological capabilities alone cannot shift complex care ecosystems overnight; rather, structured adoption programs grounded in implementation science are imperative.
CONCLUSIONS: This article argues that generative AI can usher in tremendous healthcare progress if introduced responsibly. Strategic adoption based on implementation science, incremental deployment, and balanced messaging around opportunities versus limitations helps promote safe, ethical generative AI integration. Extensive real-world piloting and iteration aligned to clinical priorities should drive development. With conscientious governance centred on human wellbeing over technological novelty, generative AI can enhance accessibility, affordability, and quality of care. As these models continue advancing rapidly, ongoing reassessment and transparent communication around their strengths and weaknesses remain vital to restoring trust, realizing positive potential and, most importantly, improving patient outcomes.
Affiliation(s)
- Sandeep Reddy: Deakin School of Medicine, Waurn Ponds, Geelong, VIC, 3215, Australia
28
Lv C, Guo W, Yin X, Liu L, Huang X, Li S, Zhang L. Innovative applications of artificial intelligence during the COVID-19 pandemic. Infectious Medicine 2024; 3:100095. [PMID: 38586543] [PMCID: PMC10998276] [DOI: 10.1016/j.imj.2024.100095] [Received: 10/31/2023] [Revised: 12/16/2023] [Accepted: 02/18/2024] [Indexed: 04/09/2024]
Abstract
The COVID-19 pandemic has created unprecedented challenges worldwide. Artificial intelligence (AI) technologies hold tremendous potential for tackling key aspects of pandemic management and response. In the present review, we discuss the possibilities of AI technology in addressing the global challenges posed by the COVID-19 pandemic. First, we outline the multiple impacts of the pandemic on public health, the economy, and society. Next, we focus on innovative applications of advanced AI technologies in key areas such as COVID-19 prediction, detection, control, and drug discovery for treatment. Specifically, AI-based predictive analytics models can use clinical, epidemiological, and omics data to forecast disease spread and patient outcomes. Additionally, deep neural networks enable rapid diagnosis through medical imaging. Intelligent systems can support risk assessment, decision-making, and social sensing, thereby improving epidemic control and public health policies. Furthermore, high-throughput virtual screening enables AI to accelerate the identification of therapeutic drug candidates and opportunities for drug repurposing. Finally, we discuss future research directions for AI technology in combating COVID-19, emphasizing the importance of interdisciplinary collaboration. Though promising, barriers related to model generalization, data quality, infrastructure readiness, and ethical risks must be addressed to fully translate these innovations into real-world impact. Multidisciplinary collaboration engaging diverse expertise and stakeholders is imperative for developing robust, responsible, and human-centered AI solutions against COVID-19 and future public health emergencies.
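For context on the disease-spread forecasting mentioned above, the classical non-AI baseline that such models are typically compared against is a compartmental model. The following is a minimal illustrative SIR simulation; the parameters and population below are arbitrary placeholders, not values from the review:

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of the SIR model: beta is the transmission rate,
    gamma the recovery rate; the total population stays constant."""
    n = s + i + r
    new_infections = beta * s * i / n * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate_sir(s0, i0, r0, beta, gamma, days):
    """Run the SIR model for `days` steps and return the trajectory."""
    state = (float(s0), float(i0), float(r0))
    history = [state]
    for _ in range(days):
        state = sir_step(*state, beta, gamma)
        history.append(state)
    return history
```

With beta > gamma the basic reproduction number R0 = beta/gamma exceeds 1, so the infected compartment first grows and then declines as susceptibles are depleted; AI-based forecasters aim to beat this kind of baseline by also ingesting clinical, mobility, and omics signals.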
Affiliation(s)
- Chenrui Lv, Wenqiang Guo, Xinyi Yin, Xinlei Huang, Shimin Li, Li Zhang: Huazhong Agricultural University, Wuhan 430070, China
- Liu Liu: National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention; Chinese Center for Tropical Diseases Research, Shanghai 200001, China
29
Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon 2024; 10:e26297. [PMID: 38384518] [PMCID: PMC10879008] [DOI: 10.1016/j.heliyon.2024.e26297] [Received: 12/27/2023] [Accepted: 02/09/2024] [Indexed: 02/23/2024]
Abstract
Over the past decade, there has been a notable surge in AI-driven research, specifically geared toward enhancing crucial clinical processes and outcomes. The potential of AI-powered decision support systems to streamline clinical workflows, assist in diagnostics, and enable personalized treatment is increasingly evident. Nevertheless, the introduction of these cutting-edge solutions poses substantial challenges in clinical and care environments, necessitating a thorough exploration of ethical, legal, and regulatory considerations. A robust governance framework is imperative to foster the acceptance and successful implementation of AI in healthcare. This article delves deep into the critical ethical and regulatory concerns entangled with the deployment of AI systems in clinical practice. It not only provides a comprehensive overview of the role of AI technologies but also offers an insightful perspective on the ethical and regulatory challenges, making a pioneering contribution to the field. This research aims to address the current challenges in digital healthcare by presenting valuable recommendations for all stakeholders eager to advance the development and implementation of innovative AI systems.
Affiliation(s)
- Ciro Mennella, Umberto Maniscalco, Giuseppe De Pietro, Massimo Esposito: Institute for High-Performance Computing and Networking (ICAR), National Research Council of Italy (CNR), Italy
30
Palaniappan K, Lin EYT, Vogel S. Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector. Healthcare (Basel) 2024; 12:562. [PMID: 38470673] [PMCID: PMC10930608] [DOI: 10.3390/healthcare12050562] [Received: 01/23/2024] [Revised: 02/23/2024] [Accepted: 02/26/2024] [Indexed: 03/14/2024]
Abstract
The healthcare sector is faced with challenges due to a shrinking healthcare workforce and a rise in chronic diseases that are worsening with demographic and epidemiological shifts. Digital health interventions that include artificial intelligence (AI) are being identified as some of the potential solutions to these challenges. The ultimate aim of these AI systems is to improve the patient's health outcomes and satisfaction, the overall population's health, and the well-being of healthcare professionals. The applications of AI in healthcare services are vast and are expected to assist, automate, and augment several healthcare services. Like any other emerging innovation, AI in healthcare also comes with its own risks and requires regulatory controls. A review of the literature was undertaken to study the existing regulatory landscape for AI in the healthcare services sector in developed nations. In the global regulatory landscape, most of the regulations for AI revolve around Software as a Medical Device (SaMD) and are regulated under digital health products. However, it is necessary to note that the current regulations may not suffice as AI-based technologies are capable of working autonomously, adapting their algorithms, and improving their performance over time based on the new real-world data that they have encountered. Hence, a global regulatory convergence for AI in healthcare, similar to the voluntary AI code of conduct that is being developed by the US-EU Trade and Technology Council, would be beneficial to all nations, be it developing or developed.
Affiliation(s)
- Kavitha Palaniappan: Centre of Regulatory Excellence, Duke-NUS Medical School, Singapore 169857, Singapore
31
Barwise AK, Curtis S, Diedrich DA, Pickering BW. Using artificial intelligence to promote equitable care for inpatients with language barriers and complex medical needs: clinical stakeholder perspectives. J Am Med Inform Assoc 2024; 31:611-621. [PMID: 38099504] [PMCID: PMC10873784] [DOI: 10.1093/jamia/ocad224] [Received: 06/23/2023] [Accepted: 11/14/2023] [Indexed: 02/18/2024]
Abstract
OBJECTIVES: Inpatients with language barriers and complex medical needs suffer disparities in quality of care, safety, and health outcomes. Although in-person interpreters are particularly beneficial for these patients, they are underused. We plan to use machine learning predictive analytics to reliably identify patients with language barriers and complex medical needs and prioritize them for in-person interpreters.
MATERIALS AND METHODS: This qualitative study used stakeholder engagement through semi-structured interviews to understand the perceived risks and benefits of artificial intelligence (AI) in this domain. Stakeholders included clinicians, interpreters, and personnel involved in caring for these patients or in organizing interpreters. Data were coded and analyzed using NVivo software.
RESULTS: We completed 49 interviews. Key perceived risks included concerns about transparency, accuracy, redundancy, privacy, perceived stigmatization among patients, alert fatigue, and supply-demand issues. Key perceived benefits included increased awareness of in-person interpreters; improved standard of care and prioritization for interpreter utilization; a streamlined process for accessing interpreters; empowered clinicians; and the potential to overcome clinician bias.
DISCUSSION: This is the first study to elicit stakeholder perspectives on the use of AI with the goal of improving clinical care for patients with language barriers. Perceived benefits and risks in this domain overlapped with known hazards and values of AI, but some benefits were unique to addressing the challenges of providing interpreter services to patients with language barriers.
CONCLUSION: Artificial intelligence to identify and prioritize patients for interpreter services has the potential to improve the standard of care and address healthcare disparities among patients with language barriers.
Affiliation(s)
- Amelia K Barwise: Biomedical Ethics Research Program, Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, MN 55902, United States
- Susan Curtis: Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN 55902, United States
- Daniel A Diedrich, Brian W Pickering: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55902, United States
32
Akhlaghi H, Freeman S, Vari C, McKenna B, Braitberg G, Karro J, Tahayori B. Machine learning in clinical practice: Evaluation of an artificial intelligence tool after implementation. Emerg Med Australas 2024; 36:118-124. [PMID: 37771067] [DOI: 10.1111/1742-6723.14325] [Received: 04/18/2023] [Revised: 09/14/2023] [Accepted: 09/19/2023] [Indexed: 09/30/2023]
Abstract
OBJECTIVE: Artificial intelligence (AI) has gradually found its way into healthcare, and its future integration into clinical practice is inevitable. In the present study, we evaluate the accuracy of a novel AI algorithm designed to predict admission based on a triage note after clinical implementation. This is the first such study to investigate real-time AI performance in the emergency setting.
METHODS: The novel AI algorithm that predicts admission using a triage note was translated into clinical practice and integrated within St Vincent's Hospital Melbourne's electronic emergency patient management system. Data were collected from 1 January 2021 to 17 August 2022 to evaluate the diagnostic accuracy of the AI system after implementation.
RESULTS: A total of 77 125 ED presentations were included. The live AI algorithm had a sensitivity of 73.1% (95% confidence interval 72.5-73.8), specificity of 74.3% (73.9-74.7), positive predictive value of 50% (49.6-50.4), and negative predictive value of 88.7% (88.5-89), with a total accuracy of 74% (73.7-74.3). The accuracy of the system was lowest for admission to psychiatric units (34%) and highest for gastroenterology and medical admissions (84% and 80%, respectively).
CONCLUSION: Our evaluation showed that the real-time AI clinical decision-support tool was less accurate than in its original validation. Although the real-time sensitivity and specificity of the AI tool were still acceptable for a decision-support tool in the ED, we propose that continuous training and evaluation of AI-enabled clinical support tools in healthcare be conducted to ensure consistent accuracy and performance and to prevent inadvertent consequences.
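The diagnostic measures reported above are simple ratios over a 2x2 confusion matrix. A minimal sketch with hypothetical counts (the study's raw cell counts are not given in this abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from confusion-matrix counts:
    tp/fp/fn/tn = true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Because PPV depends on how common the positive class is, a tool can report reasonable sensitivity and specificity yet a modest PPV when admissions are the minority outcome, which is consistent with the pattern in the results above (high NPV, 50% PPV).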
Affiliation(s)
- Hamed Akhlaghi: Department of Emergency Medicine, St Vincent's Hospital Melbourne; Department of Medical Education, The University of Melbourne; Faculty of Health, Deakin University, Melbourne, Victoria, Australia
- Sam Freeman: Department of Emergency Medicine, St Vincent's Hospital Melbourne; SensiLab, Monash University, Melbourne, Victoria, Australia
- Cynthia Vari, Bede McKenna, Jonathan Karro: Department of Emergency Medicine, St Vincent's Hospital Melbourne, Melbourne, Victoria, Australia
- George Braitberg: Department of Emergency Medicine, Austin Health; Department of Critical Care, The University of Melbourne, Melbourne, Victoria, Australia
- Bahman Tahayori: Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Melbourne, Victoria, Australia
33
Gavette H, McDonald CL, Kostick-Quenet K, Mullen A, Najafi B, Finco MG. Advances in prosthetic technology: a perspective on ethical considerations for development and clinical translation. Frontiers in Rehabilitation Sciences 2024; 4:1335966. [PMID: 38293290] [PMCID: PMC10824968] [DOI: 10.3389/fresc.2023.1335966] [Received: 11/09/2023] [Accepted: 12/29/2023] [Indexed: 02/01/2024]
Abstract
Technological advancements of prostheses in recent years, such as haptic feedback, active power, and machine learning for prosthetic control, have opened new doors for improved functioning, satisfaction, and overall quality of life. However, little attention has been paid to ethical considerations surrounding the development and translation of prosthetic technologies into clinical practice. This article, based on current literature, presents perspectives surrounding ethical considerations from the authors' multidisciplinary views as prosthetists (HG, AM, CLM, MGF), as well as combined research experience working directly with people using prostheses (AM, CLM, MGF), wearable technologies for rehabilitation (MGF, BN), machine learning and artificial intelligence (BN, KKQ), and ethics of advanced technologies (KKQ). The target audience for this article includes developers, manufacturers, and researchers of prosthetic devices and related technology. We present several ethical considerations for current advances in prosthetic technology, as well as topics for future research, that may inform product and policy decisions and positively influence the lives of those who can benefit from advances in prosthetic technology.
Affiliation(s)
- Hayden Gavette, Ashley Mullen: Orthotics and Prosthetics Program, School of Health Professions, Baylor College of Medicine, Houston, TX, United States
- Cody L. McDonald: Department of Rehabilitation Medicine, University of Washington, Seattle, WA, United States
- Kristin Kostick-Quenet: Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Bijan Najafi: Interdisciplinary Consortium on Advanced Motion Performance Lab (iCAMP), Department of Surgery, Baylor College of Medicine, Houston, TX, United States
- M. G. Finco: Orthotics and Prosthetics Program, School of Health Professions, and Interdisciplinary Consortium on Advanced Motion Performance Lab (iCAMP), Department of Surgery, Baylor College of Medicine, Houston, TX, United States
34
Ong JCL, Seng BJJ, Law JZF, Low LL, Kwa ALH, Giacomini KM, Ting DSW. Artificial intelligence, ChatGPT, and other large language models for social determinants of health: Current state and future directions. Cell Rep Med 2024; 5:101356. [PMID: 38232690] [PMCID: PMC10829781] [DOI: 10.1016/j.xcrm.2023.101356] [Received: 05/03/2023] [Revised: 10/12/2023] [Accepted: 12/10/2023] [Indexed: 01/19/2024]
Abstract
This perspective highlights the importance of addressing social determinants of health (SDOH) in patient health outcomes and health inequity, a global problem exacerbated by the COVID-19 pandemic. We provide a broad discussion on current developments in digital health and artificial intelligence (AI), including large language models (LLMs), as transformative tools in addressing SDOH factors, offering new capabilities for disease surveillance and patient care. Simultaneously, we bring attention to challenges, such as data standardization, infrastructure limitations, digital literacy, and algorithmic bias, that could hinder equitable access to AI benefits. For LLMs, we highlight potential unique challenges and risks including environmental impact, unfair labor practices, inadvertent disinformation or "hallucinations," proliferation of bias, and infringement of copyrights. We propose the need for a multitiered approach to digital inclusion as an SDOH and the development of ethical and responsible AI practice frameworks globally and provide suggestions on bridging the gap from development to implementation of equitable AI technologies.
Affiliation(s)
- Jasmine Chiat Ling Ong: Division of Pharmacy, Singapore General Hospital, Singapore; SingHealth Duke-NUS Medicine Academic Clinical Programme, Singapore
- Benjamin Jun Jie Seng: MOHH Holdings (Singapore) Pte., Ltd., Singapore; SingHealth Duke-NUS Family Medicine Academic Clinical Programme, Singapore
- Lian Leng Low: SingHealth Duke-NUS Family Medicine Academic Clinical Programme, Singapore; Population Health and Integrated Care Office, Singapore General Hospital, Singapore; Centre for Population Health Research and Implementation, SingHealth Regional Health System, Singapore; Outram Community Hospital, SingHealth Community Hospitals, Singapore
- Andrea Lay Hoon Kwa: Division of Pharmacy, Singapore General Hospital, Singapore; SingHealth Duke-NUS Medicine Academic Clinical Programme, Singapore; Emerging Infectious Diseases, Duke-NUS Medical School, Singapore
- Kathleen M Giacomini: Department of Bioengineering and Therapeutic Sciences, Schools of Pharmacy and Medicine, University of California, San Francisco, San Francisco, CA, USA
- Daniel Shu Wei Ting: Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore; Byers Eye Institute, Stanford University, Stanford, CA, USA
35
Tripathi S, Tabari A, Mansur A, Dabbara H, Bridge CP, Daye D. From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer. Diagnostics (Basel) 2024; 14:174. PMID: 38248051; PMCID: PMC10814554; DOI: 10.3390/diagnostics14020174.
Abstract
Pancreatic cancer is a highly aggressive and difficult-to-detect cancer with a poor prognosis. Late diagnosis is common due to a lack of early symptoms, specific markers, and the challenging location of the pancreas. Imaging technologies have improved diagnosis, but there is still room for improvement in standardizing guidelines. Biopsies and histopathological analysis are challenging due to tumor heterogeneity. Artificial Intelligence (AI) revolutionizes healthcare by improving diagnosis, treatment, and patient care. AI algorithms can analyze medical images with precision, aiding in early disease detection. AI also plays a role in personalized medicine by analyzing patient data to tailor treatment plans. It streamlines administrative tasks, such as medical coding and documentation, and provides patient assistance through AI chatbots. However, challenges include data privacy, security, and ethical considerations. This review article focuses on the potential of AI in transforming pancreatic cancer care, offering improved diagnostics, personalized treatments, and operational efficiency, leading to better patient outcomes.
Affiliation(s)
- Satvik Tripathi: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02115, USA
- Azadeh Tabari: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Harvard Medical School, Boston, MA 02115, USA
- Arian Mansur: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Harvard Medical School, Boston, MA 02115, USA
- Harika Dabbara: Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
- Christopher P. Bridge: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02115, USA
- Dania Daye: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02115, USA
36
Bekbolatova M, Mayer J, Ong CW, Toma M. Transformative Potential of AI in Healthcare: Definitions, Applications, and Navigating the Ethical Landscape and Public Perspectives. Healthcare (Basel) 2024; 12:125. PMID: 38255014; PMCID: PMC10815906; DOI: 10.3390/healthcare12020125.
Abstract
Artificial intelligence (AI) has emerged as a crucial tool in healthcare with the primary aim of improving patient outcomes and optimizing healthcare delivery. By harnessing machine learning algorithms, natural language processing, and computer vision, AI enables the analysis of complex medical data. The integration of AI into healthcare systems aims to support clinicians, personalize patient care, and enhance population health, all while addressing the challenges posed by rising costs and limited resources. As a subdivision of computer science, AI focuses on the development of advanced algorithms capable of performing complex tasks that were once reliant on human intelligence. The ultimate goal is to achieve human-level performance with improved efficiency and accuracy in problem-solving and task execution, thereby reducing the need for human intervention. Various industries, including engineering, media/entertainment, finance, and education, have already reaped significant benefits by incorporating AI systems into their operations. Notably, the healthcare sector has witnessed rapid growth in the utilization of AI technology. Nevertheless, there remains untapped potential for AI to truly revolutionize the industry. It is important to note that despite concerns about job displacement, AI in healthcare should not be viewed as a threat to human workers. Instead, AI systems are designed to augment and support healthcare professionals, freeing up their time to focus on more complex and critical tasks. By automating routine and repetitive tasks, AI can alleviate the burden on healthcare professionals, allowing them to dedicate more attention to patient care and meaningful interactions. However, legal and ethical challenges must be addressed when embracing AI technology in medicine, alongside comprehensive public education to ensure widespread acceptance.
Affiliation(s)
- Molly Bekbolatova: Department of Osteopathic Manipulative Medicine, College of Osteopathic Medicine, New York Institute of Technology, Old Westbury, NY 11568, USA
- Jonathan Mayer: Department of Osteopathic Manipulative Medicine, College of Osteopathic Medicine, New York Institute of Technology, Old Westbury, NY 11568, USA
- Chi Wei Ong: School of Chemistry, Chemical Engineering, and Biotechnology, Nanyang Technological University, 62 Nanyang Drive, Singapore 637459, Singapore
- Milan Toma: Department of Osteopathic Manipulative Medicine, College of Osteopathic Medicine, New York Institute of Technology, Old Westbury, NY 11568, USA
37
Carapinha JL, Botes D, Carapinha R. Balancing innovation and ethics in AI governance for health technology assessment. J Med Econ 2024; 27:754-757. PMID: 38711204; DOI: 10.1080/13696998.2024.2352821.
Affiliation(s)
- João L Carapinha: Syenza, Anaheim, CA, USA; Northeastern University School of Pharmacy, Boston, MA, USA
- Danélia Botes: Health Economics and Outcomes Research Division, Syenza, Pretoria, South Africa
- René Carapinha: Dynamic Intelligence Division, Syenza, Andorra la Vella, Andorra
38
Wang B, Asan O, Zhang Y. Shaping the future of chronic disease management: Insights into patient needs for AI-based homecare systems. Int J Med Inform 2024; 181:105301. PMID: 38029700; DOI: 10.1016/j.ijmedinf.2023.105301.
Abstract
BACKGROUND The rising demand for healthcare resources, especially in chronic disease management, has elevated the importance of Artificial Intelligence (AI) in healthcare. While AI-based homecare systems are being developed, the perspectives of chronic patients, who are one of the primary beneficiaries and risk bearers of these technologies, remain largely under-researched. While recent research has highlighted the importance of AI-based homecare systems, the current understanding of patients' desired designs and features is still limited. OBJECTIVE This paper explores chronic patients' perspectives regarding AI-based homecare systems, an area currently underrepresented in research. We aim to identify the factors influencing their decision to use such systems, elucidate the potential roles of government and other concerned authorities, and provide feedback to AI developers to enhance adoption, system design, and usability and improve the overall healthcare experiences of chronic patients. METHOD A web-based open-ended questionnaire was designed to gather the perspectives of chronic patients about AI-based homecare systems. In total, responses from 181 participants were collected. Using Krippendorff's clustering technique, an inductive thematic analysis was performed to identify the main themes and their respective subthemes. RESULT Through rigorous coding and thematic analysis of the collected responses, we identified four major themes further segmented into thirteen subthemes. 
These four primary themes were: 1) "Personalized Design", emphasizing the need for patients to manage their health condition better through personalized and educational resources and user-friendly interfaces; 2) "Emotional & Social Support", underscoring the desire for AI systems to facilitate social connectivity and provide emotional support to improve the well-being of chronic patients at home; 3) "System Integration & Proactive Care", addressing the importance of seamless communication, proactive patient monitoring, and integration with existing healthcare platforms; and 4) "Ethics & Regulation", prioritizing ethical guidelines, regulatory compliance, and affordability in the design. CONCLUSION This study has offered significant insights into the needs and expectations of chronic patients regarding AI-based home care systems. The findings highlight the importance of personalized and accessible care, emotional and social support, seamless system integration, proactive care, and ethical considerations in designing and implementing such systems. By aligning the design and operation of these systems with the lived experiences and expectations of patients, we can better ensure their acceptance and effectiveness.
Affiliation(s)
- Bijun Wang: Department of Business Analytics and Data Science, Florida Polytechnic University, Lakeland, FL 33805, USA
- Onur Asan: School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ 07047, USA
- Yiqi Zhang: Department of Industrial and Manufacturing Engineering, Penn State University, State College, PA 16801, USA
39
Bélisle-Pipon JC, Powell M, English R, Malo MF, Ravitsky V, Bensoussan Y. Stakeholder perspectives on ethical and trustworthy voice AI in health care. Digit Health 2024; 10:20552076241260407. PMID: 39055787; PMCID: PMC11271113; DOI: 10.1177/20552076241260407.
Abstract
Objective Voice as a health biomarker using artificial intelligence (AI) is gaining momentum in research. The noninvasiveness of voice data collection through accessible technology (such as smartphones, telehealth, and ambient recordings) or within clinical contexts means voice AI may help address health disparities and promote the inclusion of marginalized communities. However, the development of AI-ready voice datasets free from bias and discrimination is a complex task. The objective of this study is to better understand the perspectives of engaged and interested stakeholders regarding ethical and trustworthy voice AI, to inform both further ethical inquiry and technology innovation. Methods A questionnaire was administered to voice AI experts, clinicians, scholars, patients, trainees, and policy-makers who participated in the 2023 Voice AI Symposium organized by the Bridge2AI-Voice AI Consortium. The survey used a mix of Likert-scale, ranking, and open-ended questions. A total of 27 stakeholders participated in the study. Results The main results of the study are the identification of priorities in terms of ethical issues, an initial definition of ethically sourced data for voice AI, insights into the use of synthetic voice data, and proposals for acting on the trustworthiness of voice AI. The study shows a diversity of perspectives and adds nuance to the planning and development of ethical and trustworthy voice AI. Conclusions This study represents the first stakeholder survey related to voice as a biomarker of health published to date. This study sheds light on the critical importance of ethics and trustworthiness in the development of voice AI technologies for health applications.
Affiliation(s)
- Maria Powell: Vanderbilt University Medical Center, Department of Otolaryngology-Head & Neck Surgery, Nashville, TN, USA
- Renee English: Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada
- Vardit Ravitsky: Hastings Center, Garrison, NY, USA; Department of Global Health and Social Medicine, Harvard University, Cambridge, MA, USA
- Yael Bensoussan: Department of Otolaryngology-Head & Neck Surgery, University of South Florida, Tampa, FL, USA
40
Long X, Deng H, Zhang Z, Liu T, Yu X, Gong P, Tian L. Development and evaluation of acceptance scale for artificial intelligence in digestive endoscopy by subjects. Zhong Nan Da Xue Xue Bao Yi Xue Ban (Journal of Central South University. Medical Sciences) 2023; 48:1844-1853. PMID: 38448378; PMCID: PMC10930752; DOI: 10.11817/j.issn.1672-7347.2023.230225.
Abstract
OBJECTIVES Digestive endoscopy is an important diagnostic and therapeutic tool for digestive system diseases. The artificial intelligence (AI)-assisted system in endoscopy (hereinafter referred to as AI in digestive endoscopy) has broad application prospects in the field of digestive endoscopy. The trust and acceptance of endoscopic subjects are the cornerstone of the research, application, and promotion of AI in digestive endoscopy. Currently, the tools for measuring the acceptance of AI in digestive endoscopy by subjects are limited both in China and abroad. This study aims to develop a scale for measuring the acceptance of AI in digestive endoscopy by subjects and to evaluate its reliability and validity. METHODS By conducting literature research, an item pool and dimensions were constructed, and a preliminary scale was developed using the Delphi method. Through the first stage of the survey on the subjects, the reliability and validity of the scale were tested, and the revised scale was used for the second stage of the survey on the subjects to further verify the structural validity of the scale. RESULTS The acceptance scale for AI in digestive endoscopy included 11 items in 3 dimensions: accuracy, ethics, benefit and willingness. In the first stage of the survey, 351 valid questionnaires were collected, and the Cronbach's α was 0.864. The correlation coefficient between the total score of the scale and the score of the test item was 0.636, and the Kaiser-Meyer-Olkin (KMO) value in exploratory factor analysis was 0.788. In the second stage of the survey, 335 valid questionnaires were collected, and in confirmatory factor analysis, the χ²/df was 3.774, while the root mean squared error of approximation (RMSEA) was 0.091. CONCLUSIONS The acceptance scale for AI in digestive endoscopy by subjects developed in this study has good reliability and validity.
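As an editorial aside, the fit statistics reported in this abstract can be cross-checked: RMSEA depends on the model χ² only through the χ²/df ratio and the sample size, so the reported values should agree with each other. A minimal sketch in plain Python (the function name and chosen df are ours, purely for illustration):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean squared error of approximation from chi-square, df, and sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# The abstract reports chi2/df = 3.774 with N = 335 valid questionnaires.
# Because chi2 = 3.774 * df, the df cancels and the implied RMSEA is fixed:
ratio, n = 3.774, 335
implied = math.sqrt((ratio - 1.0) / (n - 1))
print(round(implied, 3))  # 0.091, matching the reported RMSEA

# Sanity check: any concrete df (41 here is arbitrary) gives the same value
assert abs(rmsea(3.774 * 41, 41, n) - implied) < 1e-12
```

So the reported χ²/df of 3.774, N of 335, and RMSEA of 0.091 are internally consistent.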
Affiliation(s)
- Xiuyan Long: Department of Pediatrics, Third Xiangya Hospital, Central South University, Changsha 410013, China
- Haijun Deng: School of Mathematics & Statistics, Guizhou University of Finance and Economics, Guiyang 550025, China
- Zinan Zhang: Department of Gastroenterology, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Tao Liu: Eight-Year Program of Clinical Medicine, Xiangya School of Medicine, Central South University, Changsha 410013, China
- Xiaoyu Yu: Department of Gastroenterology, Third Xiangya Hospital, Central South University, Changsha 410013, China
- Pan Gong: Department of Gastroenterology, Third Xiangya Hospital, Central South University, Changsha 410013, China
- Li Tian: Department of Gastroenterology, Third Xiangya Hospital, Central South University, Changsha 410013, China
41
Staes CJ, Beck AC, Chalkidis G, Scheese CH, Taft T, Guo JW, Newman MG, Kawamoto K, Sloss EA, McPherson JP. Design of an interface to communicate artificial intelligence-based prognosis for patients with advanced solid tumors: a user-centered approach. J Am Med Inform Assoc 2023; 31:174-187. PMID: 37847666; PMCID: PMC10746322; DOI: 10.1093/jamia/ocad201.
Abstract
OBJECTIVES To design an interface to support communication of machine learning (ML)-based prognosis for patients with advanced solid tumors, incorporating oncologists' needs and feedback throughout design. MATERIALS AND METHODS Using an interdisciplinary user-centered design approach, we performed 5 rounds of iterative design to refine an interface, involving expert review based on usability heuristics, input from a color-blind adult, and 13 individual semi-structured interviews with oncologists. Individual interviews included patient vignettes and a series of interfaces populated with representative patient data and predicted survival for each treatment decision point when a new line of therapy (LoT) was being considered. Ongoing feedback informed design decisions, and directed qualitative content analysis of interview transcripts was used to evaluate usability and identify enhancement requirements. RESULTS Design processes resulted in an interface with 7 sections, each addressing user-focused questions, supporting oncologists to "tell a story" as they discuss prognosis during a clinical encounter. The iteratively enhanced interface both triggered and reflected design decisions relevant when attempting to communicate ML-based prognosis, and exposed misassumptions. Clinicians requested enhancements that emphasized interpretability over explainability. Qualitative findings confirmed that previously identified issues were resolved and clarified necessary enhancements (eg, use months not days) and concerns about usability and trust (eg, address LoT received elsewhere). Appropriate use should be in the context of a conversation with an oncologist. CONCLUSION User-centered design, ongoing clinical input, and a visualization to communicate ML-related outcomes are important elements for designing any decision support tool enabled by artificial intelligence, particularly when communicating prognosis risk.
Affiliation(s)
- Catherine J Staes: College of Nursing, University of Utah, Salt Lake City, UT 84112, USA; Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, USA
- Anna C Beck: Department of Internal Medicine, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, USA
- George Chalkidis: Healthcare IT Research Department, Center for Digital Services, Hitachi Ltd., Tokyo, Japan
- Carolyn H Scheese: College of Nursing, University of Utah, Salt Lake City, UT 84112, USA; Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, USA
- Teresa Taft: Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, USA
- Jia-Wen Guo: College of Nursing, University of Utah, Salt Lake City, UT 84112, USA; Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, USA
- Michael G Newman: Department of Population Sciences, Huntsman Cancer Institute, Salt Lake City, UT 84112, USA
- Kensaku Kawamoto: Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, USA
- Elizabeth A Sloss: College of Nursing, University of Utah, Salt Lake City, UT 84112, USA
- Jordan P McPherson: Department of Pharmacotherapy, College of Pharmacy, University of Utah, Salt Lake City, UT 84108, USA; Department of Pharmacy, Huntsman Cancer Institute, Salt Lake City, UT 84112, USA
42
Stevens AF, Stetson P. Theory of trust and acceptance of artificial intelligence technology (TrAAIT): An instrument to assess clinician trust and acceptance of artificial intelligence. J Biomed Inform 2023; 148:104550. PMID: 37981107; PMCID: PMC10815802; DOI: 10.1016/j.jbi.2023.104550.
Abstract
BACKGROUND Artificial intelligence and machine learning (AI/ML) technologies like generative and ambient AI solutions are proliferating in real-world healthcare settings. Clinician trust affects adoption and impact of these systems. Organizations need a validated method to assess factors underlying trust and acceptance of AI for clinical workflows in order to improve adoption and the impact of AI. OBJECTIVE Our study set out to develop and assess a novel clinician-centered model to measure and explain trust and adoption of AI technology. We hypothesized that clinicians' system-specific Trust in AI is the primary predictor of both Acceptance (i.e., willingness to adopt), and post-adoption Trusting Stance (i.e., general stance towards any AI system). We validated the new model at an urban comprehensive cancer center. We produced an easily implemented survey tool for measuring clinician trust and adoption of AI. METHODS This survey-based, cross-sectional, psychometric study included a model development phase and validation phase. Measurement was done with five-point ascending unidirectional Likert scales. The development sample included N = 93 clinicians (physicians, advanced practice providers, nurses) that used an AI-based communication application. The validation sample included N = 73 clinicians that used a commercially available AI-powered speech-to-text application for note-writing in an electronic health record (EHR). Analytical procedures included exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and partial least squares structural equation modeling (PLS-SEM). The Johnson-Neyman (JN) methodology was used to determine moderator effects. RESULTS In the fully moderated causal model, clinician trust explained a large amount of variance in their acceptance of a specific AI application (56%) and their post-adoption general trusting stance towards AI in general (36%). 
Moderators included organizational assurances, length of time using the application, and clinician age. The final validated instrument has 20 items and takes 5 min to complete on average. CONCLUSIONS We found that clinician acceptance of AI is determined by their degree of trust formed via information credibility, perceived application value, and reliability. The novel model, TrAAIT, explains factors underlying AI trustworthiness and acceptance for clinicians. With its easy-to-use instrument and Summative Score Dashboard, TrAAIT can help organizations implementing AI to identify and intercept barriers to clinician adoption in real-world settings.
Affiliation(s)
- Alexander F Stevens: Digital Products and Informatics Division, DigITs, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Pete Stetson: Digital Products and Informatics Division, DigITs, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
43
Chatterjee S, Bhattacharya M, Pal S, Lee SS, Chakraborty C. ChatGPT and large language models in orthopedics: from education and surgery to research. J Exp Orthop 2023; 10:128. PMID: 38038796; PMCID: PMC10692045; DOI: 10.1186/s40634-023-00700-1.
Abstract
ChatGPT has rapidly gained popularity since its release in November 2022. Currently, large language models (LLMs) and ChatGPT have been applied in various domains of medical science, including cardiology, nephrology, orthopedics, ophthalmology, gastroenterology, and radiology. Researchers are exploring the potential of LLMs and ChatGPT for clinicians and surgeons in every domain. This study discusses how ChatGPT can help orthopedic clinicians and surgeons perform various medical tasks. LLMs and ChatGPT can help the patient community by providing suggestions and diagnostic guidelines. In this study, the use of LLMs and ChatGPT to enhance and expand the field of orthopedics, including orthopedic education, surgery, and research, is explored. Present LLMs have several shortcomings, which are discussed herein. However, next-generation, domain-specific LLMs are expected to be more powerful and to improve patients' quality of life.
Affiliation(s)
- Srijan Chatterjee: Institute for Skeletal Aging & Orthopaedic Surgery, Hallym University-Chuncheon Sacred Heart Hospital, Chuncheon-Si, 24252, Gangwon-Do, Republic of Korea
- Manojit Bhattacharya: Department of Zoology, Fakir Mohan University, Vyasa Vihar, Balasore, 756020, Odisha, India
- Soumen Pal: School of Mechanical Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Sang-Soo Lee: Institute for Skeletal Aging & Orthopaedic Surgery, Hallym University-Chuncheon Sacred Heart Hospital, Chuncheon-Si, 24252, Gangwon-Do, Republic of Korea
- Chiranjib Chakraborty: Department of Biotechnology, School of Life Science and Biotechnology, Adamas University, Kolkata, West Bengal, 700126, India
44
Jacquemyn X, Kutty S, Manlhiot C. The Lifelong Impact of Artificial Intelligence and Clinical Prediction Models on Patients With Tetralogy of Fallot. CJC Pediatric and Congenital Heart Disease 2023; 2:440-452. PMID: 38161675; PMCID: PMC10755786; DOI: 10.1016/j.cjcpc.2023.08.005.
Abstract
Medical advancements in the diagnosis, surgical techniques, perioperative care, and continued care throughout childhood have transformed the outlook for individuals with tetralogy of Fallot (TOF), improving survival and shifting the perspective towards lifelong care. However, with a growing population of survivors, longstanding challenges have been accentuated, and new challenges have surfaced, necessitating a re-evaluation of TOF care. Availability of prenatal diagnostics, insufficient information from traditional imaging techniques, previously unforeseen medical complications, and debates surrounding optimal timing and indications for reintervention are among the emerging issues. To address these challenges, the integration of artificial intelligence and machine learning holds great promise as they have the potential to revolutionize patient management and positively impact lifelong outcomes for individuals with TOF. Innovative applications of artificial intelligence and machine learning have spanned across multiple domains of TOF care, including screening and diagnosis, automated image processing and interpretation, clinical risk stratification, and planning and performing cardiac interventions. By embracing these advancements and incorporating them into routine clinical practice, personalized medicine could be delivered, leading to the best possible outcomes for patients. In this review, we provide an overview of these evolving applications and emphasize the challenges, limitations, and future potential for integrating them into clinical care.
Affiliation(s)
- Xander Jacquemyn: Blalock-Taussig-Thomas Pediatric and Congenital Heart Center, Department of Pediatrics, Johns Hopkins School of Medicine, Baltimore, Maryland, USA; Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Shelby Kutty: Blalock-Taussig-Thomas Pediatric and Congenital Heart Center, Department of Pediatrics, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Cedric Manlhiot: Blalock-Taussig-Thomas Pediatric and Congenital Heart Center, Department of Pediatrics, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
45
Wang B, Asan O, Mansouri M. Perspectives of Patients With Chronic Diseases on Future Acceptance of AI-Based Home Care Systems: Cross-Sectional Web-Based Survey Study. JMIR Hum Factors 2023; 10:e49788. PMID: 37930780; PMCID: PMC10660233; DOI: 10.2196/49788.
Abstract
BACKGROUND Artificial intelligence (AI)-based home care systems and devices are being gradually integrated into health care delivery to benefit patients with chronic diseases. However, existing research mainly focuses on the technical and clinical aspects of AI application, with an insufficient investigation of patients' motivation and intention to adopt such systems. OBJECTIVE This study aimed to examine the factors that affect the motivation of patients with chronic diseases to adopt AI-based home care systems and provide empirical evidence for the proposed research hypotheses. METHODS We conducted a cross-sectional web-based survey with 222 patients with chronic diseases based on a hypothetical scenario. RESULTS The results indicated that patients have an overall positive perception of AI-based home care systems. Their attitudes toward the technology, perceived usefulness, and comfortability were found to be significant factors encouraging adoption, with a clear understanding of accountability being a particularly influential factor in shaping patients' attitudes toward their motivation to use these systems. However, privacy concerns persist as an indirect factor, affecting the perceived usefulness and comfortability, hence influencing patients' attitudes. CONCLUSIONS This study is one of the first to examine the motivation of patients with chronic diseases to adopt AI-based home care systems, offering practical insights for policy makers, care or technology providers, and patients. This understanding can facilitate effective policy formulation, product design, and informed patient decision-making, potentially improving the overall health status of patients with chronic diseases.
Collapse
Affiliation(s)
- Bijun Wang
- Department of Business Analytics and Data Science, Florida Polytechnic University, Lakeland, FL, United States
| | - Onur Asan
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States
| | - Mo Mansouri
- School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, United States
| |
Collapse
|
46
|
Hoyek S, Cruz NFSD, Patel NA, Al-Khersan H, Fan KC, Berrocal AM. Identification of novel biomarkers for retinopathy of prematurity in preterm infants by use of innovative technologies and artificial intelligence. Prog Retin Eye Res 2023; 97:101208. [PMID: 37611892 DOI: 10.1016/j.preteyeres.2023.101208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 08/16/2023] [Accepted: 08/18/2023] [Indexed: 08/25/2023]
Abstract
Retinopathy of prematurity (ROP) is a leading cause of preventable vision loss in preterm infants. While appropriate screening is crucial for early identification and treatment of ROP, current screening guidelines remain limited by inter-examiner variability in screening modalities, the absence of local protocols for ROP screening in some settings, a paucity of resources, and the increased survival of younger and smaller infants. This review summarizes the advancements and challenges of current innovative technologies, artificial intelligence (AI), and predictive biomarkers for the diagnosis and management of ROP. We provide a contemporary overview of AI-based models for detection of ROP, its severity, progression, and response to treatment. To address the transition from experimental settings to real-world clinical practice, challenges to the clinical implementation of AI for ROP are reviewed and potential solutions are proposed. The use of optical coherence tomography (OCT) and OCT angiography (OCTA) technology is also explored, providing evaluation of subclinical ROP characteristics that are often imperceptible on fundus examination. Furthermore, we explore several potential biomarkers to reduce the need for invasive procedures and to enhance diagnostic accuracy and treatment efficacy. Finally, we emphasize the need for a symbiotic integration of biologic and imaging biomarkers and AI in ROP screening, where the robustness of biomarkers in early disease detection is complemented by the predictive precision of AI algorithms.
Collapse
Affiliation(s)
- Sandra Hoyek
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
| | - Natasha F S da Cruz
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
| | - Nimesh A Patel
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
| | - Hasenin Al-Khersan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
| | - Kenneth C Fan
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
| | - Audina M Berrocal
- Bascom Palmer Eye Institute, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA.
| |
Collapse
|
47
|
Nwosu OI, Crowson MG, Rameau A. Artificial Intelligence Governance and Otolaryngology-Head and Neck Surgery. Laryngoscope 2023; 133:2868-2870. [PMID: 37658749 PMCID: PMC10592089 DOI: 10.1002/lary.31013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2023] [Accepted: 08/18/2023] [Indexed: 09/05/2023]
Abstract
This rapid communication highlights components of artificial intelligence governance in healthcare and suggests adopting key governance approaches in otolaryngology – head and neck surgery.
Collapse
Affiliation(s)
- Obinna I. Nwosu
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
| | - Matthew G. Crowson
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Deloitte Consulting, Boston, Massachusetts, USA
| | - Anaïs Rameau
- Department of Otolaryngology–Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medical College, New York, New York, USA
| |
Collapse
|
48
|
Alanzi T, Alanazi F, Mashhour B, Altalhi R, Alghamdi A, Al Shubbar M, Alamro S, Alshammari M, Almusmili L, Alanazi L, Alzahrani S, Alalouni R, Alanzi N, Alsharifa A. Surveying Hematologists' Perceptions and Readiness to Embrace Artificial Intelligence in Diagnosis and Treatment Decision-Making. Cureus 2023; 15:e49462. [PMID: 38152821 PMCID: PMC10751460 DOI: 10.7759/cureus.49462] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/23/2023] [Indexed: 12/29/2023] Open
Abstract
AIM This study aims to explore the critical dimension of assessing the perceptions and readiness of hematologists to embrace artificial intelligence (AI) technologies in their diagnostic and treatment decision-making processes. METHODS This study used a cross-sectional design for collecting data related to the perceptions and readiness of hematologists using a validated online questionnaire-based survey. Both hematologists (MD) and postgraduate MD students in hematology were included in the study. A total of 188 participants, including 35 hematologists (MD) and 153 MD hematology students, completed the survey. RESULTS Major challenges include "AI's level of autonomy" and "the complexity in the field of medicine." Major barriers and risks identified include "lack of trust," "management's level of understanding," "dehumanization of healthcare," and "reduction in physicians' skills." Statistically significant differences in perceptions of benefits including resources (p=0.0326, p<0.05) and knowledge (p=0.0262, p<0.05) were observed between genders. Older physicians were observed to be more concerned about the use of AI compared to younger physicians (p<0.05). CONCLUSION While AI use in hematology diagnosis and treatment decision-making is positively perceived, issues such as lack of trust, transparency, regulations, and poor AI awareness can affect the adoption of AI.
Collapse
Affiliation(s)
- Turki Alanzi
- Department of Health Information Management and Technology, College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | - Fehaid Alanazi
- Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
| | | | | | | | | | - Saud Alamro
- College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | | | | | - Lena Alanazi
- Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
| | | | - Raneem Alalouni
- College of Public Health, Imam Abdulrahman Bin Faisal University, Dammam, SAU
| | - Nouf Alanzi
- Department of Clinical Laboratory Sciences, College of Applied Medical Sciences, Jouf University, Sakakah, SAU
| | | |
Collapse
|
49
|
Li LT, Haley LC, Boyd AK, Bernstam EV. Technical/Algorithm, Stakeholder, and Society (TASS) barriers to the application of artificial intelligence in medicine: A systematic review. J Biomed Inform 2023; 147:104531. [PMID: 37884177 DOI: 10.1016/j.jbi.2023.104531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Revised: 09/14/2023] [Accepted: 10/22/2023] [Indexed: 10/28/2023]
Abstract
INTRODUCTION The use of artificial intelligence (AI), particularly machine learning and predictive analytics, has shown great promise in health care. Despite its strong potential, there has been limited use in health care settings. In this systematic review, we aim to determine the main barriers to successful implementation of AI in healthcare and discuss potential ways to overcome these challenges. METHODS We conducted a literature search in PubMed (1/1/2001-1/1/2023). The search was restricted to publications in the English language, and human study subjects. We excluded articles that did not discuss AI, machine learning, predictive analytics, and barriers to the use of these techniques in health care. Using grounded theory methodology, we abstracted concepts to identify major barriers to AI use in medicine. RESULTS We identified a total of 2,382 articles. After reviewing the 306 included papers, we developed 19 major themes, which we categorized into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). These themes included: Lack of Explainability, Need for Validation Protocols, Need for Standards for Interoperability, Need for Reporting Guidelines, Need for Standardization of Performance Metrics, Lack of Plan for Updating Algorithm, Job Loss, Skills Loss, Workflow Challenges, Loss of Patient Autonomy and Consent, Disturbing the Patient-Clinician Relationship, Lack of Trust in AI, Logistical Challenges, Lack of Strategic Plan, Lack of Cost-effectiveness Analysis and Proof of Efficacy, Privacy, Liability, Bias and Social Justice, and Education. CONCLUSION We identified 19 major barriers to the use of AI in healthcare and categorized them into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). Future studies should expand on barriers in pediatric care and focus on developing clearly defined protocols to overcome these barriers.
Collapse
Affiliation(s)
- Linda T Li
- Department of Surgery, Division of Pediatric Surgery, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, United States; McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States.
| | - Lauren C Haley
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States.
| | - Alexandra K Boyd
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States.
| | - Elmer V Bernstam
- McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States; McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States.
| |
Collapse
|
50
|
Tong W, Guan Y, Chen J, Huang X, Zhong Y, Zhang C, Zhang H. Artificial intelligence in global health equity: an evaluation and discussion on the application of ChatGPT, in the Chinese National Medical Licensing Examination. Front Med (Lausanne) 2023; 10:1237432. [PMID: 38020160 PMCID: PMC10656681 DOI: 10.3389/fmed.2023.1237432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Accepted: 10/09/2023] [Indexed: 12/01/2023] Open
Abstract
Background The demand for healthcare is increasing globally, with notable disparities in access to resources, especially in Asia, Africa, and Latin America. The rapid development of Artificial Intelligence (AI) technologies, such as OpenAI's ChatGPT, has shown promise in revolutionizing healthcare. However, potential challenges, including the need for specialized medical training, privacy concerns, and language bias, require attention. Methods To assess the applicability and limitations of ChatGPT in Chinese and English settings, we designed an experiment evaluating its performance in the 2022 National Medical Licensing Examination (NMLE) in China. For a standardized evaluation, we used the comprehensive written part of the NMLE, translated into English by a bilingual expert. All questions were input into ChatGPT, which provided answers and reasons for choosing them. Responses were evaluated for "information quality" using the Likert scale. Results ChatGPT demonstrated a correct response rate of 81.25% for Chinese and 86.25% for English questions. Logistic regression analysis showed that neither the difficulty nor the subject matter of the questions was a significant factor in AI errors. The Brier scores were 0.19 for Chinese and 0.14 for English, indicating good predictive accuracy. The average quality score for English responses was excellent (4.43 points), slightly higher than for Chinese (4.34 points). Conclusion While AI language models like ChatGPT show promise for global healthcare, language bias is a key challenge. Ensuring that such technologies are robustly trained and sensitive to multiple languages and cultures is vital. Further research into AI's role in healthcare, particularly in areas with limited resources, is warranted.
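The Brier score reported above is simply the mean squared difference between a model's predicted probability of being correct and the actual binary outcome, with lower values indicating better-calibrated predictions. As a minimal sketch (the probabilities and outcomes below are hypothetical illustrations, not the study's data):

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and
    binary outcomes (1 = answered correctly, 0 = answered wrongly).
    Ranges from 0 (perfect) to 1 (worst); lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical per-question confidences and correctness indicators
predicted = [0.9, 0.8, 0.6, 0.95]
actual = [1, 1, 0, 1]
print(round(brier_score(predicted, actual), 4))  # 0.1031
```

A score near 0.14-0.19, as in the study, means the model's stated confidence tracked its actual accuracy reasonably well.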
Collapse
Affiliation(s)
- Wenting Tong
- Department of Pharmacy, Gannan Healthcare Vocational College, Ganzhou, Jiangxi, China
| | - Yongfu Guan
- Department of Rehabilitation and Elderly Care, Gannan Healthcare Vocational College, Ganzhou, Jiangxi, China
| | - Jinping Chen
- Department of Rehabilitation and Elderly Care, Gannan Healthcare Vocational College, Ganzhou, Jiangxi, China
| | - Xixuan Huang
- Department of Mathematics, Xiamen University, Xiamen, Fujian, China
| | - Yuting Zhong
- Department of Anesthesiology, Gannan Medical University, Jiangxi, China
| | - Changrong Zhang
- Department of Chinese Medicine, Affiliated Hospital of Qinghai University, Xining, Qinghai, China
| | - Hui Zhang
- Department of Rehabilitation and Elderly Care, Gannan Healthcare Vocational College, Ganzhou, Jiangxi, China
- Chair of Endocrinology and Medical Sexology (ENDOSEX), Department of Experimental Medicine, University of Rome Tor Vergata, Rome, Italy
| |
Collapse
|