1
Colalillo JM, Smith J. Artificial intelligence in medicine: The rise of machine learning. Emerg Med Australas 2024; 36:628-631. [PMID: 39013808] [DOI: 10.1111/1742-6723.14459]
Affiliation(s)
- James M Colalillo
- Emergency Department, Fiona Stanley Hospital, Perth, Western Australia, Australia
- Joshua Smith
- Emergency Department, Dunedin Public Hospital, Dunedin, Otago, New Zealand
2
Graham Y, Spencer AE, Velez GE, Herbell K. Engaging Youth Voice and Family Partnerships to Improve Children's Mental Health Outcomes. Child Adolesc Psychiatr Clin N Am 2024; 33:343-354. [PMID: 38823808] [DOI: 10.1016/j.chc.2024.02.004]
Abstract
Promoting active participation of families and youth in mental health systems of care is the cornerstone of creating a more inclusive, effective, and responsive care network. This article focuses on the inclusion of parent and youth voice in transforming our mental health care system to promote increased engagement at all levels of service delivery. Youth and parent peer support delivery models, digital innovation, and technology not only empower the individuals involved, but also have the potential to enhance the overall efficacy of the mental health care system.
Affiliation(s)
- Yolanda Graham
- Morehouse School of Medicine, Devereux Advanced Behavioral Health, 444 Devereux Drive, Villanova, PA 19085, USA.
- Andrea E Spencer
- Ann & Robert H. Lurie Children's Hospital of Chicago, Northwestern University Feinberg School of Medicine, 225 East Chicago Avenue, Chicago, IL 60611, USA
- German E Velez
- New York-Presbyterian Hospital, Weill Cornell Medical College/Columbia University College of Physicians and Surgeons, 525 E. 68th Street, Box 140, New York, NY 10065, USA
- Kayla Herbell
- Martha S. Pitzer Center for Women, Children, and Youth, The Ohio State University, 1577 Neil Avenue, Columbus, OH 43210, USA
3
Olver IN. Ethics of artificial intelligence in supportive care in cancer. Med J Aust 2024; 220:499-501. [PMID: 38714360] [DOI: 10.5694/mja2.52297]
4
Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Facilitating public involvement in research about healthcare AI: A scoping review of empirical methods. Int J Med Inform 2024; 186:105417. [PMID: 38564959] [DOI: 10.1016/j.ijmedinf.2024.105417]
Abstract
OBJECTIVE With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance. MATERIALS AND METHODS We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations. RESULTS Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI. DISCUSSION Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation. CONCLUSION This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.
Affiliation(s)
- Emma Kellie Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Rebecca Bosward
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
5
Scott IA, van der Vegt A, Lane P, McPhail S, Magrabi F. Achieving large-scale clinician adoption of AI-enabled decision support. BMJ Health Care Inform 2024; 31:e100971. [PMID: 38816209] [PMCID: PMC11141172] [DOI: 10.1136/bmjhci-2023-100971]
Abstract
Computerised decision support (CDS) tools enabled by artificial intelligence (AI) seek to enhance accuracy and efficiency of clinician decision-making at the point of care. Statistical models developed using machine learning (ML) underpin most current tools. However, despite thousands of models and hundreds of regulator-approved tools internationally, large-scale uptake into routine clinical practice has proved elusive. While underdeveloped system readiness and investment in AI/ML within Australia and perhaps other countries are impediments, clinician ambivalence towards adopting these tools at scale could be a major inhibitor. We propose a set of principles and several strategic enablers for obtaining broad clinician acceptance of AI/ML-enabled CDS tools.
Affiliation(s)
- Ian A Scott
- Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Brisbane, Queensland, Australia
- Centre for Health Services Research, The University of Queensland Faculty of Medicine and Biomedical Sciences, Brisbane, Queensland, Australia
- Anton van der Vegt
- Digital Health Centre, The University of Queensland Faculty of Medicine and Biomedical Sciences, Herston, Queensland, Australia
- Paul Lane
- Safety, Quality and Innovation, The Prince Charles Hospital, Brisbane, Queensland, Australia
- Steven McPhail
- Australian Centre for Health Services Innovation, Queensland University of Technology Faculty of Health, Brisbane, Queensland, Australia
- Farah Magrabi
- Macquarie University, Sydney, New South Wales, Australia
6
Carter SM, Aquino YSJ, Carolan L, Frost E, Degeling C, Rogers WA, Scott IA, Bell KJ, Fabrianesi B, Magrabi F. How should artificial intelligence be used in Australian health care? Recommendations from a citizens' jury. Med J Aust 2024; 220:409-416. [PMID: 38629188] [DOI: 10.5694/mja2.52283]
Abstract
OBJECTIVE To support a diverse sample of Australians to make recommendations about the use of artificial intelligence (AI) technology in health care. STUDY DESIGN Citizens' jury, deliberating the question: "Under which circumstances, if any, should artificial intelligence be used in Australian health systems to detect or diagnose disease?" SETTING, PARTICIPANTS Thirty Australian adults recruited by Sortition Foundation using random invitation and stratified selection to reflect population proportions by gender, age, ancestry, highest level of education, and residential location (state/territory; urban, regional, rural). The jury process took 18 days (16 March - 2 April 2023): fifteen days online and three days face-to-face in Sydney, where the jurors, both in small groups and together, were informed about and discussed the question, and developed recommendations with reasons. Jurors received extensive information: a printed handbook, online documents, and recorded presentations by four expert speakers. Jurors asked questions and received answers from the experts during the online period of the process, and during the first day of the face-to-face meeting. MAIN OUTCOME MEASURES Jury recommendations, with reasons. RESULTS The jurors recommended an overarching, independently governed charter and framework for health care AI. The other nine recommendation categories concerned balancing benefits and harms; fairness and bias; patients' rights and choices; clinical governance and training; technical governance and standards; data governance and use; open source software; AI evaluation and assessment; and education and communication. CONCLUSIONS The deliberative process supported a nationally representative sample of citizens to construct recommendations about how AI in health care should be developed, used, and governed. Recommendations derived using such methods could guide clinicians, policy makers, AI researchers and developers, and health service users to develop approaches that ensure trustworthy and responsible use of this technology.
Affiliation(s)
- Stacy M Carter
- University of Wollongong, Wollongong, NSW
- Australian Centre for Health Engagement, Evidence and Values, University of Wollongong, Wollongong, NSW
- Yves Saint James Aquino
- University of Wollongong, Wollongong, NSW
- Australian Centre for Health Engagement, Evidence and Values, University of Wollongong, Wollongong, NSW
- Lucy Carolan
- University of Wollongong, Wollongong, NSW
- Australian Centre for Health Engagement, Evidence and Values, University of Wollongong, Wollongong, NSW
- Emma Frost
- University of Wollongong, Wollongong, NSW
- Australian Centre for Health Engagement, Evidence and Values, University of Wollongong, Wollongong, NSW
- Chris Degeling
- University of Wollongong, Wollongong, NSW
- Australian Centre for Health Engagement, Evidence and Values, University of Wollongong, Wollongong, NSW
- Ian A Scott
- University of Queensland, Brisbane, QLD
- Princess Alexandra Hospital, Brisbane, QLD
- Belinda Fabrianesi
- University of Wollongong, Wollongong, NSW
- Australian Centre for Health Engagement, Evidence and Values, University of Wollongong, Wollongong, NSW
- Farah Magrabi
- Australian Institute for Health Innovation, Macquarie University, Sydney, NSW
7
Castonguay A, Wagner G, Motulsky A, Paré G. AI maturity in health care: An overview of 10 OECD countries. Health Policy 2024; 140:104938. [PMID: 38157771] [DOI: 10.1016/j.healthpol.2023.104938]
Abstract
BACKGROUND Artificial Intelligence (AI) and its applications in health care are on the agenda of policymakers around the world, but a major challenge remains, namely, to set policies that will ensure wide acceptance and capture the value of AI while mitigating associated risks. OBJECTIVE This study aims to provide an overview of how OECD countries strategize about how to integrate AI into health care and to determine their actual level of AI maturity. METHODS A scan of government-based AI strategies and initiatives adopted in 10 proactive OECD countries was conducted. Available documentation was analyzed, using the Broadband Commission for Sustainable Development's roadmap to AI maturity as a conceptual framework. RESULTS The findings reveal that most selected OECD countries are at the Emerging stage (Level 2) of AI in health maturity. Despite considerable funding and a variety of approaches to the development of an AI in health supporting ecosystem, only the United Kingdom and United States have reached the highest level of maturity, an integrated and collaborative AI in health ecosystem (Level 3). CONCLUSION Despite policymakers looking for opportunities to expedite efforts related to AI, there is no one-size-fits-all approach to ensure the sustainable development and safe use of AI in health. The principles of equifinality and mindfulness must thus guide policymaking in the development of AI in health care.
Affiliation(s)
- Alexandre Castonguay
- Faculté des sciences infirmières, Pavillon Marguerite-d'Youville, C.P. 6128 succ. Centre-ville, Montréal, Québec, H3C 3J7, Canada.
- Gerit Wagner
- Faculty Information Systems and Applied Computer Sciences, University of Bamberg, Kapuzinerstraße 16, D-96047, Bamberg, Germany
- Aude Motulsky
- École de Santé Publique de l'Université de Montréal, C.P. 6128 succursale centre-ville, Montreal, Québec, H3C 3J7, Canada
- Guy Paré
- Département de technologies de l'information, HEC Montréal. 3000, chemin de la Côte-Sainte-Catherine, Montréal, Québec, H3T 2A7, Canada
8
Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. [PMID: 37949020] [DOI: 10.1016/j.socscimed.2023.116357]
Abstract
INTRODUCTION Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from patients, the general public and health professionals' perspectives to understand these issues from multiple perspectives. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 Jan 2001 and 24 Aug 2021 was conducted on six bibliographic databases. Data were extracted and classified into different themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) Human - AI relationship. RESULTS The final search identified 7,490 different records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinical involvement in the development of AI was emphasised. To help successfully implement AI in health care, most participants envisioned that an investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, there remained concerns about sharing data with insurance or technology companies. One key domain under this theme was the question of who should be held accountable in the case of adverse events arising from using AI. CONCLUSIONS While overall positivity persists in attitudes and preferences toward AI use in healthcare, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues to look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo
- Centre for Health Economics, Monash University, Australia.
- Gang Chen
- Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do
- Department of Economics, Monash University, Australia
- Maame Esi Woode
- Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia
9
Peh W, Saw A. Artificial Intelligence: Impact and Challenges to Authors, Journals and Medical Publishing. Malays Orthop J 2023; 17:1-4. [PMID: 38107365] [PMCID: PMC10723007] [DOI: 10.5704/moj.2311.001]
Abstract
Artificial intelligence (AI)-assisted technologies are here to stay and cannot be ignored. These tools are able to generate highly realistic, human-like text and perform a wide range of useful language tasks across many applications. They have the potential to expedite innovation in health care and can aid in promoting equity and diversity in research by overcoming language barriers. When using these AI tools, authors must take responsibility for the output and originality of their work, as publishers expect all content to be generated by human authors unless there is a declaration to the contrary. Authors must disclose how AI tools have been used, and ensure appropriate attribution of all the text, images, and audio-visual material. The responsible use of AI language models and transparent reporting of how these tools were used in the creation of information and publication are vital to promote and protect the credibility and integrity of medical research, and trust in medical knowledge. Educating postgraduate and undergraduate students, researchers and authors on the applications and best usage of AI-assisted technologies, together with the importance of critical thinking, integrity and strict adherence to ethical principles, are key steps that need to be undertaken.
Affiliation(s)
- Wcg Peh
- Department of Diagnostic Radiology, Khoo Teck Puat Hospital, Singapore
- A Saw
- Department of Orthopaedic Surgery (NOCERAL), University of Malaya, Kuala Lumpur, Malaysia
10
Squires E, Bacchi S, Maddison J. We need to chat about artificial intelligence. Med J Aust 2023; 219:394. [PMID: 37644689] [DOI: 10.5694/mja2.52081]
11
Leung TI, de Azevedo Cardoso T, Mavragani A, Eysenbach G. Best Practices for Using AI Tools as an Author, Peer Reviewer, or Editor. J Med Internet Res 2023; 25:e51584. [PMID: 37651164] [PMCID: PMC10502596] [DOI: 10.2196/51584]
Abstract
The ethics of generative artificial intelligence (AI) use in scientific manuscript content creation has become a serious matter of concern in the scientific publishing community. Generative AI has computationally become capable of elaborating research questions; refining programming code; generating text in scientific language; and generating images, graphics, or figures. However, this technology should be used with caution. In this editorial, we outline the current state of editorial policies on generative AI or chatbot use in authorship, peer review, and editorial processing of scientific and scholarly manuscripts. Additionally, we provide JMIR Publications' editorial policies on these issues. We further detail JMIR Publications' approach to the applications of AI in the editorial process for manuscripts in review in a JMIR Publications journal.
Affiliation(s)
- Tiffany I Leung
- JMIR Publications, Inc, Toronto, ON, Canada
- Department of Internal Medicine (adjunct), Southern Illinois University School of Medicine, Springfield, IL, United States
- Gunther Eysenbach
- JMIR Publications, Inc, Toronto, ON, Canada
- University of Victoria, Victoria, BC, Canada