1. Colalillo JM, Smith J. Artificial intelligence in medicine: The rise of machine learning. Emerg Med Australas 2024; 36:628-631. [PMID: 39013808 DOI: 10.1111/1742-6723.14459]
Affiliation(s)
- James M Colalillo
- Emergency Department, Fiona Stanley Hospital, Perth, Western Australia, Australia
- Joshua Smith
- Emergency Department, Dunedin Public Hospital, Dunedin, Otago, New Zealand
2. Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Facilitating public involvement in research about healthcare AI: A scoping review of empirical methods. Int J Med Inform 2024; 186:105417. [PMID: 38564959 DOI: 10.1016/j.ijmedinf.2024.105417]
Abstract
OBJECTIVE With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance.
MATERIALS AND METHODS We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations.
RESULTS Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI.
DISCUSSION Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation.
CONCLUSION This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.
Affiliation(s)
- Emma Kellie Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Rebecca Bosward
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia.
3. Lee YM, Stretton B, Tan S, Gupta A, Kovoor J, Bacchi S, Lim W, Chan WO. Captive markets and medical artificial intelligence. J Med Imaging Radiat Oncol 2024; 68:278-281. [PMID: 38563301 DOI: 10.1111/1754-9485.13648]
Affiliation(s)
- Yong Min Lee
- Ophthalmology Department, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, South Australia, Australia
- Brandon Stretton
- Ophthalmology Department, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, South Australia, Australia
- Sheryn Tan
- Ophthalmology Department, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, South Australia, Australia
- Aashray Gupta
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, South Australia, Australia
- Gold Coast University Hospital, Gold Coast, Queensland, Australia
- Joshua Kovoor
- Ophthalmology Department, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, South Australia, Australia
- Stephen Bacchi
- Ophthalmology Department, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Flinders University, Adelaide, South Australia, Australia
- Wanyin Lim
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, South Australia, Australia
- Department of Radiology, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Weng Onn Chan
- Ophthalmology Department, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, South Australia, Australia
4. Stewart J, Freeman S, Eroglu E, Dumitrascu N, Lu J, Goudie A, Sprivulis P, Akhlaghi H, Tran V, Sanfilippo F, Celenza A, Than M, Fatovich D, Walker K, Dwivedi G. Attitudes towards artificial intelligence in emergency medicine. Emerg Med Australas 2024; 36:252-265. [PMID: 38044755 DOI: 10.1111/1742-6723.14345]
Abstract
OBJECTIVE To assess Australian and New Zealand emergency clinicians' attitudes towards the use of artificial intelligence (AI) in emergency medicine.
METHODS We undertook a qualitative interview-based study based on grounded theory. Participants were recruited through ED internal mailing lists, the Australasian College for Emergency Medicine Bulletin, and the research teams' personal networks. Interviews were transcribed, coded and themes presented.
RESULTS Twenty-five interviews were conducted between July 2021 and May 2022. Thematic saturation was achieved after 22 interviews. Most participants were from either Western Australia (52%) or Victoria (16%) and were consultants (96%). More participants reported feeling optimistic (10/25) than neutral (6/25), pessimistic (2/25) or mixed (7/25) towards the use of AI in the ED. A minority expressed scepticism regarding the feasibility or value of implementing AI in the ED. Multiple potential risks and ethical issues were discussed by participants, including skill loss from overreliance on AI, algorithmic bias, patient privacy and concerns over liability. Participants also discussed perceived inadequacies in existing information technology systems. Participants felt that AI technologies would be used as decision support tools and not replace the roles of emergency clinicians. Participants were not concerned about the impact of AI on their job security. Most (17/25) participants thought that AI would impact emergency medicine within the next 10 years.
CONCLUSIONS Emergency clinicians interviewed were generally optimistic about the use of AI in emergency medicine, so long as it is used as a decision support tool and they maintain the ability to override its recommendations.
Affiliation(s)
- Jonathon Stewart
- School of Medicine, The University of Western Australia, Perth, Western Australia, Australia
- Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Samuel Freeman
- SensiLab, Monash University, Melbourne, Victoria, Australia
- Department of Emergency Medicine, St Vincent's Hospital Melbourne, Melbourne, Victoria, Australia
- Ege Eroglu
- School of Medicine, The University of Notre Dame Australia, Fremantle, Western Australia, Australia
- Nicole Dumitrascu
- School of Medicine, The University of Notre Dame Australia, Fremantle, Western Australia, Australia
- Juan Lu
- Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Department of Computer Science and Software Engineering, The University of Western Australia, Perth, Western Australia, Australia
- Adrian Goudie
- Department of Emergency Medicine, Fiona Stanley Hospital, Perth, Western Australia, Australia
- Peter Sprivulis
- Strategy and Governance Division, Western Australia Department of Health, Perth, Western Australia, Australia
- Hamed Akhlaghi
- Department of Emergency Medicine, St Vincent's Hospital Melbourne, Melbourne, Victoria, Australia
- Viet Tran
- School of Medicine, University of Tasmania, Hobart, Tasmania, Australia
- Department of Emergency Medicine, Royal Hobart Hospital, Hobart, Tasmania, Australia
- Frank Sanfilippo
- School of Population and Global Health, The University of Western Australia, Perth, Western Australia, Australia
- Antonio Celenza
- School of Medicine, The University of Western Australia, Perth, Western Australia, Australia
- Department of Emergency Medicine, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Martin Than
- Department of Emergency Medicine, Christchurch Hospital, Christchurch, New Zealand
- Daniel Fatovich
- Emergency Medicine, Royal Perth Hospital, The University of Western Australia, Perth, Western Australia, Australia
- Centre for Clinical Research in Emergency Medicine, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Katie Walker
- School of Clinical Sciences at Monash Health, Monash University, Melbourne, Victoria, Australia
- Girish Dwivedi
- School of Medicine, The University of Western Australia, Perth, Western Australia, Australia
- Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Department of Cardiology, Fiona Stanley Hospital, Perth, Western Australia, Australia
5. Kovoor JG, Bacchi S, Sharma P, Sharma S, Kumawat M, Stretton B, Gupta AK, Chan W, Abou-Hamden A, Maddern GJ. Artificial intelligence for surgical services in Australia and New Zealand: opportunities, challenges and recommendations. Med J Aust 2024; 220:234-237. [PMID: 38321813 DOI: 10.5694/mja2.52225]
Affiliation(s)
- Joshua G Kovoor
- University of Adelaide, Adelaide, SA
- Ballarat Base Hospital, Ballarat, VIC
- Weng Onn Chan
- University of Adelaide, Adelaide, SA
- Queen Elizabeth Hospital, Adelaide, SA
- Amal Abou-Hamden
- University of Adelaide, Adelaide, SA
- Royal Adelaide Hospital, Adelaide, SA
- Guy J Maddern
- University of Adelaide, Adelaide, SA
- Queen Elizabeth Hospital, Adelaide, SA
6. van der Vegt A, Campbell V, Zuccon G. Why clinical artificial intelligence is (almost) non-existent in Australian hospitals and how to fix it. Med J Aust 2024; 220:172-175. [PMID: 38146620 DOI: 10.5694/mja2.52195]
Affiliation(s)
- Anton van der Vegt
- Centre for Health Services Research, University of Queensland, Brisbane, QLD
7. Gould DJ, Dowsey MM, Glanville-Hearst M, Spelman T, Bailey JA, Choong PFM, Bunzli S. Patients' Views on AI for Risk Prediction in Shared Decision-Making for Knee Replacement Surgery: Qualitative Interview Study. J Med Internet Res 2023; 25:e43632. [PMID: 37721797 PMCID: PMC10546266 DOI: 10.2196/43632]
Abstract
BACKGROUND The use of artificial intelligence (AI) in decision-making around knee replacement surgery is increasing, and this technology holds promise to improve the prediction of patient outcomes. Ambiguity surrounds the definition of AI, and there are mixed views on its application in clinical settings.
OBJECTIVE In this study, we aimed to explore the understanding and attitudes of patients who underwent knee replacement surgery regarding AI in the context of risk prediction for shared clinical decision-making.
METHODS This qualitative study involved patients who underwent knee replacement surgery at a tertiary referral center for joint replacement surgery. The participants were selected based on their age and sex. Semistructured interviews explored the participants' understanding of AI and their opinions on its use in shared clinical decision-making. Data collection and reflexive thematic analyses were conducted concurrently. Recruitment continued until thematic saturation was achieved.
RESULTS Thematic saturation was achieved with 19 interviews and confirmed with 1 additional interview, resulting in 20 participants being interviewed (female participants: n=11, 55%; male participants: n=9, 45%; median age: 66 years). A total of 11 (55%) participants had a substantial postoperative complication. Three themes captured the participants' understanding of AI and their perceptions of its use in shared clinical decision-making. The theme Expectations captured the participants' views of themselves as individuals with the right to self-determination as they sought therapeutic solutions tailored to their circumstances, needs, and desires, including whether to use AI at all. The theme Empowerment highlighted the potential of AI to enable patients to develop realistic expectations and equip them with personalized risk information to discuss in shared decision-making conversations with the surgeon. The theme Partnership captured the importance of symbiosis between AI and clinicians because AI has varied levels of interpretability and understanding of human emotions and empathy.
CONCLUSIONS Patients who underwent knee replacement surgery in this study had varied levels of familiarity with AI and diverse conceptualizations of its definitions and capabilities. Educating patients about AI through nontechnical explanations and illustrative scenarios could help inform their decision to use it for risk prediction in the shared decision-making process with their surgeon. These findings could be used in the process of developing a questionnaire to ascertain the views of patients undergoing knee replacement surgery on the acceptability of AI in shared clinical decision-making. Future work could investigate the accuracy of this patient group's understanding of AI, beyond their familiarity with it, and how this influences their acceptance of its use. Surgeons may play a key role in finding a place for AI in the clinical setting as the uptake of this technology in health care continues to grow.
Affiliation(s)
- Daniel J Gould
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Michelle M Dowsey
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Tim Spelman
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- James A Bailey
- School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
- Peter F M Choong
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Samantha Bunzli
- School of Health Sciences and Social Work, Griffith University, Brisbane, Australia
8. Pelly M, Fatehi F, Liew D, Verdejo-Garcia A. Artificial intelligence for secondary prevention of myocardial infarction: A qualitative study of patient and health professional perspectives. Int J Med Inform 2023; 173:105041. [PMID: 36934609 DOI: 10.1016/j.ijmedinf.2023.105041]
Abstract
BACKGROUND Artificial intelligence (AI) has potential to improve self-management of several chronic conditions. However, the perspectives of patients and healthcare professionals regarding AI-enabled health management programs, which are key to successful implementation, remain poorly understood.
PURPOSE To explore the opinions of people with a history of myocardial infarction (PHMI) and health professionals on the use of AI for secondary prevention of MI.
PROCEDURE Three rounds of focus groups were conducted via videoconferencing with 38 participants: 22 PHMI and 16 health professionals.
FINDINGS We identified 21 concepts stemming from participants' views, which we classified into five categories: Trust; Expected Functions; Adoption; Concerns; and Perceived Benefits. Trust covered the credibility of information and safety to believe health advice. Expected Functions covered tailored feedback and personalised advice. Adoption included usability features and overall interest in AI. Concerns originated from previous negative experience with AI. Perceived Benefits included the usefulness of AI to provide advice when regular contact with healthcare services is not feasible. Health professionals were more optimistic than PHMI about the usefulness of AI for improving health behaviour.
CONCLUSIONS Altogether, our findings provide key insights from end-users to improve the likelihood of successful implementation and adoption of AI-enabled systems in the context of MI, as an exemplar of broader applications in chronic disease management.
Affiliation(s)
- Melissa Pelly
- School of Psychological Sciences and Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC 3800, Australia.
- Farhad Fatehi
- School of Psychological Sciences and Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC 3800, Australia.
- Danny Liew
- School of Public Health and Preventive Medicine, Monash University, Clayton, VIC 3800, Australia; The Alfred Hospital, 55 Commercial Rd, Melbourne, VIC 3800, Australia.
- Antonio Verdejo-Garcia
- School of Psychological Sciences and Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC 3800, Australia.
9. Smith C, Vajdic CM, Stephenson N. Centring equity in data-driven public health: a call for guiding principles to support the equitable design and outcomes of Australia's data integration systems. Med J Aust 2023; 218:341-343. [PMID: 36990108 DOI: 10.5694/mja2.51902]
10. Aquino YSJ, Carter SM, Houssami N, Braunack-Mayer A, Win KT, Degeling C, Wang L, Rogers WA. Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives. J Med Ethics 2023:jme-2022-108850. [PMID: 36823101 DOI: 10.1136/jme-2022-108850]
Abstract
BACKGROUND There is growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race).
OBJECTIVES Our objectives are to canvass the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias.
METHODOLOGY The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers.
RESULTS Findings reveal considerable divergence in views on three key issues. First, views on whether bias is a problem in healthcare AI varied, with most participants agreeing bias is a problem (which we call the bias-critical view), a small number believing the opposite (the bias-denial view), and some arguing that the benefits of AI outweigh any harms or wrongs arising from the bias problem (the bias-apologist view). Second, there was disagreement on the strategies to mitigate bias, and on who is responsible for such strategies. Finally, there were divergent views on whether to include or exclude sociocultural identifiers (eg, race, ethnicity or gender-diverse identities) in the development of AI as a way to mitigate bias.
CONCLUSION/SIGNIFICANCE Based on the views of participants, we set out responses that stakeholders might pursue, including greater interdisciplinary collaboration, tailored stakeholder engagement activities, empirical studies to understand algorithmic bias, strategies to modify dominant approaches in AI development such as the use of participatory methods, and increased diversity and inclusion in research teams and research participant recruitment and selection.
Affiliation(s)
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Nehmat Houssami
- School of Public Health, The University of Sydney, Sydney, New South Wales, Australia
- The Daffodil Centre, Sydney, New South Wales, Australia
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Khin Than Win
- Centre for Persuasive Technology and Society, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Chris Degeling
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Lei Wang
- Centre for Artificial Intelligence, School of Computing and Information Technology, University of Wollongong, Wollongong, New South Wales, Australia
- Wendy A Rogers
- Department of Philosophy and School of Medicine, Macquarie University, Sydney, New South Wales, Australia
11. Carter SM, Carolan L, Saint James Aquino Y, Frazer H, Rogers WA, Hall J, Degeling C, Braunack-Mayer A, Houssami N. Australian women's judgements about using artificial intelligence to read mammograms in breast cancer screening. Digit Health 2023; 9:20552076231191057. [PMID: 37559826 PMCID: PMC10408316 DOI: 10.1177/20552076231191057]
Abstract
Objective Mammographic screening for breast cancer is an early use case for artificial intelligence (AI) in healthcare. This is an active area of research, mostly focused on the development and evaluation of individual algorithms. A growing normative literature argues that AI systems should reflect human values, but it is unclear what this requires in specific AI implementation scenarios. Our objective was to understand women's values regarding the use of AI to read mammograms in breast cancer screening.
Methods We ran eight online discussion groups with a total of 50 women, focused on their expectations and normative judgements regarding the use of AI in breast screening.
Results Although women were positive about the potential of breast screening AI, they argued strongly that humans must remain as central actors in breast screening systems and consistently expressed high expectations of the performance of breast screening AI. Women expected clear lines of responsibility for decision-making, to be able to contest decisions, and for AI to perform equally well for all programme participants. Women often imagined both that AI might replace radiographers and that AI implementation might allow more women to be screened: screening programmes will need to communicate carefully about these issues.
Conclusions To meet women's expectations, screening programmes should delay implementation until there is strong evidence that the use of AI systems improves screening performance, should ensure that human expertise and responsibility remain central in screening programmes, and should avoid using AI in ways that exacerbate inequities.
Affiliation(s)
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
- Lucy Carolan
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
- Helen Frazer
- St Vincent's Hospital BreastScreen, BreastScreen Victoria, Fitzroy, Victoria, Australia
- Wendy A Rogers
- Philosophy Department and School of Medicine, Macquarie University, Sydney, NSW, Australia
- Julie Hall
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
- Chris Degeling
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
- Nehmat Houssami
- Daffodil Centre, University of Sydney, Joint Venture with Cancer Council NSW, Sydney, NSW, Australia
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia