1
Copp T, van Nieuwenhoven T, McCaffery KJ, Hammarberg K, Cvejic E, Doust J, Lensen S, Peate M, Augustine L, van der Mee F, Mol BW, Lieberman D, Jansen J. Women's interest, knowledge, and attitudes relating to anti-Mullerian hormone testing: a randomized controlled trial. Hum Reprod 2024:deae147. PMID: 39069635. DOI: 10.1093/humrep/deae147.
Abstract
STUDY QUESTION What is the impact of co-designed, evidence-based information regarding the anti-Mullerian hormone (AMH) test on women's interest in having the test? SUMMARY ANSWER Women who viewed the evidence-based information about the AMH test had lower interest in having an AMH test than women who viewed information produced by an online company selling the test direct-to-consumers. WHAT IS KNOWN ALREADY Online information about AMH testing often has unfounded claims about its ability to predict fertility and conception, and evidence suggests that women seek out and are recommended the AMH test as a measure of their fertility potential. STUDY DESIGN, SIZE, DURATION An online randomized trial was conducted from November to December 2022. Women were randomized (double-blind, equal allocation) to view one of two types of information: co-designed, evidence-based information about the AMH test (intervention), or existing information about the AMH test from a website which markets the test direct-to-consumers (control). A total of 967 women were included in the final analysis. PARTICIPANTS/MATERIALS, SETTING, METHODS Participants were women recruited through an online panel, who were aged 25-40 years, living in Australia or The Netherlands, had never given birth, were not currently pregnant but would like to have a child now or in the future, and had never had an AMH test. The primary outcome was interest in having an AMH test (seven-point scale; 1 = definitely NOT interested to 7 = definitely interested). Secondary outcomes included attitudes, knowledge, and psychosocial and behavioural outcomes relating to AMH testing. 
MAIN RESULTS AND THE ROLE OF CHANCE Women who viewed the evidence-based information about the AMH test had lower interest in having an AMH test (MD = 1.05, 95% CI = 0.83-1.30), less positive attitudes towards the test (MD = 1.29, 95% CI = 4.57-5.70), and higher knowledge about the test (MD = 0.75, 95% CI = 0.71-0.82) than women who viewed the control information. LIMITATIONS, REASONS FOR CAUTION The sample was more highly educated than the broader Australian and Dutch populations, and some measures (e.g. influence on family planning) were hypothetical in nature. WIDER IMPLICATIONS OF THE FINDINGS Women have higher knowledge of, and lower interest in having, the AMH test when given evidence-based information about the test and its limitations. Despite previous studies suggesting women are enthusiastic about AMH testing to learn about their fertility potential, we demonstrate that this enthusiasm does not hold when they are informed about the test's limitations. STUDY FUNDING/COMPETING INTEREST(S) This project was supported by an NHMRC Emerging Leader Research Fellowship (2009419) and the Australian Health Research Alliance's Women's Health Research, Translation and Impact Network EMCR award. B.W.M. reports consultancy for ObsEva and Merck and travel support from Merck. D.L. is the Medical Director of, and holds stock in, City Fertility NSW and reports consultancy for Organon and honoraria from Ferring, Besins, and Merck. K.H. reports consultancy and travel support from Merck and Organon. K.M. is a director of Health Literacy Solutions, which owns a licence of the Sydney Health Literacy Lab Health Literacy Editor. No other relevant disclosures exist. TRIAL REGISTRATION NUMBER ACTRN12622001136796. TRIAL REGISTRATION DATE 17 August 2022. DATE OF FIRST PATIENT'S ENROLMENT 21 November 2022.
Affiliation(s)
- T Copp
- Faculty of Medicine and Health, School of Public Health, The University of Sydney, Sydney, NSW, Australia
- T van Nieuwenhoven
- Faculty of Health, Medicine and Life Sciences, School of Public Health and Primary Care, Maastricht University, Maastricht, The Netherlands
- K J McCaffery
- Faculty of Medicine and Health, School of Public Health, The University of Sydney, Sydney, NSW, Australia
- K Hammarberg
- School of Public Health and Preventive Medicine, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC, Australia
- E Cvejic
- Faculty of Medicine and Health, School of Public Health, The University of Sydney, Sydney, NSW, Australia
- J Doust
- Australian Women and Girls' Health Research Centre, School of Public Health, Faculty of Medicine, The University of Queensland, Brisbane, QLD, Australia
- S Lensen
- Department of Obstetrics and Gynaecology, Royal Women's Hospital, The University of Melbourne, Melbourne, VIC, Australia
- M Peate
- Department of Obstetrics and Gynaecology, Royal Women's Hospital, The University of Melbourne, Melbourne, VIC, Australia
- L Augustine
- Faculty of Medicine and Health, School of Public Health, The University of Sydney, Sydney, NSW, Australia
- F van der Mee
- Faculty of Health, Medicine and Life Sciences, School of Public Health and Primary Care, Maastricht University, Maastricht, The Netherlands
- B W Mol
- Department of Obstetrics and Gynaecology, Monash University, Melbourne, VIC, Australia
- Aberdeen Centre for Women's Health Research, School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- D Lieberman
- City Fertility Centre Pty Ltd, Sydney, NSW, Australia
- J Jansen
- Faculty of Health, Medicine and Life Sciences, School of Public Health and Primary Care, Maastricht University, Maastricht, The Netherlands
2
Ayre J, Kumarage R, Jenkins H, McCaffery KJ, Maher CG, Hancock MJ. A Decision Aid for Patients Considering Surgery for Sciatica: Codesign and User-Testing With Patients and Clinicians. Health Expect 2024;27:e14111. PMID: 38896009. PMCID: PMC11186058. DOI: 10.1111/hex.14111.
Abstract
BACKGROUND Surgery can help patients with leg pain caused by sciatica recover faster, but by 12 months outcomes are similar to nonsurgical management. For many, the decision to have surgery may require reflection, and patient decision aids are evidence-based clinical tools that can help guide patients through this decision. OBJECTIVE The aim of this study was to develop and refine a decision aid for patients with sciatica who are deciding whether to have surgery or 'wait and see' (i.e., try nonsurgical management first). DESIGN Semistructured interviews with a think-aloud user-testing protocol. PARTICIPANTS 20 clinicians and 20 patients with lived experience of low back pain or sciatica. OUTCOME MEASURES Items from the Technology Acceptance Model, the Preparation for Decision Making Scale, and the Decision Quality Instrument for Herniated Disc 2.0 (knowledge instrument). METHODS The prototype integrated relevant research with working group perspectives, decision aid standards and health literacy guidelines. The research team refined the prototype through seven rounds of user-testing, which involved discussing user-testing feedback and implementing changes before progressing to the next round. RESULTS As a result of working group feedback, the decision aid was divided into sections: before, during and after a visit to the surgeon. Across all rounds of user-testing, clinicians rated the resource 5.9/7 (SD = 1.0) for perceived usefulness and 6.0/7 (SD = 0.8) for perceived ease of use. Patients reported the decision aid was easy to understand, on average correctly answering 3.4/5 knowledge questions (SD = 1.2) about surgery for sciatica. The grade reading score for the website was 9.0. Patients scored highly on preparation for decision-making (4.4/5, SD = 0.7), suggesting strong potential to empower patients. Interview feedback showed that patients and clinicians felt the decision aid would encourage question-asking and help patients reflect on personal values.
CONCLUSIONS Clinicians found the decision aid acceptable, patients found it was easy to understand and both groups felt it would empower patients to actively engage in their care and come to an informed decision that aligned with personal values. Input from the working group and user-testing was crucial for ensuring that the decision aid met patient and clinician needs. PATIENT OR PUBLIC CONTRIBUTION Patients and clinicians contributed to prototype development via the working group.
Affiliation(s)
- Julie Ayre
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Richie Kumarage
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Hazel Jenkins
- Department of Chiropractic, Faculty of Medicine, Health and Human Sciences, Macquarie University, Macquarie Park, New South Wales, Australia
- Kirsten J. McCaffery
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- Christopher G. Maher
- Institute of Musculoskeletal Health, Faculty of Medicine and Health, The University of Sydney and Sydney Local Health District, Sydney, New South Wales, Australia
- Mark J. Hancock
- Department of Health Professions, Faculty of Medicine, Health and Human Sciences, Macquarie University, Macquarie Park, New South Wales, Australia
3
Li H, Kalra M, Zhu L, Ackermann DM, Taba M, Bonner C, Bell KJ. Communicating the Imperfect Diagnostic Accuracy of COVID-19 Rapid Antigen Self-Tests: An Online Randomized Experiment. Med Decis Making 2024;44:437-450. PMID: 38651834. PMCID: PMC11102651. DOI: 10.1177/0272989x241242131.
Abstract
OBJECTIVE To investigate the potential impacts of optimizing coronavirus disease 2019 (COVID-19) rapid antigen test (RAT) self-testing diagnostic accuracy information. DESIGN Online randomized experiment using hypothetical scenarios: in scenarios 1 to 3 (RAT result positive), the posttest probability was considered to be very high (likely true positives), and in scenarios 4 and 5 (RAT result negative), the posttest probability was considered to be moderately high (likely false negatives). SETTING December 12 to 22, 2022, during the mixed-variant Omicron wave in Australia. PARTICIPANTS Australian adults. Intervention: diagnostic accuracy of a COVID-19 self-RAT presented in a health literacy-sensitive way; usual care: diagnostic accuracy information provided by the manufacturer; control: no diagnostic accuracy information. MAIN OUTCOME MEASURE Intention to self-isolate. RESULTS A total of 226 participants were randomized (control n = 75, usual care n = 76, intervention n = 75). More participants in the intervention group correctly interpreted the meaning of the diagnostic accuracy information (P = 0.08 for understanding sensitivity, P < 0.001 for understanding specificity). The proportion who would self-isolate was similar across scenarios 1 to 3 (likely true positives). The proportion was higher in the intervention group than in the control for scenarios 4 and 5 (likely false negatives). These differences were not statistically significant. The largest potential effect was seen in scenario 5 (dinner party with confirmed cases, the person has symptoms, negative self-RAT result), with 63% of the intervention group and 49% of the control group indicating they would self-isolate (absolute difference 13.3%, 95% confidence interval: -2% to 30%, P = 0.10). CONCLUSION Health literacy sensitive formatting supported participant understanding and recall of diagnostic accuracy information. 
This may increase community intentions to self-isolate when there is a likely false-negative self-RAT result. Trial registration: Australia New Zealand Clinical Trial Registry (ACTRN12622001517763). HIGHLIGHTS Community-based diagnostic accuracy studies of COVID-19 self-RATs indicate substantially lower sensitivity (and higher risk of false-negative results) than the manufacturer-supplied information on most government public websites. This online randomized study found that a health literacy-sensitive presentation of the imperfect diagnostic accuracy of COVID-19 self-RATs supported participant understanding and recall of diagnostic accuracy information. Health literacy-sensitive presentation may increase community intentions to self-isolate after a negative test result where the posttest probability is still moderately high (i.e., a likely false-negative result). To prevent the onward spread of infection, efforts to improve communication about the high risk of false-negative results from COVID-19 self-RATs are urgently needed.
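The "likely false negative" scenarios in this trial follow from Bayes' rule: when the pretest probability is high and test sensitivity is imperfect, a negative result still leaves a substantial residual probability of infection. A minimal sketch of that arithmetic, using illustrative numbers that are assumptions for demonstration, not figures from the study:

```python
def posttest_prob_negative(prior: float, sensitivity: float, specificity: float) -> float:
    """Probability of infection AFTER a negative test result, via Bayes' rule."""
    false_negatives = (1 - sensitivity) * prior   # infected but test negative
    true_negatives = specificity * (1 - prior)    # not infected and test negative
    return false_negatives / (false_negatives + true_negatives)

# Hypothetical scenario (assumed values): a symptomatic close contact with a
# pretest probability of 0.6, testing on a self-RAT with 70% sensitivity and
# 99% specificity.
residual_risk = posttest_prob_negative(prior=0.6, sensitivity=0.7, specificity=0.99)
# residual_risk ≈ 0.31 — despite the negative result, infection remains likely
# enough that self-isolation is still advisable.
```

With a low pretest probability (e.g. 0.05), the same formula gives a residual risk under 2%, which is why the communication problem is concentrated in the high-pretest-probability scenarios the study tested.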
Affiliation(s)
- Huijun Li
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Megha Kalra
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Lin Zhu
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Deonna M. Ackermann
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Melody Taba
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Carissa Bonner
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Katy J.L. Bell
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
4
Karimi AH, Guyler MR, Hecht CJ, Burkhart RJ, Acuña AJ, Kamath AF. Assessing the Readability of Clinical Trial Consent Forms for Surgical Specialties. J Surg Res 2024;296:711-719. PMID: 38367522. DOI: 10.1016/j.jss.2024.01.045.
Abstract
INTRODUCTION To evaluate the readability of surgical clinical trial consent forms and compare readability across surgical specialties. METHODS We conducted a cross-sectional analysis of surgical clinical trial consent forms available on ClinicalTrials.gov to quantitatively evaluate readability, word count, and length variations among different specialties. The analysis was performed between November 2022 and January 2023. A total of 386 surgical clinical trial consent forms across 14 surgical specialties were included. RESULTS The main outcomes were language complexity (measured using the Flesch-Kincaid Grade Level), number of words (measured as word count), time to read (at a reading speed of 240 words per minute), and readability (measured by the Flesch Reading Ease Score, Gunning Fog Index, Simple Measure of Gobbledygook Index, FORCAST, and Automated Readability Index). The surgical consent forms were a mean (standard deviation) of 2626 (1668) words long, with a mean of 12:53 min to read at 240 words per minute. None of the surgical specialties had an average readability level of sixth grade or lower across all six indices, and only 16 of 386 (4%) clinical trials met the recommended reading level. Furthermore, there was no significant difference in reading grade level between surgical specialties based on the Flesch-Kincaid Grade Level and Flesch Reading Ease indices. CONCLUSIONS Our findings suggest that current surgical clinical trial consent documents are too long and complex, exceeding the recommended sixth-grade reading level. Ensuring readable clinical trial consent forms is not only ethically responsible but also crucial for protecting patients' rights and well-being by facilitating informed decision-making.
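For readers unfamiliar with these indices: the Flesch-Kincaid Grade Level combines average sentence length with average syllables per word. A minimal sketch of the computation, assuming a naive vowel-group syllable counter (published readability tools use more careful, dictionary-based syllable counts):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count vowel groups, discounting a trailing silent "e".
    # An assumption for illustration; real tools use syllable dictionaries.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59
```

On this formula, short sentences of common words score near early primary grades, while the long, polysyllabic sentences typical of consent forms score well above the recommended sixth-grade level.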
Affiliation(s)
- Amir H Karimi
- Department of Orthopedic Surgery, Cleveland Clinic Foundation, Cleveland, Ohio
- Maura R Guyler
- Department of Orthopedic Surgery, Cleveland Clinic Foundation, Cleveland, Ohio
- Christian J Hecht
- Department of Orthopedic Surgery, Cleveland Clinic Foundation, Cleveland, Ohio
- Robert J Burkhart
- Department of Orthopedic Surgery, Cleveland Clinic Foundation, Cleveland, Ohio
- Alexander J Acuña
- Department of Orthopaedic Surgery, Midwest Orthopaedics at Rush, Chicago, Illinois
- Atul F Kamath
- Department of Orthopedic Surgery, Cleveland Clinic Foundation, Cleveland, Ohio
5
Ayre J, Mac O, McCaffery K, McKay BR, Liu M, Shi Y, Rezwan A, Dunn AG. New Frontiers in Health Literacy: Using ChatGPT to Simplify Health Information for People in the Community. J Gen Intern Med 2024;39:573-577. PMID: 37940756. DOI: 10.1007/s11606-023-08469-w.
Abstract
BACKGROUND Most health information does not meet the health literacy needs of our communities. Writing health information in plain language is time-consuming but the release of tools like ChatGPT may make it easier to produce reliable plain language health information. OBJECTIVE To investigate the capacity for ChatGPT to produce plain language versions of health texts. DESIGN Observational study of 26 health texts from reputable websites. METHODS ChatGPT was prompted to 'rewrite the text for people with low literacy'. Researchers captured three revised versions of each original text. MAIN MEASURES Objective health literacy assessment, including Simple Measure of Gobbledygook (SMOG), proportion of the text that contains complex language (%), number of instances of passive voice and subjective ratings of key messages retained (%). KEY RESULTS On average, original texts were written at grade 12.8 (SD = 2.2) and revised to grade 11.0 (SD = 1.2), p < 0.001. Original texts were on average 22.8% complex (SD = 7.5%) compared to 14.4% (SD = 5.6%) in revised texts, p < 0.001. Original texts had on average 4.7 instances (SD = 3.2) of passive text compared to 1.7 (SD = 1.2) in revised texts, p < 0.001. On average 80% of key messages were retained (SD = 15.0). The more complex original texts showed more improvements than less complex original texts. For example, when original texts were ≥ grade 13, revised versions improved by an average 3.3 grades (SD = 2.2), p < 0.001. Simpler original texts (< grade 11) improved by an average 0.5 grades (SD = 1.4), p < 0.001. CONCLUSIONS This study used multiple objective assessments of health literacy to demonstrate that ChatGPT can simplify health information while retaining most key messages. However, the revised texts typically did not meet health literacy targets for grade reading score, and improvements were marginal for texts that were already relatively simple.
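The SMOG index used in this study estimates a grade level from the density of polysyllabic words (three or more syllables). A minimal sketch, again assuming a naive vowel-group syllable counter rather than the dictionary-based counting that validated tools use:

```python
import math
import re

def count_syllables(word: str) -> int:
    # Naive heuristic (an assumption for illustration): vowel groups,
    # discounting a trailing silent "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def smog_grade(text: str) -> float:
    # SMOG = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291
```

A text with no polysyllabic words bottoms out at SMOG ≈ 3.13, which is why even aggressively simplified health texts rarely score below about grade 3 on this index.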
Affiliation(s)
- Julie Ayre
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Olivia Mac
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Kirsten McCaffery
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Brad R McKay
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Mingyi Liu
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Yi Shi
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Atria Rezwan
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Adam G Dunn
- Discipline of Biomedical Informatics and Digital Health, School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
6
Cvetkovski B, Muscat D, Bousquet J, Cabrera M, House R, Katsoulotos G, Lourenco O, Papadopoulos N, Price DB, Rimmer J, Ryan D, Smith P, Yan K, Bosnic-Anticevich S. The future of allergic rhinitis management: A partnership between healthcare professionals and patients. World Allergy Organ J 2024;17:100873. PMID: 38463017. PMCID: PMC10924206. DOI: 10.1016/j.waojou.2024.100873.
Abstract
Allergic rhinitis (AR) is a chronic respiratory condition that continues to be burdensome internationally and impairs quality of life. Despite the availability of medicines, and of guidelines for healthcare providers on the optimal management of AR, optimal management in the community remains elusive. The reasons for this are multi-faceted and include both environmental and healthcare-related factors. One factor we can no longer ignore is that AR management is no longer limited to the domain of the healthcare provider: people with AR make their own choices about how to manage their condition, often without seeking advice from a healthcare provider. We must build a bridge between healthcare provider knowledge and guidelines on one side and patient decision-making on the other. With this commentary, we propose that a shared decision-making approach between healthcare professionals and people with AR be developed and promoted, with a focus on patient health literacy. As custodians of AR knowledge, we have a responsibility to ensure it is accessible to those who matter most: the people with AR.
Affiliation(s)
- Rachel House
- Woolcock Institute of Medical Research, Australia
- Gregory Katsoulotos
- The University of Notre Dame Australia and The University of Technology, Australia
- Dermot Ryan
- University of Aberdeen Academic Primary Care Research Group, UK
- Pete Smith
- Griffith University - Gold Coast Campus, Australia
- Kwok Yan
- Royal Prince Alfred Hospital, Australia
7
Ayre J, Muscat DM, Mac O, Bonner C, Dunn AG, Dalmazzo J, Mouwad D, McCaffery K. Helping patient educators meet health literacy needs: End-user testing and iterative development of an innovative health literacy editing tool. PEC Innov 2023;2:100162. PMID: 37384149. PMCID: PMC10294045. DOI: 10.1016/j.pecinn.2023.100162.
Abstract
Objective The Sydney Health Literacy Lab (SHeLL) Editor is an online text-editing tool that provides real-time assessment of, and feedback on, written health information (grade reading score, complex language, passive voice). This study aimed to explore how the design could be further enhanced to help health information providers interpret and act on automated feedback. Methods The prototype was iteratively refined across four rounds of user-testing with health services staff (N = 20). Participants took part in online interviews and a brief follow-up survey using validated usability scales (System Usability Scale, Technology Acceptance Model). After each round, Yardley's (2021) optimisation criteria guided which changes would be implemented. Results Participants rated the Editor as having adequate usability (M = 82.8 out of 100, SD = 13.5). Most modifications sought to reduce information overload (e.g. simplifying instructions for new users) or make feedback motivating and actionable (e.g. using frequent incremental feedback to highlight how changes to the text altered assessment scores). Conclusion Iterative user-testing was critical to balancing academic values and the practical needs of the Editor's target users. The final version emphasises actionable real-time feedback, not just assessment. Innovation The Editor is a new tool that will help health information providers apply health literacy principles to written text.
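To illustrate the kind of automated check such a tool performs, a passive-voice counter can be sketched as below. This is a deliberately naive, hypothetical heuristic (a "to be" auxiliary followed by a word ending in -ed/-en), not the SHeLL Editor's actual rule set, which would also need to handle irregular participles, intervening adverbs, and similar cases:

```python
import re

# Naive passive-voice pattern: an auxiliary form of "to be" followed
# immediately by a regular past participle (-ed) or -en participle.
PASSIVE = re.compile(
    r"\b(am|is|are|was|were|be|been|being)\s+(\w+(?:ed|en))\b",
    re.IGNORECASE,
)

def count_passive(text: str) -> int:
    """Count likely passive-voice constructions in the text."""
    return len(PASSIVE.findall(text))
```

For example, "The form was signed by the patient." is flagged once, while the active rewrite "The patient signed the form." is not flagged, mirroring the kind of incremental feedback the study describes.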
Affiliation(s)
- Julie Ayre
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Danielle M. Muscat
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Olivia Mac
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Carissa Bonner
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Menzies Centre for Health Policy and Economics, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Adam G. Dunn
- Discipline of Biomedical Informatics and Digital Health, School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Jason Dalmazzo
- Discipline of Biomedical Informatics and Digital Health, School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Dana Mouwad
- Western Sydney Local Health District, Health Literacy Hub, Westmead, NSW, Australia
- Kirsten McCaffery
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
8
Spallek S, Birrell L, Kershaw S, Devine EK, Thornton L. Can we use ChatGPT for Mental Health and Substance Use Education? Examining Its Quality and Potential Harms. JMIR Med Educ 2023;9:e51243. PMID: 38032714. PMCID: PMC10722374. DOI: 10.2196/51243.
Abstract
BACKGROUND The use of generative artificial intelligence, more specifically large language models (LLMs), is proliferating, and as such, it is vital to consider both the value and the potential harms of its use in medical education. Their fluency across a variety of writing styles makes LLMs, such as ChatGPT, attractive for tailoring educational materials. However, this technology can feature biases and misinformation, which can be particularly harmful in medical education settings such as mental health and substance use education. This viewpoint investigates whether ChatGPT is sufficient for 2 common health education functions in the field of mental health and substance use: (1) answering users' direct queries and (2) aiding in the development of quality consumer educational health materials. OBJECTIVE This viewpoint includes a case study to provide insight into the accessibility, biases, and quality of ChatGPT's query responses and educational health materials. We aim to provide guidance for the general public and health educators wishing to utilize LLMs. METHODS We collected real-world queries from 2 large-scale mental health and substance use portals and engineered a variety of prompts to use on GPT-4 Pro with the Bing BETA internet browsing plug-in. The outputs were evaluated with tools from the Sydney Health Literacy Lab to determine accessibility, against the Mindframe communication guidelines to identify biases, and by author assessments of quality, including tailoring to audiences, duty-of-care disclaimers, and evidence-based internet references. RESULTS GPT-4's outputs had good face validity but, upon detailed analysis, were substandard in comparison with expert-developed materials. Without engineered prompting, the reading level, adherence to communication guidelines, and use of evidence-based websites were poor. Therefore, all outputs still required cautious human editing and oversight.
CONCLUSIONS GPT-4 is currently not reliable enough for direct-consumer queries, but educators and researchers can use it for creating educational materials with caution. Materials created with LLMs should disclose the use of generative artificial intelligence and be evaluated on their efficacy with the target audience.
Affiliation(s)
- Sophia Spallek
- The Matilda Centre for Research in Mental Health and Substance Use, The University of Sydney, Sydney, Australia
- Louise Birrell
- The Matilda Centre for Research in Mental Health and Substance Use, The University of Sydney, Sydney, Australia
- Stephanie Kershaw
- The Matilda Centre for Research in Mental Health and Substance Use, The University of Sydney, Sydney, Australia
- Emma Krogh Devine
- The Matilda Centre for Research in Mental Health and Substance Use, The University of Sydney, Sydney, Australia
- Louise Thornton
- The Matilda Centre for Research in Mental Health and Substance Use, The University of Sydney, Sydney, Australia
9
Ayre J, Bonner C, Gonzalez J, Vaccaro T, Cousins M, McCaffery K, Muscat DM. Integrating consumer perspectives into a large-scale health literacy audit of health information materials: learnings and next steps. BMC Health Serv Res 2023;23:416. PMID: 37120520. PMCID: PMC10148726. DOI: 10.1186/s12913-023-09434-3.
Abstract
BACKGROUND Health information is less effective when it does not meet the health literacy needs of its consumers. For health organisations, assessing the appropriateness of their existing health information resources is a key step to addressing this issue. This study describes novel methods for a consumer-centred, large-scale health literacy audit of existing resources and reflects on opportunities to further refine the method. METHODS This audit focused on resources developed by NPS MedicineWise, an Australian not-for-profit organisation that promotes the safe and informed use of medicines. The audit comprised 4 stages, with consumers engaged at each stage: 1) Select a sample of resources for assessment; 2) Assess the sample using subjective (Patient Education Materials Assessment Tool) and objective (Sydney Health Literacy Lab Health Literacy Editor) assessment tools; 3) Review audit findings through workshops and identify priority areas for future work; 4) Reflect and gather feedback on the audit process via interviews. RESULTS Of 147 resources, consumers selected 49 for detailed assessment; these covered a range of health topics, health literacy skills, and formats, and had varied web usage. Overall, 42 resources (85.7%) were assessed as easy to understand, but only 26 (53.1%) as easy to act on. A typical text was written at a grade 12 reading level and used the passive voice 6 times. About one in five words in a typical text (19%) were considered complex. Workshops identified three key areas for action: make resources easier to understand and act on; consider the readers' context, needs, and skills; and improve inclusiveness and representation. Interviews with workshop attendees highlighted that the audit methods could be further improved by setting clear expectations about the project rationale, objectives, and consumer roles; providing consumers with a simpler subjective health literacy assessment tool; and addressing issues related to diverse representation.
CONCLUSIONS This audit yielded valuable consumer-centred priorities for improving organisational health literacy with regard to updating a large existing database of health information resources. We also identified important opportunities to further refine the process. Study findings provide valuable practical insights that can inform organisational health actions for the upcoming Australian National Health Literacy Strategy.
Affiliation(s)
- Julie Ayre
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Carissa Bonner
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Menzies Centre for Health Policy and Economics, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Kirsten McCaffery
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia
- Danielle M Muscat
- Sydney Health Literacy Lab, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney, Rm 128C Edward Ford Building, Sydney, NSW, Australia