1
Rothka AJ, Lorenz FJ, Hearn M, Meci A, LaBarge B, Walen SG, Slonimsky G, McGinn J, Chung T, Goyal N. Utilizing Artificial Intelligence to Increase the Readability of Patient Education Materials in Pediatric Otolaryngology. Ear Nose Throat J 2024:1455613241289647. [PMID: 39467826] [DOI: 10.1177/01455613241289647]
Abstract
Objectives: To identify the reading levels of existing patient education materials in pediatric otolaryngology and to use natural language processing artificial intelligence (AI) to reduce the reading level of those materials. Methods: Patient education materials for pediatric conditions were identified from the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) website. Patient education materials about the same conditions, if available, were selected from the websites of 7 children's hospitals. The readability of the materials was scored with the Flesch-Kincaid calculator before and after AI conversion. ChatGPT version 3.5 was used to convert the materials to a fifth-grade reading level. Results: On average, AAO-HNS pediatric education material was written at a 10.71 ± 0.71 grade level. After being asked to reduce those materials to a fifth-grade reading level, ChatGPT converted them to an average grade level of 7.9 ± 1.18 (P < .01). When comparing the published materials from AAO-HNS and the 7 institutions, the average grade level was 9.32 ± 1.82, which ChatGPT reduced to 7.68 ± 1.12 (P = .0598). Of the 7 children's hospitals, only 1 had an average grade level below the recommended sixth-grade level. Conclusions: Patient education materials in pediatric otolaryngology were consistently above recommended reading levels. In its current state, AI can reduce the reading levels of education materials; however, it could not reduce all materials below the recommended reading level.
Affiliation(s)
- F Jeffrey Lorenz
- Penn State Health Department of Otolaryngology-Head and Neck Surgery, Hershey, PA, USA
- Andrew Meci
- Penn State College of Medicine, Hershey, PA, USA
- Brandon LaBarge
- Penn State Health Department of Otolaryngology-Head and Neck Surgery, Hershey, PA, USA
- Scott G Walen
- Penn State Health Department of Otolaryngology-Head and Neck Surgery, Hershey, PA, USA
- Guy Slonimsky
- Penn State Health Department of Otolaryngology-Head and Neck Surgery, Hershey, PA, USA
- Johnathan McGinn
- Penn State Health Department of Otolaryngology-Head and Neck Surgery, Hershey, PA, USA
- Thomas Chung
- Penn State Health Department of Otolaryngology-Head and Neck Surgery, Hershey, PA, USA
- Neerav Goyal
- Penn State Health Department of Otolaryngology-Head and Neck Surgery, Hershey, PA, USA
2
Hochfelder CG, Shuman AG. Ethics and Palliation in Head and Neck Surgery. Surg Oncol Clin N Am 2024; 33:683-695. [PMID: 39244287] [DOI: 10.1016/j.soc.2024.04.005]
Abstract
Head and neck cancer is a potentially traumatizing disease that can impair many of the functions core to human life: eating, drinking, breathing, and speaking. Patients with head and neck cancer are disproportionately affected by socioeconomic challenges, social stigma, and difficult decisions about treatment approaches. Herein, the authors review foundational ethical principles and frameworks to guide care of these patients, discuss specific challenges including shared decision-making and advance care planning, and examine palliative care, including the role of surgery as a component of palliation.
Affiliation(s)
- Colleen G Hochfelder
- Department of Otolaryngology-Head and Neck Surgery, University of Michigan, 1500 East Medical Center Drive, 1903 Taubman Center, SPC 5312, Ann Arbor, MI 48109-5312, USA
- Andrew G Shuman
- Department of Otolaryngology-Head and Neck Surgery, University of Michigan, 1500 East Medical Center Drive, 1903 Taubman Center, SPC 5312, Ann Arbor, MI 48109-5312, USA
3
Oliva AD, Pasick LJ, Hoffer ME, Rosow DE. Improving readability and comprehension levels of otolaryngology patient education materials using ChatGPT. Am J Otolaryngol 2024; 45:104502. [PMID: 39197330] [DOI: 10.1016/j.amjoto.2024.104502]
Abstract
OBJECTIVE A publicly available large language model (LLM) platform may help determine current readability levels of otolaryngology patient education materials, as well as translate these materials to the recommended 6th-grade and 8th-grade reading levels. STUDY DESIGN Cross-sectional analysis. SETTING Online, using the large language model ChatGPT. METHODS The Patient Education pages of the American Laryngological Association (ALA) and American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) websites were accessed. Materials were input into ChatGPT (OpenAI, San Francisco, CA; version 3.5) and Microsoft Word (Microsoft, Redmond, WA; version 16.74). Both programs calculated Flesch Reading Ease (FRE) scores, with higher scores indicating easier readability, and Flesch-Kincaid (FK) grade levels, estimating the U.S. grade level required to understand the text. ChatGPT was prompted to "translate to a 5th-grade reading level" and provide new scores. Scores were compared for statistical differences, as well as for differences between ChatGPT and Word gradings. RESULTS Patient education materials were reviewed, and 37 ALA and 72 AAO-HNS topics were translated. Overall FRE scores and FK grades improved significantly following translation, as scored by ChatGPT (p < 0.001). Word also scored significant improvements in FRE and FK following translation by ChatGPT for AAO-HNS materials overall (p < 0.001), but not for individual topics or subspecialty-specific categories. Compared with Word, ChatGPT significantly exaggerated the change in FRE scores and FK grades (p < 0.001). CONCLUSION Otolaryngology patient education materials were found to be written at higher reading levels than recommended. Artificial intelligence may prove to be a useful resource for simplifying content to make it more accessible to patients.
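The Flesch metrics reported throughout these entries are simple functions of average sentence length and syllable density. A minimal Python sketch follows, for illustration only; the vowel-group syllable counter is a rough heuristic, not the dictionary-based counting used by tools such as Microsoft Word, so scores will differ slightly from published ones.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, dropping one for a trailing silent "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0, 0.0
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return round(fre, 1), round(fkgl, 1)
```

Higher FRE means easier text; FKGL approximates the U.S. school grade needed to understand it, which is why the sixth-grade recommendation maps to an FKGL target of about 6.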
Affiliation(s)
- Allison D Oliva
- Department of Otolaryngology-Head and Neck Surgery, University of Miami Miller School of Medicine, United States of America
- Luke J Pasick
- Department of Otolaryngology-Head and Neck Surgery, University of Miami Miller School of Medicine, United States of America
- Michael E Hoffer
- Department of Otolaryngology-Head and Neck Surgery, University of Miami Miller School of Medicine, United States of America
- David E Rosow
- Department of Otolaryngology-Head and Neck Surgery, University of Miami Miller School of Medicine, United States of America
4
Swisher AR, Wu AW, Liu GC, Lee MK, Carle TR, Tang DM. Enhancing Health Literacy: Evaluating the Readability of Patient Handouts Revised by ChatGPT's Large Language Model. Otolaryngol Head Neck Surg 2024. [PMID: 39105460] [DOI: 10.1002/ohn.927]
Abstract
OBJECTIVE To use an artificial intelligence (AI)-powered large language model (LLM) to improve readability of patient handouts. STUDY DESIGN Review of online material modified by AI. SETTING Academic center. METHODS Five handout materials obtained from the American Rhinologic Society (ARS) and the American Academy of Facial Plastic and Reconstructive Surgery websites were assessed using validated readability metrics. The handouts were inputted into OpenAI's ChatGPT-4 after prompting: "Rewrite the following at a 6th-grade reading level." The understandability and actionability of both native and LLM-revised versions were evaluated using the Patient Education Materials Assessment Tool (PEMAT). Results were compared using Wilcoxon rank-sum tests. RESULTS The mean readability scores of the standard (ARS, American Academy of Facial Plastic and Reconstructive Surgery) materials corresponded to "difficult," with reading categories ranging between high school and university grade levels. Conversely, the LLM-revised handouts had an average seventh-grade reading level. LLM-revised handouts had better readability in nearly all metrics tested: Flesch Reading Ease (70.8 vs 43.9; P < .05), Gunning Fog Score (10.2 vs 14.42; P < .05), Simple Measure of Gobbledygook (9.9 vs 13.1; P < .05), Coleman-Liau (8.8 vs 12.6; P < .05), and Automated Readability Index (8.2 vs 10.7; P = .06). PEMAT scores were significantly higher in the LLM-revised handouts for understandability (91% vs 74%; P < .05) with similar actionability (42% vs 34%; P = .15) when compared to the standard materials. CONCLUSION Patient-facing handouts can be augmented by ChatGPT with simple prompting to tailor information with improved readability. This study demonstrates the utility of LLMs to aid in rewriting patient handouts and may serve as a tool to help optimize education materials. LEVEL OF EVIDENCE Level VI.
Affiliation(s)
- Austin R Swisher
- Department of Otolaryngology-Head and Neck Surgery, Mayo Clinic, Phoenix, Arizona, USA
- Arthur W Wu
- Division of Otolaryngology-Head and Neck Surgery, Cedars-Sinai, Los Angeles, California, USA
- Gene C Liu
- Division of Otolaryngology-Head and Neck Surgery, Cedars-Sinai, Los Angeles, California, USA
- Matthew K Lee
- Division of Otolaryngology-Head and Neck Surgery, Cedars-Sinai, Los Angeles, California, USA
- Taylor R Carle
- Division of Otolaryngology-Head and Neck Surgery, Cedars-Sinai, Los Angeles, California, USA
- Dennis M Tang
- Division of Otolaryngology-Head and Neck Surgery, Cedars-Sinai, Los Angeles, California, USA
5
Del Risco A, Cherches A, Polcaro L, Washabaugh C, Hales R, Jiang R, Allori A, Raynor E. Improving Health Literacy of Elective Procedures in Pediatric Otolaryngology. Otolaryngol Head Neck Surg 2024; 171:546-553. [PMID: 38520236] [DOI: 10.1002/ohn.731]
Abstract
OBJECTIVE To identify whether the addition of supplementary material, such as video or written resources, to the consent process can improve a patient's or guardian's health literacy in pediatric otolaryngology. STUDY DESIGN Prospective randomized crossover design. SETTING Tertiary academic center. METHODS From April 18, 2022 to August 29, 2023, 151 participants (patients or their guardians) facing 1 of 6 procedures performed by the same provider completed a 6-question baseline test based on the procedure information. Each participant watched a 2-minute video and read a written summary about the procedure; the order of resources was randomized. They answered the same 6 questions after viewing each resource. All tests were scored for accuracy on an ordinal scale of 1 to 6. Resource preference was collected. Wilcoxon signed-rank tests were used to analyze differences in scores after the addition of supplementary resources, and logistic regression modeling was used to analyze demographic effects on postresource score differences. RESULTS Of 151 participants, 74.2% were guardians, and 78.8% had completed a high school or greater education. The Wilcoxon signed-rank test indicated that postresource scores were statistically significantly higher (P < .001) than pretest scores. Logistic regression modeling showed that participants were less likely to show score improvement if they were younger than 18 or white. A majority (87.4%) preferred the addition of a video to the consent process. CONCLUSION The addition of video or written resources significantly improves understanding of elective procedures. The development of procedure-specific resources can supplement the consent process and ensure decision-makers have adequate health literacy for informed decision-making.
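Several entries in this list compare paired pre/post scores with the Wilcoxon signed-rank test. A minimal pure-Python sketch of the test statistic is shown below for illustration; the published analyses presumably used standard statistical software, which also converts the statistic to a P value.

```python
def wilcoxon_signed_rank(pre, post):
    """Return the Wilcoxon signed-rank statistic W = min(W+, W-) for paired scores."""
    # Drop zero differences, per the standard procedure.
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    # Rank absolute differences, averaging ranks across tie groups.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)
```

A small W relative to the number of nonzero pairs indicates that score changes are overwhelmingly in one direction, which is what these studies report after adding supplementary resources.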
Affiliation(s)
- Amanda Del Risco
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Alexander Cherches
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Lauren Polcaro
- Campbell University School of Osteopathic Medicine, Lillington, North Carolina, USA
- Claire Washabaugh
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Robin Hales
- Department of Child Life, Duke University School of Medicine, Durham, North Carolina, USA
- Rong Jiang
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Alexander Allori
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Eileen Raynor
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
6
Patel EA, Fleischer L, Filip P, Eggerstedt M, Hutz M, Michaelides E, Batra PS, Tajudeen BA. The Use of Artificial Intelligence to Improve Readability of Otolaryngology Patient Education Materials. Otolaryngol Head Neck Surg 2024; 171:603-608. [PMID: 38751109] [DOI: 10.1002/ohn.816]
Abstract
OBJECTIVE The recommended readability of health education materials is at the sixth-grade level. Artificial intelligence (AI) large language models such as the newly released ChatGPT4 might facilitate the conversion of patient-education materials at scale. We sought to ascertain whether online otolaryngology education materials meet recommended reading levels and whether ChatGPT4 could rewrite these materials to the sixth-grade level. We also wished to ensure that converted materials were accurate and retained sufficient content. METHODS Seventy-one articles from patient educational materials published online by the American Academy of Otolaryngology-Head and Neck Surgery were selected. Articles were entered into ChatGPT4 with the prompt "translate this text to a sixth-grade reading level." Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) were determined for each article before and after AI conversion. Each article and conversion were reviewed for factual inaccuracies, and each conversion was reviewed for content retention. RESULTS The 71 articles had an initial average FKGL of 11.03 and FRES of 46.79. After conversion by ChatGPT4, the average FKGL across all articles was 5.80 and FRES was 77.27. Converted materials provided enough detail for patient education with no factual errors. DISCUSSION We found that ChatGPT4 improved the reading accessibility of otolaryngology online patient education materials to recommended levels quickly and effectively. IMPLICATIONS FOR PRACTICE Physicians can determine whether their patient education materials exceed current recommended reading levels by using widely available measurement tools, and then apply AI dialogue platforms to modify materials to more accessible levels as needed. LEVEL OF EVIDENCE Level 5.
Affiliation(s)
- Evan A Patel
- Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA
- Lindsay Fleischer
- Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA
- Peter Filip
- Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA
- Michael Eggerstedt
- Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA
- Michael Hutz
- Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA
- Elias Michaelides
- Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA
- Pete S Batra
- Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA
- Bobby A Tajudeen
- Department of Otorhinolaryngology-Head and Neck Surgery, Rush University Medical Center, Chicago, Illinois, USA
7
Armache M, Assi S, Wu R, Jain A, Lu J, Gordon L, Jacobs LM, Fundakowski CE, Rising KL, Leader AE, Fakhry C, Mady LJ. Readability of Patient Education Materials in Head and Neck Cancer: A Systematic Review. JAMA Otolaryngol Head Neck Surg 2024; 150:713-724. [PMID: 38900443] [DOI: 10.1001/jamaoto.2024.1569]
Abstract
Importance Patient education materials (PEMs) can promote patient engagement, satisfaction, and treatment adherence. The American Medical Association recommends that PEMs be developed for a sixth-grade or lower reading level. Health literacy (HL) refers to an individual's ability to seek, understand, and use health information to make appropriate decisions regarding their health. Patients with suboptimal HL may not be able to understand or act on health information and are at risk for adverse health outcomes. Objective To assess the readability of PEMs on head and neck cancer (HNC) and to evaluate HL among patients with HNC. Evidence Review A systematic review of the literature was performed by searching Cochrane, PubMed, and Scopus for peer-reviewed studies published from 1995 to 2024 using the keywords head and neck cancer, readability, health literacy, and related synonyms. Full-text studies in English that evaluated readability and/or HL measures were included. Readability assessments included the Flesch-Kincaid Grade Level (FKGL grade, 0-20, with higher grades indicating greater reading difficulty) and Flesch Reading Ease (FRE score, 1-100, with higher scores indicating easier readability), among others. Reviews, conference materials, opinion letters, and guidelines were excluded. Study quality was assessed using the Appraisal Tool for Cross-Sectional Studies. Findings Of the 3235 studies identified, 17 studies assessing the readability of 1124 HNC PEMs produced by professional societies, hospitals, and others were included. The mean FKGL grade ranged from 8.8 to 14.8; none of the studies reported a mean FKGL of grade 6 or lower. Eight studies assessed HL and found inadequate HL prevalence ranging from 11.9% to 47.0%. 
Conclusions and Relevance These findings indicate that more than one-third of patients with HNC demonstrate inadequate HL, yet none of the PEMs assessed were developed for a sixth grade or lower reading level, as recommended by the American Medical Association. This incongruence highlights the need to address the readability of HNC PEMs to improve patient understanding of the disease and to mitigate potential barriers to shared decision-making for patients with HNC. It is crucial to acknowledge the responsibility of health care professionals to produce and promote more effective PEMs to dismantle the potentially preventable literacy barriers.
Affiliation(s)
- Maria Armache
- Department of Otolaryngology-Head & Neck Surgery, The Johns Hopkins School of Medicine, Baltimore, Maryland
- Sahar Assi
- Cochlear Center for Hearing and Public Health, Johns Hopkins University, Baltimore, Maryland
- Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Richard Wu
- Head and Neck Institute, Cleveland Clinic, Cleveland, Ohio
- Amiti Jain
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania
- Joseph Lu
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania
- Larissa Gordon
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania
- Lisa M Jacobs
- Mixed Methods Research Lab, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Christopher E Fundakowski
- Department of Otolaryngology-Head and Neck Surgery, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania
- Kristin L Rising
- Jefferson Center for Connected Care, Thomas Jefferson University, Philadelphia, Pennsylvania
- Department of Emergency Medicine, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania
- Amy E Leader
- Department of Population Health, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania
- Department of Medical Oncology, Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania
- Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, Pennsylvania
- Carole Fakhry
- Department of Otolaryngology-Head & Neck Surgery, The Johns Hopkins School of Medicine, Baltimore, Maryland
- Leila J Mady
- Department of Otolaryngology-Head & Neck Surgery, The Johns Hopkins School of Medicine, Baltimore, Maryland
8
Şahin Ş, Tekin MS, Yigit YE, Erkmen B, Duymaz YK, Bahşi İ. Evaluating the Success of ChatGPT in Addressing Patient Questions Concerning Thyroid Surgery. J Craniofac Surg 2024:00001665-990000000-01698. [PMID: 38861337] [DOI: 10.1097/scs.0000000000010395]
Abstract
OBJECTIVE This study aimed to evaluate the utility and efficacy of ChatGPT in addressing questions related to thyroid surgery, taking into account accuracy, readability, and relevance. METHODS A simulated physician-patient consultation on thyroidectomy surgery was conducted by posing 21 hypothetical questions to ChatGPT. Responses were evaluated using the DISCERN score by 3 independent ear, nose, and throat specialists. Readability measures were also applied, including the Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Index, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index. RESULTS The majority of ChatGPT responses were rated fair or above using the DISCERN system, with an average score of 45.44 ± 11.24. However, the readability scores were consistently higher than the recommended grade 6 level, indicating the information may not be easily comprehensible to the general public. CONCLUSION While ChatGPT exhibits potential in answering patient queries related to thyroid surgery, its current formulation is not yet optimally tailored for patient comprehension. Further refinements are necessary for its efficient application in the medical domain.
Affiliation(s)
- Şamil Şahin
- Ear Nose and Throat Specialist, Private Practice
- Yesim Esen Yigit
- Department of Otolaryngology, Umraniye Training and Research Hospital, University of Health Sciences, Istanbul
- Burak Erkmen
- Ear Nose and Throat Specialist, Private Practice
- Yasar Kemal Duymaz
- Department of Otolaryngology, Umraniye Training and Research Hospital, University of Health Sciences, Istanbul
- İlhan Bahşi
- Department of Anatomy, Faculty of Medicine, Gaziantep University, Gaziantep, Turkey
9
Sahin S, Erkmen B, Duymaz YK, Bayram F, Tekin AM, Topsakal V. Evaluating ChatGPT-4's performance as a digital health advisor for otosclerosis surgery. Front Surg 2024; 11:1373843. [PMID: 38903865] [PMCID: PMC11188327] [DOI: 10.3389/fsurg.2024.1373843]
Abstract
Purpose This study aims to evaluate the effectiveness of ChatGPT-4, an artificial intelligence (AI) chatbot, in providing accurate and comprehensible information to patients regarding otosclerosis surgery. Methods On October 20, 2023, 15 hypothetical questions were posed to ChatGPT-4 to simulate physician-patient interactions about otosclerosis surgery. Responses were evaluated by three independent ENT specialists using the DISCERN scoring system. The readability was evaluated using multiple indices: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (Gunning FOG), Simple Measure of Gobbledygook (SMOG), Coleman-Liau Index (CLI), and Automated Readability Index (ARI). Results The responses from ChatGPT-4 received DISCERN scores ranging from poor to excellent, with an overall score of 50.7 ± 8.2. The readability analysis indicated that the texts were above the 6th-grade level, suggesting they may not be easily comprehensible to the average reader. There was a significant positive correlation between the referees' scores. Despite providing correct information in over 90% of the cases, the study highlights concerns regarding the potential for incomplete or misleading answers and the high readability level of the responses. Conclusion While ChatGPT-4 shows potential in delivering health information accurately, its utility is limited by the level of readability of its responses. The study underscores the need for continuous improvement in AI systems to ensure the delivery of information that is both accurate and accessible to patients with varying levels of health literacy. Healthcare professionals should supervise the use of such technologies to enhance patient education and care.
Affiliation(s)
- Yaşar Kemal Duymaz
- Umraniye Research and Training Hospital, University of Health Sciences, Istanbul, Türkiye
- Furkan Bayram
- Umraniye Research and Training Hospital, University of Health Sciences, Istanbul, Türkiye
- Ahmet Mahmut Tekin
- Department of Otolaryngology and Head & Neck Surgery, Vrije Universiteit Brussel, Brussels Health Care Center, Brussels, Belgium
- Vedat Topsakal
- Department of Otolaryngology and Head & Neck Surgery, Vrije Universiteit Brussel, Brussels Health Care Center, Brussels, Belgium
10
Shen SA, Perez-Heydrich CA, Xie DX, Nellis JC. ChatGPT vs. web search for patient questions: what does ChatGPT do better? Eur Arch Otorhinolaryngol 2024; 281:3219-3225. [PMID: 38416195] [PMCID: PMC11410109] [DOI: 10.1007/s00405-024-08524-0]
Abstract
PURPOSE Chat generative pretrained transformer (ChatGPT) has the potential to significantly impact how patients acquire medical information online. Here, we characterize the readability and appropriateness of ChatGPT responses to a range of patient questions compared to results from traditional web searches. METHODS Patient questions related to the published Clinical Practice Guidelines by the American Academy of Otolaryngology-Head and Neck Surgery were sourced from existing online posts. Questions were categorized using a modified Rothwell classification system into (1) fact, (2) policy, and (3) diagnosis and recommendations. These were queried using ChatGPT and traditional web search. All results were evaluated on readability (Flesch Reading Ease and Flesch-Kincaid Grade Level) and understandability (Patient Education Materials Assessment Tool). Accuracy was assessed by two blinded clinical evaluators using a three-point ordinal scale. RESULTS 54 questions were organized into fact (37.0%), policy (37.0%), and diagnosis (25.8%). The average readability for ChatGPT responses was lower than traditional web search (FRE: 42.3 ± 13.1 vs. 55.6 ± 10.5, p < 0.001), while the PEMAT understandability was equivalent (93.8% vs. 93.5%, p = 0.17). ChatGPT scored higher than web search for questions in the 'Diagnosis' category (p < 0.01); there was no difference in questions categorized as 'Fact' (p = 0.15) or 'Policy' (p = 0.22). Additional prompting improved ChatGPT response readability (FRE 55.6 ± 13.6, p < 0.01). CONCLUSIONS ChatGPT outperforms web search in answering patient questions related to symptom-based diagnoses and is equivalent in providing medical facts and established policy. Appropriate prompting can further improve readability while maintaining accuracy. Further patient education is needed to relay the benefits and limitations of this technology as a source of medical information.
Affiliation(s)
- Sarek A Shen
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, 601 North Caroline Street, Baltimore, MD, 21287, USA
- Deborah X Xie
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, 601 North Caroline Street, Baltimore, MD, 21287, USA
- Jason C Nellis
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, 601 North Caroline Street, Baltimore, MD, 21287, USA
11
Hearn M, Sciscent BY, King TS, Goyal N. Factors Associated With Inadequate Health Literacy: An Academic Otolaryngology Clinic Population Study. OTO Open 2024; 8:e130. [PMID: 38618286] [PMCID: PMC11015145] [DOI: 10.1002/oto2.130]
Abstract
Objective To characterize the prevalence of inadequate health literacy among otolaryngology patients and assess the association of individual patient factors with inadequate health literacy. Study Design Cross-sectional study. Setting Tertiary academic medical center otolaryngology clinic. Methods Adult patients presenting to the clinic were recruited from March to June 2022. Participants completed a validated health literacy questionnaire in the waiting room. Data on age, sex, race, insurance, county of residence, and language were extracted from the electronic medical record, linked to the survey responses, and deidentified for analysis. Logistic regression analyses assessed the association between inadequate health literacy and patient factors. Results Of 374 participants, the mean age was 54.8 years (SD = 17.8) and most were white (79%) and native English speakers (95%). The median health literacy score was 14.5 (Q1-Q3: 12.0-15.0) and 43 participants (12%) had inadequate health literacy. Bivariate analysis showed the odds of inadequate health literacy were 2.5 times greater for those with public insurance (95% confidence interval [CI]: 1.24-5.20, P = .011), 3.5 times greater for males (95% CI: 1.75-6.92, P < .001), and significantly different among race groups (P = .003). When all factors were evaluated simultaneously with multivariable regression, only sex (P < .001) and race (P = .005) remained significant predictors of inadequate health literacy. There were no significant associations between health literacy and age or rurality. Conclusion Inadequate health literacy was associated with sex and race, but not with age or rurality. 12% of patients had inadequate health literacy, which may perpetuate disparities in care and necessitate interventions to improve care delivery in otolaryngology.
Affiliation(s)
- Madison Hearn
- Department of Otolaryngology–Head and Neck Surgery, Penn State College of Medicine, Hershey, Pennsylvania, USA
- Bao Y. Sciscent
- Department of Otolaryngology–Head and Neck Surgery, Penn State College of Medicine, Hershey, Pennsylvania, USA
- Tonya S. King
- Department of Otolaryngology–Head and Neck Surgery, Penn State College of Medicine, Hershey, Pennsylvania, USA
- Neerav Goyal
- Department of Otolaryngology–Head and Neck Surgery, Penn State College of Medicine, Hershey, Pennsylvania, USA
12
Morse E, Odigie E, Gillespie H, Rameau A. The Readability of Patient-Facing Social Media Posts on Common Otolaryngologic Diagnoses. Otolaryngol Head Neck Surg 2024; 170:1051-1058. [PMID: 38018504 DOI: 10.1002/ohn.584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 10/09/2023] [Accepted: 10/28/2023] [Indexed: 11/30/2023]
Abstract
OBJECTIVE To assess the readability of patient-facing educational information about the most common otolaryngology diagnoses on popular social media platforms. STUDY DESIGN Cross-sectional study. SETTING Social media platforms. METHODS The top 5 otolaryngologic diagnoses were identified from the National Ambulatory Medical Care Survey Database. Facebook, Twitter, TikTok, and Instagram were searched using these terms, and the top 25 patient-facing posts from unique accounts for each search term and poster type (otolaryngologist, other medical professional, layperson) were identified. Captions, text, and audio from images and video, and linked articles were extracted. The readability of each post element was calculated with multiple readability formulae. Readability was summarized and compared between poster types, platforms, and search terms via Kruskal-Wallis testing. RESULTS Median readability, by grade level, was greater than 10 for captions, 5 for image-associated text, and 9 for linked articles. Captions and images in posts by laypeople were significantly more readable than captions by otolaryngologists or other medical professionals, but there was no difference for linked articles. All post components were more readable in posts about cerumen than those about other search terms. CONCLUSIONS When examining the readability of posts on social media regarding the most common otolaryngology diagnoses, we found that many posts are less readable than recommended for patients and that posts by laypeople were significantly more readable than those by medical professionals. Medical professionals should work to make educational social media posts more readable to facilitate patient comprehension.
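The Flesch measures used throughout these studies are closed-form functions of word, sentence, and syllable counts with published coefficients. A minimal sketch follows; the vowel-group syllable counter is a crude heuristic (dedicated tools such as the Flesch-Kincaid calculators cited above use dictionaries), so scores will differ slightly from theirs.

```python
import re

def count_syllables(word):
    # Crude heuristic: count contiguous vowel groups; at least 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / max(1, sentences)       # words per sentence
    spw = syllables / max(1, len(words))       # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
    return round(fre, 1), round(fkgl, 1)

fre, grade = readability("The doctor will check your ears. It does not hurt.")
print(fre, grade)
```

Short, common words push the Reading Ease score up and the grade level down, which is exactly the transformation the AI-rewriting study in the head of this list asked ChatGPT to perform.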
Affiliation(s)
- Elliot Morse
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Eseosa Odigie
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, New York, USA
- Helen Gillespie
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, New York, USA
- Anaïs Rameau
- Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, New York, USA
13
Cırık AA, Yiğit YE, Tekin AM, Duymaz YK, Şahin Ş, Erkmen B, Topsakal V. Comprehensiveness of online sources for patient education on otosclerosis. Front Surg 2024; 11:1327793. [PMID: 38327547 PMCID: PMC10847337 DOI: 10.3389/fsurg.2024.1327793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2023] [Accepted: 01/08/2024] [Indexed: 02/09/2024] Open
Abstract
Purpose This study aimed to assess the readability indices of websites containing educational materials on otosclerosis. Methods We performed a Google search on 19 April 2023 using the term "otosclerosis." The first 50 hits were collected and analyzed. The websites were categorized into two groups: websites for health professionals and general websites for patients. Readability indices were calculated using the website https://www.webfx.com/tools/read-able/. Results A total of 33 websites were eligible and analyzed (20 health professional-oriented and 13 patient-oriented websites). When patient-oriented websites and health professional-oriented websites were analyzed individually, mean Flesch Reading Ease scores were found to be 52.16 ± 14.34 and 46.62 ± 10.07, respectively. There was no significant difference between the two groups upon statistical analysis. Conclusion Current patient educational material available online related to otosclerosis is written beyond the recommended sixth-grade reading level. Even a high-quality website is of little value to patients who cannot comprehend its text.
Affiliation(s)
- Ahmet Adnan Cırık
- Department of Otolaryngology, Umraniye Training and Research Hospital, University of Health Sciences, Istanbul, Türkiye
- Yeşim Esen Yiğit
- Department of Otolaryngology, Umraniye Training and Research Hospital, University of Health Sciences, Istanbul, Türkiye
- Ahmet Mahmut Tekin
- Department of Otolaryngology and Head & Neck Surgery, Vrije Universiteit Brussel, University Hospital UZ Brussel, Brussels Health Campus, Brussels, Belgium
- Yaşar Kemal Duymaz
- Department of Otolaryngology, Umraniye Training and Research Hospital, University of Health Sciences, Istanbul, Türkiye
- Burak Erkmen
- Department of Otolaryngology, Sancaktepe Martyr Prof Dr Ilhan Varank Training and Research Hospital, University of Health Sciences, İstanbul, Türkiye
- Vedat Topsakal
- Department of Otolaryngology and Head & Neck Surgery, Vrije Universiteit Brussel, University Hospital UZ Brussel, Brussels Health Campus, Brussels, Belgium
14
Nicholls B, Acharya V, Slim MAM, Haywood M, Sharma R. Online health information on sinonasal inverted papillomas: An assessment on readability and quality. Clin Otolaryngol 2024; 49:124-129. [PMID: 37867392 DOI: 10.1111/coa.14115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 09/07/2023] [Accepted: 09/30/2023] [Indexed: 10/24/2023]
Abstract
BACKGROUND/OBJECTIVES Sinonasal inverted papilloma (IP) is a rare but serious diagnosis, with a paucity of patient-centred information regarding this condition. As more patients seek healthcare information online, the quality and comprehensibility of this information become ever more important. The aim of the study was to investigate the readability and quality of websites on inverted papilloma. METHODS The term IP and seven of its synonyms were entered into three of the most commonly used search engines in the English-speaking world (Google, Yahoo and Bing). The first 20 results returned for each search term were then screened with our exclusion criteria. The remaining websites were assessed for their readability using the Flesch Reading Ease Score (FRES) and average grade level (AGL). Quality was assessed using the DISCERN questionnaire. RESULTS Of the 480 websites returned using our search strategy, 410 were excluded using our screening criteria. Removal of duplicates from the remaining 70 websites left 14 for inclusion in the final analysis. The mean FRES of the included websites was 30.5 ± 10 and the mean AGL was 15.2 ± 1.1, corresponding to the reading age of a 21-year-old. The median DISCERN score was 33.5 (30.5-36.5), which falls within the 'poor quality' range. CONCLUSION The readability and quality of online patient information on IP are far below the expected standard. Healthcare providers have a responsibility to direct patients to appropriate sources of information, or to consider producing new material should a lack of appropriate sources exist.
Affiliation(s)
- Benjamin Nicholls
- Imperial College School of Medicine, Imperial College London, London, UK
- Vikas Acharya
- Royal National ENT and Eastman Dental Hospitals, University College London Hospitals, London, UK
- Matthew Haywood
- Royal National ENT and Eastman Dental Hospitals, University College London Hospitals, London, UK
15
Tan DJY, Ko TK, Fan KS. The Readability and Quality of Web-Based Patient Information on Nasopharyngeal Carcinoma: Quantitative Content Analysis. JMIR Form Res 2023; 7:e47762. [PMID: 38010802 DOI: 10.2196/47762] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Revised: 08/25/2023] [Accepted: 10/25/2023] [Indexed: 11/29/2023] Open
Abstract
BACKGROUND Nasopharyngeal carcinoma (NPC) is a rare disease that is strongly associated with exposure to the Epstein-Barr virus and is characterized by the formation of malignant cells in nasopharynx tissues. Early diagnosis of NPC is often difficult owing to the location of initial tumor sites and the nonspecificity of initial symptoms, resulting in a higher frequency of advanced-stage diagnoses and a poorer prognosis. Access to high-quality, readable information could improve the early detection of the disease and provide support to patients during disease management. OBJECTIVE This study aims to assess the quality and readability of publicly available web-based information in the English language about NPC, using the most popular search engines. METHODS Key terms relevant to NPC were searched across 3 of the most popular internet search engines: Google, Yahoo, and Bing. The top 25 results from each search engine were included in the analysis. Websites that contained text written in languages other than English, required paywall access, targeted medical professionals, or included nontext content were excluded. Readability for each website was assessed using the Flesch Reading Ease score and the Flesch-Kincaid grade level. Website quality was assessed using the Journal of the American Medical Association (JAMA) and DISCERN tools as well as the presence of a Health on the Net Foundation seal. RESULTS Overall, 57 suitable websites were included in this study; 26% (15/57) of the websites were academic. The mean JAMA and DISCERN scores of all websites were 2.80 (IQR 3) and 57.60 (IQR 19), respectively, with medians of 3 (IQR 2-4) and 61 (IQR 49-68). Health care industry websites (n=3) had the highest mean JAMA score of 4 (SD 0). Academic websites (15/57, 26%) had the highest mean DISCERN score of 77.5. The Health on the Net Foundation seal was present on only 1 website, which achieved a JAMA score of 3 and a DISCERN score of 50. Significant differences were observed between the JAMA scores of hospital websites and those of industry websites (P=.04), news service websites (P=.048), and charity and nongovernmental organization websites (P=.03). Despite being a vital source for patients, general practitioner websites had significantly lower JAMA scores than charity websites (P=.05). The overall mean readability scores reflected an average reading age of 14.3 (SD 1.1) years. CONCLUSIONS The results of this study suggest an inconsistent and suboptimal quality of information related to NPC on the internet. On average, websites presented readability challenges, as written information about NPC was above the recommended sixth-grade reading level. As such, web-based information requires improvement in both quality and accessibility, and healthcare providers should be selective about the information they recommend to patients, ensuring it is reliable and readable.
Affiliation(s)
- Denise Jia Yun Tan
- Department of Surgery, Royal Stoke University Hospital, Stoke on Trent, United Kingdom
- Tsz Ki Ko
- Department of Surgery, Royal Stoke University Hospital, Stoke on Trent, United Kingdom
- Ka Siu Fan
- Department of Surgery, Royal Surrey County Hospital, Guildford, Surrey, United Kingdom
16
Duymaz YK, Tekin AM, D’Haese P, Şahin Ş, Erkmen B, Cırık AA, Topsakal V. Comprehensiveness of online sources for patient education on hereditary hearing impairment. Front Pediatr 2023; 11:1147207. [PMID: 37404560 PMCID: PMC10315533 DOI: 10.3389/fped.2023.1147207] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Accepted: 05/30/2023] [Indexed: 07/06/2023] Open
Abstract
Introduction The present study aimed to investigate the readability of online sources on hereditary hearing impairment (HHI). Methods In August 2022, the search terms "hereditary hearing impairment", "genetic deafness", "hereditary hearing loss", and "sensorineural hearing loss of genetic origin" were entered into the Google search engine and educational materials were identified. The first 50 websites were examined for each search. Duplicate hits were removed and websites with only graphics or tables were excluded. Websites were categorized as either a professional society, a clinical practice, or a general health information website. The readability tests used to evaluate the websites included the Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning-Fog Index, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index. Results Twenty-nine websites were included: 4 from professional societies, 11 from clinical practices, and 14 providing general information. All analyzed websites required reading levels higher than sixth grade. On average, 12-16 years of education are required to read and understand the websites focused on HHI. Although general health information websites had better readability, the difference was not statistically significant. Discussion The readability scores of every type of online educational material on HHI are above the recommended level, indicating that not all patients and parents can comprehend the information they seek on these websites.
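The additional indices named in the Methods above are likewise closed-form functions of aggregate text counts. A sketch using their published coefficients follows; the input counts are hypothetical, chosen only to show how the four formulas diverge on the same text.

```python
import math

def grade_indices(words, sentences, letters, polysyllables):
    """Grade-level estimates from whole-text counts.
    polysyllables = number of words with 3+ syllables."""
    L = letters / words * 100      # letters per 100 words (Coleman-Liau)
    S = sentences / words * 100    # sentences per 100 words (Coleman-Liau)
    return {
        # Gunning Fog: 0.4 * (avg sentence length + % complex words)
        "gunning_fog": 0.4 * (words / sentences + 100 * polysyllables / words),
        # SMOG: 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291
        "smog": 1.0430 * math.sqrt(polysyllables * 30 / sentences) + 3.1291,
        # Coleman-Liau: 0.0588L - 0.296S - 15.8
        "coleman_liau": 0.0588 * L - 0.296 * S - 15.8,
        # Automated Readability Index: 4.71(chars/word) + 0.5(words/sentence) - 21.43
        "ari": 4.71 * (letters / words) + 0.5 * (words / sentences) - 21.43,
    }

# Hypothetical counts for a ~150-word patient handout.
scores = grade_indices(words=150, sentences=12, letters=700, polysyllables=18)
print({k: round(v, 1) for k, v in scores.items()})
```

Because each index weights sentence length and word complexity differently, studies such as this one report several of them side by side rather than relying on a single score.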
Affiliation(s)
- Yaşar Kemal Duymaz
- Department of Otolaryngology, University of Health Science, Umraniye Training and Research Hospital, Istanbul, Türkiye
- Ahmet M. Tekin
- Department of Otolaryngology and Head & Neck Surgery, Vrije Universiteit Brussel, University Hospital UZ Brussel, Brussels Health Campus, Brussels, Belgium
- Patrick D'Haese
- Faculty of Medicine and Pharmacy, Vrije Universiteit Brussel, Brussels, Belgium
- Burak Erkmen
- Department of Otolaryngology, University of Health Science, Sancaktepe Prof Dr Ilhan Varank Training and Research Hospital, Istanbul, Türkiye
- Ahmet Adnan Cırık
- Department of Otolaryngology, University of Health Science, Umraniye Training and Research Hospital, Istanbul, Türkiye
- Vedat Topsakal
- Department of Otolaryngology and Head & Neck Surgery, Vrije Universiteit Brussel, University Hospital UZ Brussel, Brussels Health Campus, Brussels, Belgium
17
Ahmadzadeh K, Bahrami M, Zare-Farashbandi F, Adibi P, Boroumand MA, Rahimi A. Patient education information material assessment criteria: A scoping review. Health Info Libr J 2023; 40:3-28. [PMID: 36637218 DOI: 10.1111/hir.12467] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2021] [Revised: 10/13/2022] [Accepted: 11/03/2022] [Indexed: 01/14/2023]
Abstract
BACKGROUND Patient education information material (PEIM) is an essential component of patient education programs in increasing patients' ability to cope with their diseases. Therefore, it is essential to consider the criteria that will be used to prepare and evaluate these resources. OBJECTIVE This paper aims to identify these criteria and recognize the tools or methods used to evaluate them. METHODS National and international databases and indexing banks, including PubMed, Scopus, Web of Science, ProQuest, the Cochrane Library, Magiran, SID and ISC, were searched for this review. Original or review articles, theses, short surveys, and conference papers published between January 1990 and June 2022 were included. RESULTS Overall, 4688 documents were retrieved, of which 298 documents met the inclusion criteria. The criteria were grouped into 24 overarching criteria. The most frequently used criteria were readability, quality, suitability, comprehensibility and understandability. CONCLUSION This review has provided empirical evidence to identify criteria, tools, techniques or methods for developing or evaluating a PEIM. The authors suggest that developing a comprehensive tool based on these findings is critical for evaluating the overall efficiency of PEIM using effective criteria.
Affiliation(s)
- Khadijeh Ahmadzadeh
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Student Research Committee, Sirjan School of Medical Sciences, Sirjan, Iran
- Masoud Bahrami
- Department of Adult Health Nursing, Nursing and Midwifery Care Research Center, School of Nursing and Midwifery, Isfahan University of Medical Sciences, Isfahan, Iran
- Firoozeh Zare-Farashbandi
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Payman Adibi
- Gastroenterology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Mohammad Ali Boroumand
- Department of Medical Library and Information Sciences, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran
- Alireza Rahimi
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
18
Grose EM, Cheng EY, Levin M, Philteos J, Lee JW, Monteiro EA. Critical Quality and Readability Analysis of Online Patient Education Materials on Parotidectomy: A Cross-Sectional Study. Ann Otol Rhinol Laryngol 2022; 131:1317-1324. [PMID: 34991334 DOI: 10.1177/00034894211066670] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
PURPOSE Complications related to parotidectomy can cause significant morbidity, and thus the decision to pursue this surgery needs to be well-informed. Given that information available online plays a critical role in patient education, this study aimed to evaluate the readability and quality of online patient education materials (PEMs) regarding parotidectomy. METHODS A Google search was performed using the term "parotidectomy" and the first 10 pages of the search were analyzed. Quality and reliability of the online information were assessed using the DISCERN instrument. Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease Score (FRE) were used to evaluate readability. RESULTS Thirty-five PEMs met the inclusion criteria. The average FRE score was 59.3, and 16 (46%) of the online PEMs had FRE scores below 60, indicating that they were fairly difficult to very difficult to read. The average grade level of the PEMs was above the eighth grade when evaluated with the FKGL. The average DISCERN score was 41.7, indicative of fair quality. There were no significant differences between PEMs originating from medical institutions and PEMs originating from other sources in terms of quality or readability. CONCLUSION Online PEMs on parotidectomy may not be comprehensible to the average individual. This study highlights the need for the development of more appropriate PEMs to inform patients about parotidectomy.
Affiliation(s)
- Elysia Miriam Grose
- Department of Otolaryngology - Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Emily YiQin Cheng
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Marc Levin
- Department of Otolaryngology - Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Justine Philteos
- Department of Otolaryngology - Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Jong Wook Lee
- Department of Otolaryngology - Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Eric A Monteiro
- Department of Otolaryngology - Head and Neck Surgery, Sinai Health System, Toronto, ON, Canada
19
Nagahata K, Kanda M, Kamekura R, Sugawara M, Yama N, Suzuki C, Takano K, Hatakenaka M, Takahashi H. Abnormal [18F]fluorodeoxyglucose accumulation to tori tubarius in IgG4-related disease. Ann Nucl Med 2021; 36:200-207. [PMID: 34748155 DOI: 10.1007/s12149-021-01691-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Accepted: 10/27/2021] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Tubarial glands (TGs) are recently refocused gland tissues located near the tori tubarius in the nasopharynx, and their clinical relevance is not yet clear. IgG4-related disease (IgG4-RD) is a progressive fibrosing condition in which the salivary glands are commonly affected. The aim of the present study was to examine [18F]fluorodeoxyglucose ([18F]FDG) accumulation in the tori tubarius in IgG4-RD. METHODS 48 patients with IgG4-RD who underwent positron emission tomography (PET) scanning with [18F]FDG were included, and semi-quantitative analysis of [18F]FDG accumulation in the tori tubarius was performed along with analysis of clinical features and histopathology. RESULTS Of the 48 patients, abnormal [18F]FDG accumulation (metabolic tumour volume ≥ 1) in the tori tubarius was observed in 15 (31.3%), all of whom had lesions in other head and neck glands. IgG4-RD patients with abnormal [18F]FDG accumulation in the tori tubarius showed swollen nasopharyngeal walls around the tori tubarius, and forceps biopsy of the lesions revealed acinar cells and IgG4-positive plasma cells histologically. Abnormal [18F]FDG accumulation (maximum standard uptake value, metabolic tumour volume and total lesion glycolysis) in the tori tubarius correlated with higher IgG4 and lower IgA serum concentrations. CONCLUSIONS Abnormal [18F]FDG accumulation in the tori tubarius can be observed in patients with IgG4-RD and may indicate TG involvement in IgG4-RD.
Affiliation(s)
- Ken Nagahata
- Department of Rheumatology and Clinical Immunology, Sapporo Medical University School of Medicine, South 1-West 16, Chuo-ku, Sapporo, Hokkaido, 060-8543, Japan
- Masatoshi Kanda
- Department of Rheumatology and Clinical Immunology, Sapporo Medical University School of Medicine, South 1-West 16, Chuo-ku, Sapporo, Hokkaido, 060-8543, Japan
- Ryuta Kamekura
- Department of Otolaryngology, Sapporo Medical University School of Medicine, Sapporo, Japan
- Department of Human Immunology, Research Institute for Frontier Medicine, Sapporo Medical University School of Medicine, Sapporo, Japan
- Masanari Sugawara
- Department of Rheumatology and Clinical Immunology, Sapporo Medical University School of Medicine, South 1-West 16, Chuo-ku, Sapporo, Hokkaido, 060-8543, Japan
- Department of Rheumatology, Endocrinology and Nephrology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Naoya Yama
- Department of Diagnostic Radiology, Sapporo Medical University School of Medicine, Sapporo, Japan
- Chisako Suzuki
- Department of Rheumatology and Clinical Immunology, Sapporo Medical University School of Medicine, South 1-West 16, Chuo-ku, Sapporo, Hokkaido, 060-8543, Japan
- Kenichi Takano
- Department of Otolaryngology, Sapporo Medical University School of Medicine, Sapporo, Japan
- Masamitsu Hatakenaka
- Department of Diagnostic Radiology, Sapporo Medical University School of Medicine, Sapporo, Japan
- Hiroki Takahashi
- Department of Rheumatology and Clinical Immunology, Sapporo Medical University School of Medicine, South 1-West 16, Chuo-ku, Sapporo, Hokkaido, 060-8543, Japan