Fridman I, Boyles D, Chheda R, Baldwin-SoRelle C, Smith AB, Elston Lafata J. Identifying Misinformation About Unproven Cancer Treatments on Social Media Using User-Friendly Linguistic Characteristics: Content Analysis. JMIR Infodemiology 2025;5:e62703. [PMID: 39938078; PMCID: PMC11888050; DOI: 10.2196/62703]
[Received: 05/31/2024; Revised: 08/22/2024; Accepted: 11/23/2024]
Abstract
BACKGROUND
Health misinformation is prevalent on social media and poses a significant threat to individuals, particularly those facing serious illnesses such as cancer. Current recommendations for avoiding cancer misinformation are difficult to follow because they require users to have research skills.
OBJECTIVE
This study addresses this problem by identifying user-friendly characteristics of misinformation that users can easily observe, helping them flag misinformation on social media.
METHODS
Using a structured review of the literature on algorithmic misinformation detection across political science, social science, and computer science, we assembled linguistic characteristics associated with misinformation. We then collected datasets by mining X (previously known as Twitter) posts using keywords related to unproven cancer therapies and cancer center usernames. This search, coupled with manual labeling, allowed us to create a dataset with misinformation and 2 control datasets. We used natural language processing to model linguistic characteristics within these datasets. Two experiments, one per control dataset, used predictive modeling and Lasso regression to evaluate the effectiveness of the linguistic characteristics in identifying misinformation.
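The feature-extraction step can be illustrated with a minimal sketch. The feature categories (URLs, hashtags, numbers, certainty and tentative language) follow those named in this abstract, but the regular expressions and word lists below are invented for illustration and are not the study's actual lexicons or pipeline.

```python
import re

# Illustrative word lists; the study's actual lexicons are not published here.
CERTAINTY_WORDS = {"definitely", "always", "never", "guaranteed", "cure", "cures"}
TENTATIVE_WORDS = {"may", "might", "possibly", "suggests", "appears"}

def extract_features(post: str) -> dict:
    """Count simple, user-observable linguistic characteristics in one post."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return {
        "n_urls": len(re.findall(r"https?://\S+", post)),       # linked sources
        "n_hashtags": post.count("#"),                          # hashtag marks
        "n_numbers": len(re.findall(r"\b\d+(?:\.\d+)?%?", post)),  # bare figures
        "n_certainty": sum(t in CERTAINTY_WORDS for t in tokens),
        "n_tentative": sum(t in TENTATIVE_WORDS for t in tokens),
    }
```

In the study, feature vectors like these would then be fed to a Lasso-regularized model, whose nonzero coefficients indicate which characteristics predict misinformation.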
RESULTS
User-friendly linguistic characteristics were extracted from 88 papers. The short-listed characteristics did not yield optimal results in the first experiment but predicted misinformation with an accuracy of 73% in the second experiment, in which posts with misinformation were compared with posts from health care systems. The linguistic characteristics that consistently negatively predicted misinformation included tentative language, location, URLs, and hashtags, while numbers, absolute language, and certainty expressions consistently predicted misinformation positively.
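The sign pattern reported above can be turned into a back-of-the-envelope heuristic: numbers and absolute or certainty language raise a suspicion score, while tentative language, URLs, and hashtags lower it. The weights and word lists below are invented for this sketch and are not the coefficients of the study's fitted model.

```python
import re

# Illustrative word lists, not the study's lexicons.
ABSOLUTE_WORDS = {"always", "never", "definitely", "guaranteed", "proven", "cure", "cures"}
TENTATIVE_WORDS = {"may", "might", "possibly", "suggests", "could"}

def misinformation_score(post: str) -> int:
    """Higher scores mean more misinformation-like, per the reported sign pattern."""
    tokens = re.findall(r"[a-z']+", post.lower())
    score = 0
    score += len(re.findall(r"\b\d", post))             # numbers: positive predictor
    score += sum(t in ABSOLUTE_WORDS for t in tokens)   # absolute/certainty language
    score -= sum(t in TENTATIVE_WORDS for t in tokens)  # tentative language: negative
    score -= len(re.findall(r"https?://\S+", post))     # URLs: negative predictor
    score -= post.count("#")                            # hashtags: negative predictor
    return score
```

A post like "This herb definitely cures 90% of cancers" scores positive, while a hedged, sourced post with a link and hashtag scores negative, matching the direction of the reported predictors.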
CONCLUSIONS
This analysis resulted in user-friendly recommendations, such as exercising caution when encountering social media posts featuring unwavering assurances or specific numbers lacking references. Future studies should test the efficacy of the recommendations among information users.