1. Peng W, Lee HR, Lim S. Leveraging Chatbots to Combat Health Misinformation for Older Adults: Participatory Design Study. JMIR Form Res 2024; 8:e60712. [PMID: 39393065] [DOI: 10.2196/60712]
Abstract
BACKGROUND Older adults, a population particularly susceptible to misinformation, may be targeted by health-related scams or fraud, and they may unknowingly spread misinformation. Previous research has investigated managing misinformation through media literacy education or by supporting users with fact-checking and cautioning them about potentially false content, yet studies focusing on older adults are limited. Chatbots have the potential to educate and support older adults in misinformation management. However, many studies focusing on designing technology for older adults use a needs-based approach and treat aging as a deficit, leading to issues in technology adoption. Instead, we adopted an asset-based approach, inviting older adults to be active collaborators in envisioning how intelligent technologies can enhance their misinformation management practices. OBJECTIVE This study aims to understand how older adults may use chatbots' capabilities for misinformation management. METHODS We conducted 5 participatory design workshops with a total of 17 older adult participants to ideate ways in which chatbots can help them manage misinformation. The workshops included 3 stages: developing scenarios reflecting older adults' encounters with misinformation in their lives, understanding existing chatbot platforms, and envisioning how chatbots can help intervene in the scenarios from stage 1. RESULTS We found that issues with older adults' misinformation management arose more from interpersonal relationships than from individuals' ability to detect misinformation in pieces of information. This finding underscores the importance of chatbots acting as mediators that facilitate communication and help resolve conflict. In addition, participants emphasized the importance of autonomy: they wanted chatbots to teach them to navigate the information landscape and come to conclusions about misinformation on their own.
Finally, we found that older adults' distrust in IT companies and in governments' ability to regulate the IT industry affected their trust in chatbots. Thus, chatbot designers should consider using well-trusted sources and practicing transparency to increase older adults' trust in chatbot-based tools. Overall, our results highlight the need for chatbot-based misinformation tools to go beyond fact-checking. CONCLUSIONS This study provides insights into how chatbots can be designed as part of technological systems for misinformation management among older adults. Our study underscores the importance of inviting older adults to be active co-designers of chatbot-based interventions.
Affiliation(s)
- Wei Peng
- Department of Media and Information, Michigan State University, East Lansing, MI, United States
- Hee Rin Lee
- Department of Media and Information, Michigan State University, East Lansing, MI, United States
- Sue Lim
- Department of Media and Information, Michigan State University, East Lansing, MI, United States
2. Zhang S, Zhou H, Zhu Y. Have we found a solution for health misinformation? A ten-year systematic review of health misinformation literature 2013-2022. Int J Med Inform 2024; 188:105478. [PMID: 38743994] [DOI: 10.1016/j.ijmedinf.2024.105478]
Abstract
BACKGROUND Health misinformation (HM) has emerged as a prominent social issue in recent years, driven by declining public trust, the popularisation of digital media platforms, and escalating public health crises. Since the COVID-19 pandemic, HM has raised critical concerns due to its significant impacts on both individuals and society as a whole. A comprehensive understanding of HM and HM-related studies would be instrumental in identifying possible solutions to HM and the associated challenges. METHODS Following the PRISMA procedure, 11,739 papers published from January 2013 to December 2022 were retrieved from five electronic databases, and 813 papers matching the inclusion criteria were retained for further analysis. This article critically reviews HM-related studies, detailing the factors facilitating HM creation and dissemination, the negative impacts of HM, solutions to HM, and the research methods employed in those studies. RESULTS A growing number of studies have focused on HM since 2013. The results highlight that trust plays a significant yet latent role in the circuits of HM: it facilitates the creation and dissemination of HM, exacerbates HM's negative impacts, and amplifies the difficulty of addressing HM. CONCLUSION For health authorities and governmental institutions, it is essential to systematically build public trust in order to reduce the probability that individuals accept HM and to improve the effectiveness of misinformation correction. Future studies should pay more attention to the role of trust in addressing HM.
Affiliation(s)
- Shiyi Zhang
- School of Arts, Media and Communication, University of Leicester, UK
- Huiyu Zhou
- School of Computing and Mathematical Sciences, University of Leicester, UK
- Yimei Zhu
- School of Arts, Media and Communication, University of Leicester, UK
3. Peng W, Meng J, Issaka B. Navigating persuasive strategies in online health misinformation: An interview study with older adults on misinformation management. PLoS One 2024; 19:e0307771. [PMID: 39052635] [PMCID: PMC11271879] [DOI: 10.1371/journal.pone.0307771]
Abstract
Online health misinformation commonly includes persuasive strategies that can easily deceive lay people. Yet, it is not well understood how individuals respond to misinformation employing persuasive strategies at the moment of exposure. This study aims to address this research gap by exploring how and why older adults fall into the persuasive traps of online health misinformation and how they manage their encounters with it. Using a think-aloud protocol, semi-structured interviews were conducted with twenty-nine older adults who were exposed to articles employing twelve groups of common persuasive strategies found in online health misinformation. Thematic analysis of the transcripts revealed that some participants fell for the persuasive strategies, while others detected the same strategies and used them as cues for identifying misinformation. Based on the participants' own words, informational and individual factors, as well as the interplay of these factors, were identified as contributors to susceptibility to misinformation. Participants' strategies for managing misinformation for themselves and others were categorized, and implications of the findings are discussed.
Affiliation(s)
- Wei Peng
- Department of Media and Information, Michigan State University, East Lansing, Michigan, United States of America
- Jingbo Meng
- School of Communication, Ohio State University, Columbus, Ohio, United States of America
- Barikisu Issaka
- Department of Advertising and Public Relations, Michigan State University, East Lansing, Michigan, United States of America
4. Khemani B, Patil S, Kotecha K, Vora D. Detecting health misinformation: A comparative analysis of machine learning and graph convolutional networks in classification tasks. MethodsX 2024; 12:102737. [PMID: 38774687] [PMCID: PMC11107227] [DOI: 10.1016/j.mex.2024.102737]
Abstract
In the digital age, the proliferation of health-related information online has heightened the risk of misinformation, posing substantial threats to public well-being. This research conducts a meticulous comparative analysis of classification models for detecting health misinformation. The study evaluates the performance of traditional machine learning models and advanced graph convolutional networks (GCNs) across critical algorithmic metrics, providing a comprehensive understanding of each algorithm's effectiveness in identifying health misinformation and valuable insights for combating the pervasive spread of false health information in the digital landscape.
- The research method involves a comparative analysis of classification algorithms to detect health misinformation, exploring traditional machine learning models and graph convolutional networks.
- The algorithms employed were the Passive Aggressive Classifier, Random Forest, Decision Tree, Logistic Regression, LightGBM, GCN, GCN with BERT, GCN with TF-IDF, and GCN with Word2Vec. Accuracy results: Passive Aggressive Classifier 85.75%, Random Forest 86%, Decision Tree 81.30%, LightGBM 83.29%, plain GCN 84.53%, GCN with BERT 85.00%, GCN with TF-IDF 93.86%, and GCN with Word2Vec 81.00%.
- Algorithmic performance metrics, including accuracy, precision, recall, and F1-score, were systematically evaluated to assess the efficacy of each model in detecting health misinformation, focusing on the strengths and limitations of the different approaches. GCN with TF-IDF embedding performed best, achieving an accuracy of 93.86%.
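The baseline pipeline this abstract describes (TF-IDF features fed to conventional classifiers, with accuracy compared across models) can be sketched as follows. This is an illustrative sketch only: the corpus, labels, and train/test split are hypothetical stand-ins, and the paper's GCN, BERT, and LightGBM variants are omitted because they require additional libraries.

```python
# Illustrative sketch of a TF-IDF + classifier comparison; the corpus and
# labels are hypothetical stand-ins (1 = misinformation, 0 = accurate).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, PassiveAggressiveClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

texts = [
    "Miracle herb cures cancer in days, doctors hate it",
    "Vaccines permanently rewrite your DNA",
    "Clinical trials show the vaccine reduces severe illness",
    "Regular screening improves early cancer detection",
] * 25
labels = [1, 1, 0, 0] * 25

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0)

# TF-IDF features are fitted on the training split only.
vec = TfidfVectorizer()
X_train_tfidf = vec.fit_transform(X_train)
X_test_tfidf = vec.transform(X_test)

models = {
    "Passive Aggressive": PassiveAggressiveClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train_tfidf, y_train)
    acc = accuracy_score(y_test, model.predict(X_test_tfidf))
    print(f"{name}: {acc:.2%}")
```

With scikit-learn installed, each model prints its held-out accuracy; on a real labeled misinformation corpus the same loop yields the kind of comparison reported above.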
Affiliation(s)
- Bharti Khemani
- Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India
- Shruti Patil
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis Institute of Technology Pune Campus, Symbiosis International (Deemed University) (SIU), Lavale, Pune 412115, India
- Ketan Kotecha
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis Institute of Technology Pune Campus, Symbiosis International (Deemed University) (SIU), Lavale, Pune 412115, India
- Deepali Vora
- Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, India
5. Haupt MR, Chiu M, Chang J, Li Z, Cuomo R, Mackey TK. Detecting nuance in conspiracy discourse: Advancing methods in infodemiology and communication science with machine learning and qualitative content coding. PLoS One 2023; 18:e0295414. [PMID: 38117843] [PMCID: PMC10732406] [DOI: 10.1371/journal.pone.0295414]
Abstract
The spread of misinformation and conspiracies has been an ongoing issue since the early stages of the internet era, resulting in the emergence of the field of infodemiology (i.e., information epidemiology), which investigates the transmission of health-related information. Due to the high volume of online misinformation in recent years, there is a need to continue advancing methodologies in order to effectively identify narratives and themes. While machine learning models can be used to detect misinformation and conspiracies, these models are limited in their generalizability to other datasets and misinformation phenomena, and are often unable to detect implicit meanings in text that require contextual knowledge. To rapidly detect evolving conspiracist narratives within high-volume online discourse while identifying nuanced themes requiring the comprehension of subtext, this study describes a hybrid methodology that combines natural language processing (i.e., topic modeling and sentiment analysis) with qualitative content coding approaches to characterize conspiracy discourse related to 5G wireless technology and COVID-19 on Twitter (currently known as 'X'). Discourse that focused on correcting 5G conspiracies was also analyzed for comparison. Sentiment analysis shows that conspiracy-related discourse was more likely to use language that was analytic, combative, and past-oriented, referenced social status, and expressed negative emotions. Corrections discourse was more likely to use words reflecting cognitive processes, prosocial relations, health-related consequences, and future-oriented language. Inductive coding characterized conspiracist narratives related to global elites, anti-vax sentiment, medical authorities, religious figures, and false correlations between technology advancements and disease outbreaks. Further, the corrections discourse did not address many of the narratives prevalent in conspiracy conversations.
This paper aims to further bridge the gap between computational and qualitative methodologies by demonstrating how both approaches can be used in tandem to emphasize the positive aspects of each methodology while minimizing their respective drawbacks.
Affiliation(s)
- Michael Robert Haupt
- Department of Cognitive Science, University of California San Diego, La Jolla, California, United States of America
- Global Health Policy & Data Institute, San Diego, California, United States of America
- Michelle Chiu
- Department of Psychology, Temple University, Philadelphia, Pennsylvania, United States of America
- Joseline Chang
- Rady School of Management, University of California San Diego, La Jolla, California, United States of America
- Zoe Li
- Global Health Policy & Data Institute, San Diego, California, United States of America
- S-3 Research, San Diego, California, United States of America
- Raphael Cuomo
- Department of Anesthesiology, University of California, San Diego School of Medicine, San Diego, California, United States of America
- Tim K. Mackey
- S-3 Research, San Diego, California, United States of America
- Global Health Program, Department of Anthropology, University of California, San Diego, California, United States of America
6. Bizzotto N, Schulz PJ, de Bruijn GJ. The "Loci" of Misinformation and Its Correction in Peer- and Expert-Led Online Communities for Mental Health: Content Analysis. J Med Internet Res 2023; 25:e44656. [PMID: 37721800] [PMCID: PMC10546261] [DOI: 10.2196/44656]
Abstract
BACKGROUND Mental health problems are recognized as a pressing public health issue, and an increasing number of individuals are turning to online communities for mental health to search for information and support. Although these virtual platforms have the potential to provide emotional support and access to anecdotal experiences, they can also present users with large amounts of potentially inaccurate information. Despite the importance of this issue, limited research has been conducted, especially on the differences that might emerge due to the type of content moderation of online communities: peer-led or expert-led. OBJECTIVE We aim to fill this gap by examining the prevalence, the communicative context, and the persistence of mental health misinformation in Facebook online communities for mental health, with a focus on understanding the mechanisms that enable effective correction of inaccurate information and differences between expert-led and peer-led groups. METHODS We conducted a content analysis of 1534 statements (from 144 threads) in 2 Italian-speaking Facebook groups. RESULTS The study found that an alarming proportion of comments (26.1%) contained medically inaccurate information. Furthermore, nearly 60% of the threads presented at least one misinformation statement without any correction attempt. Moderators were more likely to correct misinformation than members; however, they were not immune to posting content containing misinformation, an unexpected finding. Discussions about aspects of treatment (including side effects or treatment interruption) significantly increased the probability of encountering misinformation. Additionally, the study found that misinformation produced in the comments of a thread, rather than in the first post, had a lower probability of being corrected, particularly in peer-led communities.
CONCLUSIONS The high prevalence of misinformation in online communities, particularly when left uncorrected, underscores the importance of conducting additional research to identify effective mechanisms to prevent its spread. This is especially important given the study's finding that misinformation tends to be more prevalent around specific "loci" of discussion that, once identified, can serve as a starting point to develop strategies for preventing and correcting misinformation within them.
Affiliation(s)
- Nicole Bizzotto
- Faculty of Communication, Culture and Society, Università della Svizzera italiana, Lugano, Switzerland
- Peter Johannes Schulz
- Faculty of Communication, Culture and Society, Università della Svizzera italiana, Lugano, Switzerland
- Department of Communication and Media, Ewha Womans University, Seoul, Republic of Korea
- Wee Kim Wee School of Communication & Information & LKC School of Medicine, Nanyang Technological University, Singapore
- Gert-Jan de Bruijn
- Department of Communication Studies, University of Antwerp, Antwerp, Belgium
7. Lazard AJ, Nicolla S, Vereen RN, Pendleton S, Charlot M, Tan HJ, DiFranzo D, Pulido M, Dasgupta N. Exposure and Reactions to Cancer Treatment Misinformation and Advice: Survey Study. JMIR Cancer 2023; 9:e43749. [PMID: 37505790] [PMCID: PMC10422174] [DOI: 10.2196/43749]
Abstract
BACKGROUND Cancer treatment misinformation, or false claims about alternative cures, often spreads faster and farther than true information on social media. Cancer treatment misinformation can harm the psychosocial and physical health of individuals with cancer and their cancer care networks by causing distress and encouraging people to abandon support, potentially leading to deviations from evidence-based care. There is a pressing need to understand how cancer treatment misinformation is shared and uncover ways to reduce misinformation. OBJECTIVE We aimed to better understand exposure and reactions to cancer treatment misinformation, including the willingness of study participants to prosocially intervene and their intentions to share Instagram posts with cancer treatment misinformation. METHODS We conducted a survey on cancer treatment misinformation among US adults in December 2021. Participants reported their exposure and reactions to cancer treatment misinformation generally (saw or heard, source, type of advice, and curiosity) and specifically on social media (platform, believability). Participants were then randomly assigned to view 1 of 3 cancer treatment misinformation posts or an information post and asked to report their willingness to prosocially intervene and their intentions to share. RESULTS Among US adult participants (N=603; mean age 46, SD 18.83 years), including those with cancer and cancer caregivers, almost 1 in 4 (142/603, 23.5%) received advice about alternative ways to treat or cure cancer. Advice was primarily shared through family (39.4%) and friends (37.3%) for digestive (30.3%) and natural (14.1%) alternative cancer treatments, which generated curiosity among most recipients (106/142, 74.6%). More than half of participants (337/603, 55.9%) saw any cancer treatment misinformation on social media, with significantly higher exposure for those with cancer (53/109, 70.6%) than for those without cancer (89/494, 52.6%; P<.001). 
Participants saw cancer misinformation on Facebook (39.8%), YouTube (27%), Instagram (22.1%), and TikTok (14.1%), among other platforms. Most participants (429/603, 71.1%) thought cancer treatment misinformation on social media was true at least some of the time. More than half (357/603, 59.2%) were likely to share any of the cancer misinformation posts shown. Many participants (412/603, 68.3%) were willing to prosocially intervene for any cancer misinformation post, including flagging the cancer treatment misinformation posts as false (49.7%-51.4%) or reporting them to the platform (48.1%-51.4%). Among the participants, individuals with cancer and those who identified as Black or Hispanic reported greater willingness to intervene to reduce cancer misinformation but also higher intentions to share misinformation. CONCLUSIONS Cancer treatment misinformation reaches US adults through social media, including on widely used platforms for support. Many believe that social media posts about alternative cancer treatment are true at least some of the time. The willingness of US adults, including those with cancer and members of susceptible populations, to prosocially intervene could initiate the necessary community action to reduce cancer treatment misinformation if coupled with strategies to help individuals discern false claims.
Affiliation(s)
- Allison J Lazard
- Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Sydney Nicolla
- Hussman School of Journalism and Media, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Rhyan N Vereen
- Hussman School of Journalism and Media, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Shanetta Pendleton
- Hussman School of Journalism and Media, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Marjory Charlot
- Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Medicine, Division of Oncology, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Hung-Jui Tan
- Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Urology, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Dominic DiFranzo
- P.C. Rossin College of Engineering and Applied Science, Lehigh University, Bethlehem, PA, United States
- Marlyn Pulido
- Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Nabarun Dasgupta
- Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Injury Prevention Research Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
8. Turner MM, Lisse S, Rimal R, Kamlem T, Shaikh H, Biswas N. Rumour spread and control during the West African Ebola epidemic in Liberia. Disasters 2023; 47:346-365. [PMID: 35762515] [DOI: 10.1111/disa.12552]
Abstract
The severity of the 2014-15 West African Ebola epidemic in Liberia was coupled with widespread misunderstanding of the virus among citizens and the proliferation of rumours. Rumour control during outbreaks is imperative to reduce the public's fears about a disease. In Liberia, a tracker system was developed to detect rumours as quickly as possible through SMS (short message service) text messaging. This study assessed rumour circulation in newspapers and on the radio, and rumour control over time. It relied on a content analysis of SMS messages from the 'DeySay' tracker, the print communications of newspapers, and the audio of radio programmes in the period between January 2014 and March 2015. The findings show that more rumours appeared in newspapers, but rumours were more likely to be overtly characterised as such on the radio. DeySay accurately predicted rumours before they appeared on the radio and in newspapers, supporting its usefulness in future health epidemics.
Affiliation(s)
- Monique Mitchell Turner
- Professor and Chairperson in the Department of Communication, Michigan State University, United States
- Skylar Lisse
- MPH (Master of Public Health) Student at the Milken Institute School of Public Health, George Washington University, United States
- Rajiv Rimal
- Professor and Chairperson in the Department of Health, Behavior and Society, Johns Hopkins University, United States
- Tamah Kamlem
- MPH Student at the Milken Institute School of Public Health, George Washington University, United States
- Hina Shaikh
- MPH Student at the Milken Institute School of Public Health, George Washington University, United States
- Nilakshi Biswas
- MPH Student at the Milken Institute School of Public Health, George Washington University, United States
9. Segado-Fernández S, Herrera-Peco I, Jiménez-Gómez B, Ruiz Núñez C, Jiménez-Hidalgo PJ, Benítez de Gracia E, González-Rodríguez LG, Torres-Ramírez C, Lozano-Estevan MDC. Realfood and Cancer: Analysis of the Reliability and Quality of YouTube Content. Int J Environ Res Public Health 2023; 20:5046. [PMID: 36981954] [PMCID: PMC10048849] [DOI: 10.3390/ijerph20065046]
Abstract
This study analyzes the quality and reliability of videos related to nutrition and cancer on YouTube. STUDY DESIGN An observational, retrospective, cross-sectional, time-limited study analyzing activity on the social network YouTube. METHODS Information about the videos was extracted through an API search tool using the NodeXL software. Videos were selected if they included the keywords "real food", "realfood", and "cancer" or the hashtags #realfood and #cancer, were in English, and were available on 1 December 2022. RESULTS The mean DISCERN score across all videos analyzed was 2.25 (±0.88) points, indicating low reliability. Videos uploaded by HRU represented only 20.8% of the total. Videos suggesting that the use of foods defined as "real food" could cure cancer without the intervention of any other treatment accounted for 12.5%. Videos that provided external links to scientific or technical evidence verifying the information represented only 13.89% of the total; of these, 70% came from HRU. The mean DISCERN score for videos from HRU was 3.05 (±0.88), reflecting good reliability for videos from these users. CONCLUSIONS This study provides information on the content and quality of the videos found on YouTube. We found videos from non-health users whose content is not based on any scientific evidence, with the danger that this entails for the population. However, the videos published by HRU showed greater reliability and quality and were better perceived by the population, so it is important to encourage healthcare professionals and health institutions to share verified information on YouTube.
Affiliation(s)
- Sergio Segado-Fernández
- Department of Health Sciences, Universidad Europea de Canarias, Calle Inocencio García, 1, La Orotava, 38300 Santa Cruz de Tenerife, Spain
- Ivan Herrera-Peco
- Faculty of Health Sciences, Alfonso X el Sabio University, Avda. Universidad, 1, Villanueva de la Cañada, 28691 Madrid, Spain
- Beatriz Jiménez-Gómez
- Department of Nursing, Human Nutrition and Dietetics, Universidad Europea de Madrid, Calle Tajo, s/n, Villaviciosa de Odón, 28670 Madrid, Spain
- Carlos Ruiz Núñez
- Program in Biomedicine, Translational Research and New Health Technologies, School of Medicine, University of Malaga, Blvr. Louis Pasteur, 29010 Málaga, Spain
- Pedro Jesús Jiménez-Hidalgo
- Traumatology and Orthopedic Surgery Service, Hospital Universitario Nuestra Señora de Candelaria, Ctra. Gral. del Rosario, 145, 38010 Santa Cruz de Tenerife, Spain
- Elvira Benítez de Gracia
- Faculty of Health Sciences, Alfonso X el Sabio University, Avda. Universidad, 1, Villanueva de la Cañada, 28691 Madrid, Spain
- Liliana G. González-Rodríguez
- VALORNUT-UCM (920030) Research Group, Department of Nutrition and Food Science, Faculty of Pharmacy, Complutense University of Madrid, 28040 Madrid, Spain
- Cristina Torres-Ramírez
- Faculty of Health Sciences, Alfonso X el Sabio University, Avda. Universidad, 1, Villanueva de la Cañada, 28691 Madrid, Spain
- María del Carmen Lozano-Estevan
- VALORNUT-UCM (920030) Research Group, Department of Nutrition and Food Science, Faculty of Pharmacy, Complutense University of Madrid, 28040 Madrid, Spain
10. Ng JY, Verhoeff N, Steen J. What are the ways in which social media is used in the context of complementary and alternative medicine in the health and medical scholarly literature? A scoping review. BMC Complement Med Ther 2023; 23:32. [PMID: 36732809] [PMCID: PMC9893203] [DOI: 10.1186/s12906-023-03856-6]
Abstract
BACKGROUND Despite the increased use of social media to share health-related information and the substantial impact that complementary and alternative medicine (CAM) can have on individuals' health and wellbeing, currently, to our knowledge, there is no review that compiles research on how social media is used in the context of CAM. The objective of this study was to summarize the ways in which social media is used in the context of CAM. METHODS A scoping review was conducted, following Arksey and O'Malley's five-stage methodological framework. MEDLINE, EMBASE, PsycINFO, AMED, and CINAHL databases were systematically searched from inception until October 3, 2020, in addition to the Canadian Agency for Drugs and Technology in Health (CADTH) website. Eligible studies had to have investigated how at least one social media platform is used in the context of one or more types of CAM treatment. RESULTS Searches retrieved 1714 items following deduplication, of which 1687 titles and abstracts were eliminated, leaving 94 full-text articles to be considered. Of those, 65 were not eligible, leaving a total of 29 articles eligible for review. Three themes emerged from our analysis: 1) social media is used to share user/practitioner beliefs, attitudes, and experiences about CAM, 2) social media acts as a vehicle for the spread of misinformation about CAM, and 3) there are unique challenges with social media research in the context of CAM. CONCLUSIONS In addition to social media being a useful tool for sharing user/practitioner beliefs, attitudes, and experiences about CAM, it has been shown to be an accessible, effective, and viable option for delivering CAM therapies and information. Social media has also been shown to spread a large amount of misleading and false information in the context of CAM. Additionally, this review highlights the challenges of conducting social media research in the context of CAM, particularly in collecting a representative sample.
Affiliation(s)
- Jeremy Y. Ng
- Department of Health Research Methods, Evidence, and Impact, Faculty of Health Sciences, McMaster University, Michael G. DeGroote Centre for Learning and Discovery, Room 2112, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada
- Natasha Verhoeff
- Department of Health Research Methods, Evidence, and Impact, Faculty of Health Sciences, McMaster University, Michael G. DeGroote Centre for Learning and Discovery, Room 2112, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada
- Jeremy Steen
- Department of Health Research Methods, Evidence, and Impact, Faculty of Health Sciences, McMaster University, Michael G. DeGroote Centre for Learning and Discovery, Room 2112, 1280 Main Street West, Hamilton, ON L8S 4K1, Canada
11
Afful-Dadzie E, Afful-Dadzie A, Egala SB. Social media in health communication: A literature review of information quality. HEALTH INF MANAG J 2023; 52:3-17. [PMID: 33818176 DOI: 10.1177/1833358321992683] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
BACKGROUND Social media is used in health communication by individuals, health professionals, disease centres and other health regulatory bodies. However, varying degrees of information quality are churned out daily on social media. This review is concerned with the quality of Social Media Health Information (SMHI). OBJECTIVE The review sought to understand how SMHI quality issues have been framed and addressed in the literature. Health topics, users and social media platforms that have raised health information quality concerns are reviewed. The review also looked at the suitability of existing criteria and instruments used in evaluating SMHI and identified gaps for future research. METHOD The Preferred Reporting Items for Systematic Reviews and Meta-Analyses and the forward chaining strategy were used in the document search. Data were sourced according to inclusion criteria from five academic databases, namely Scopus, Web of Science, Cochrane Library, PubMed and MEDLINE. RESULTS A total of 93 articles published between 2000 and 2019 were used in the review. The review revealed worrying trends in the quality of health content and communication on social media, especially for cancer, dental care and diabetes information on YouTube. The review further discovered that the Journal of the American Medical Association benchmarks, the DISCERN instrument and the Health on the Net Foundation code, which were designed before the advent of social media, continue to be used as quality evaluation instruments for SMHI, even though technical and user characteristics of social media differ from traditional portals such as websites. CONCLUSION The study synthesises varied opinions on SMHI quality in the literature and recommends that future research proposes quality evaluation criteria and instruments specifically for SMHI.
12
Nabożny A, Balcerzak B, Morzy M, Wierzbicki A, Savov P, Warpechowski K. Improving medical experts' efficiency of misinformation detection: an exploratory study. WORLD WIDE WEB 2022; 26:773-798. [PMID: 35975112 PMCID: PMC9371952 DOI: 10.1007/s11280-022-01084-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 05/03/2022] [Accepted: 07/04/2022] [Indexed: 06/15/2023]
Abstract
Fighting medical disinformation in the era of the pandemic is an increasingly important problem. Today, automatic systems for assessing the credibility of medical information do not offer sufficient precision, so human supervision and the involvement of medical expert annotators are required. Our work aims to optimize the utilization of medical experts' time. We also equip them with tools for semi-automatic initial verification of the credibility of the annotated content. We introduce a general framework for filtering medical statements that do not require manual evaluation by medical experts, thus focusing annotation efforts on non-credible medical statements. Our framework is based on the construction of filtering classifiers adapted to narrow thematic categories. This allows medical experts to fact-check and identify over two times more non-credible medical statements in a given time interval without applying any changes to the annotation flow. We verify our results across a broad spectrum of medical topic areas. We perform quantitative, as well as exploratory analysis on our output data. We also point out how those filtering classifiers can be modified to provide experts with different types of feedback without any loss of performance.
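The filtering framework summarized above, in which per-topic classifiers route only likely non-credible statements to expert annotators, can be illustrated with a minimal sketch. Everything here (the keyword-based scorer, topic names, and threshold) is a hypothetical stand-in for the paper's trained thematic classifiers, not the authors' implementation:

```python
# Hypothetical sketch of topic-specific pre-filtering for expert annotation.
# Each topic gets its own scorer (here a toy keyword model); only statements
# scoring above the topic's threshold reach the expert annotation queue.

def credibility_score(statement, suspicious_terms):
    """Toy stand-in for a trained per-topic classifier: fraction of
    suspicious terms present in the statement (higher = less credible)."""
    words = set(statement.lower().split())
    hits = sum(1 for t in suspicious_terms if t in words)
    return hits / max(len(suspicious_terms), 1)

def filter_for_annotation(statements, topic_models, threshold=0.3):
    """Route each (topic, statement) pair: keep only statements whose
    per-topic score exceeds the threshold, focusing expert time on them."""
    queue = []
    for topic, statement in statements:
        terms = topic_models.get(topic, set())
        if credibility_score(statement, terms) > threshold:
            queue.append((topic, statement))
    return queue

# Hypothetical topic vocabularies and statements, for illustration only.
topic_models = {
    "vaccines": {"autism", "microchip", "toxins"},
    "nutrition": {"detox", "miracle", "cure"},
}
statements = [
    ("vaccines", "vaccines cause autism and contain toxins"),
    ("vaccines", "vaccines are recommended by physicians"),
    ("nutrition", "this detox tea is a miracle cure"),
]
queue = filter_for_annotation(statements, topic_models)
```

In this sketch only the two suspicious statements reach the queue, which mirrors the paper's goal: experts spend their limited annotation time on statements most likely to be non-credible.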
Affiliation(s)
- Mikołaj Morzy
- Polish-Japanese Academy of Information Technology, Warsaw, Poland
- Poznań University of Technology, Poznań, Poland
- Adam Wierzbicki
- Polish-Japanese Academy of Information Technology, Warsaw, Poland
- Pavel Savov
- Polish-Japanese Academy of Information Technology, Warsaw, Poland
13
14
Dixon E, Anderson J, Blackwelder D, Radnofsky M, Lazar A. Barriers to Online Dementia Information and Mitigation. PROCEEDINGS OF THE SIGCHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS. CHI CONFERENCE 2022; 2022:513. [PMID: 35789135 PMCID: PMC9201622 DOI: 10.1145/3491102.3517554] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
There is growing interest in HCI to study ways to support access to accurate, accessible, relevant online health information for different populations. Yet, there remains a need to understand the barriers that are posed by the way our platforms are designed as well as how we might overcome these barriers for people with dementia. To address this, we conducted sixteen interviews and observation sessions with people with mild to moderate dementia. Our analysis uncovered four barriers to online health information and corresponding mitigation strategies that participants employed. We discuss how HCI researchers may apply these findings towards new technical approaches and standards concerning information accessibility and credibility for neurodiverse populations. Finally, we broaden the scope of HCI research to include investigations of the accessibility and credibility of online information for people with age-related cognitive impairment independent of proxies.
Affiliation(s)
- Emma Dixon
- University of Maryland, College Park, Maryland, USA
- Amanda Lazar
- University of Maryland, College Park, Maryland, USA
15
Muhammed T S, Mathew SK. The disaster of misinformation: a review of research in social media. INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS 2022; 13:271-285. [PMID: 35194559 PMCID: PMC8853081 DOI: 10.1007/s41060-022-00311-6] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 01/06/2022] [Indexed: 01/21/2023]
Abstract
The spread of misinformation in social media has become a severe threat to public interests. For example, several incidents of public health concern arose out of social media misinformation during the COVID-19 pandemic. Against the backdrop of the emerging IS research focus on social media and the impact of misinformation during recent events such as the COVID-19 pandemic, the Australian bushfires, and the US elections, we identified disaster, health, and politics as specific domains for a research review on social media misinformation. Following a systematic review process, we chose 28 articles relevant to the three themes for synthesis. We discuss the characteristics of misinformation in the three domains, the methodologies that have been used by researchers, and the theories used to study misinformation. We adapt an Antecedents-Misinformation-Outcomes (AMIO) framework for integrating key concepts from prior studies. Based on the AMIO framework, we further discuss the inter-relationships of concepts and the strategies to control the spread of misinformation on social media. Ours is one of the early reviews focusing on social media misinformation research, particularly on three socially sensitive domains: disaster, health, and politics. This review contributes to the emerging body of knowledge in data science and social media and informs strategies to combat social media misinformation.
Affiliation(s)
- Sadiq Muhammed T
- Department of Management Studies (DoMS), Indian Institute of Technology Madras, Chennai, Tamil Nadu 600036, India
- Saji K. Mathew
- Department of Management Studies (DoMS), Indian Institute of Technology Madras, Chennai, Tamil Nadu 600036, India
16
Liu J, Wright C, Williams P, Elizarova O, Dahne J, Bian J, Zhao Y, Tan ASL. Smokers' Likelihood to Engage With Information and Misinformation on Twitter About the Relative Harms of e-Cigarette Use: Results From a Randomized Controlled Trial. JMIR Public Health Surveill 2021; 7:e27183. [PMID: 34931999 PMCID: PMC8734921 DOI: 10.2196/27183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Revised: 04/06/2021] [Accepted: 10/27/2021] [Indexed: 11/28/2022] Open
Abstract
Background Information and misinformation on the internet about e-cigarette harms may increase smokers' misperceptions of e-cigarettes. There is limited research on smokers' engagement with information and misinformation about e-cigarettes on social media. Objective This study assessed smokers' likelihood to engage with (defined as replying, retweeting, liking, and sharing) tweets that contain information, misinformation, and uncertainty about the harms of e-cigarettes. Methods We conducted a web-based randomized controlled trial among 2400 UK and US adult smokers who did not vape in the past 30 days. Participants were randomly assigned to view four tweets in one of four conditions: (1) e-cigarettes are as harmful or more harmful than smoking, (2) e-cigarettes are completely harmless, (3) uncertainty about e-cigarette harms, or (4) control (physical activity). The outcome measure was participants' likelihood of engaging with tweets, which comprised the sum of whether they would reply, retweet, like, and share each tweet. We fitted Poisson regression models to predict the likelihood of engagement with tweets among 974 Twitter users and 1287 non-Twitter social media users, adjusting for covariates and stratified by UK and US participants. Results Among Twitter users, participants were more likely to engage with tweets in condition 1 (e-cigarettes are as harmful or more harmful than smoking) than in condition 2 (e-cigarettes are completely harmless). Among other social media users, participants were more likely to engage with tweets in condition 1 than in conditions 2 and 3 (e-cigarettes are completely harmless and uncertainty about e-cigarette harms). Conclusions Tweets stating information and misinformation that e-cigarettes were as harmful or more harmful than smoking regular cigarettes may receive higher engagement than tweets indicating e-cigarettes were completely harmless.
Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN) 16082420; https://doi.org/10.1186/ISRCTN16082420
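As a rough illustration of the outcome construction described in the abstract above (an engagement count formed by summing four binary actions) and of a Poisson model in its simplest form: with a single categorical predictor, the Poisson maximum-likelihood rate for each group is the group's mean count, so a rate ratio can be read off directly. The data and condition labels below are synthetic stand-ins, not the trial's numbers:

```python
# Hedged sketch: engagement outcome = sum of four binary actions per tweet,
# then a group-wise Poisson model, whose MLE rate is simply the group mean.

def engagement_count(reply, retweet, like, share):
    """Outcome used in the study: a 0-4 count of engagement actions."""
    return sum([reply, retweet, like, share])

def poisson_rate(counts):
    """The MLE of a Poisson rate for one group is the sample mean."""
    return sum(counts) / len(counts)

# Synthetic participants: (condition, reply, retweet, like, share)
data = [
    ("harmful", 1, 1, 1, 0), ("harmful", 1, 0, 1, 1),
    ("harmless", 0, 0, 1, 0), ("harmless", 0, 1, 0, 0),
]
by_cond = {}
for cond, *actions in data:
    by_cond.setdefault(cond, []).append(engagement_count(*actions))

rates = {c: poisson_rate(v) for c, v in by_cond.items()}
rate_ratio = rates["harmful"] / rates["harmless"]  # >1: more engagement
```

The actual study additionally adjusted for covariates, which requires a full Poisson regression rather than this group-means shortcut.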
Affiliation(s)
- Jessica Liu
- Department of Social and Behavioral Sciences, Harvard TH Chan School of Public Health, Boston, MA, United States
- Caroline Wright
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Philippa Williams
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Jennifer Dahne
- Department of Psychiatry and Behavioral Sciences, College of Medicine, Medical University of South Carolina, Charleston, SC, United States
- Jiang Bian
- Department of Health Outcomes & Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, United States
- Yunpeng Zhao
- Department of Health Outcomes & Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, United States
- Andy S L Tan
- Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA, United States
17
The Relationship among COVID-19 Information Seeking, News Media Use, and Emotional Distress at the Onset of the Pandemic. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph182413198. [PMID: 34948806 PMCID: PMC8701074 DOI: 10.3390/ijerph182413198] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/16/2021] [Revised: 12/04/2021] [Accepted: 12/10/2021] [Indexed: 12/28/2022]
Abstract
Although several theories posit that information seeking is related to better psychological health, this logic may not apply to a pandemic like COVID-19. Given uncertainty inherent to the novel virus, we expect that information seeking about COVID-19 will be positively associated with emotional distress. Additionally, we consider the type of news media from which individuals receive information—television, newspapers, and social media—when examining relationships with emotional distress. Using a U.S. national survey, we examine: (1) the link between information seeking about COVID-19 and emotional distress, (2) the relationship between reliance on television, newspapers, and social media as sources for news and emotional distress, and (3) the interaction between information seeking and use of these news media sources on emotional distress. Our findings show that seeking information about COVID-19 was significantly related to emotional distress. Moreover, even after accounting for COVID-19 information seeking, consuming news via television and social media was tied to increased distress, whereas consuming newspapers was not significantly related to greater distress. Emotional distress was most pronounced among individuals high in information seeking and television news use, whereas the association between information seeking and emotional distress was not moderated by newspapers or social media news use.
18
Zhou C, Li K, Lu Y. Linguistic characteristics and the dissemination of misinformation in social media: The moderating effect of information richness. Inf Process Manag 2021. [DOI: 10.1016/j.ipm.2021.102679] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
19
Ng TC, Teo CH, Toh JY, Dunn AG, Ng CJ, Ang TF, Abdullah A, Syed A, Lim HM, Yin K, Liew CS. Factors influencing healthcare seeking in patients with dengue: systematic review. Trop Med Int Health 2021; 27:13-27. [PMID: 34655508 DOI: 10.1111/tmi.13695] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE Delays in seeking healthcare for dengue are associated with poor health outcomes. Despite this, the factors influencing such delays remain unclear, rendering interventions to improve healthcare seeking for dengue ineffective. This systematic review aimed to synthesise the factors influencing healthcare seeking of patients with dengue and form a comprehensive framework. METHODS This review included both qualitative and quantitative studies. Studies were obtained by searching five databases, contacting field experts and performing backward reference searches. The best-fit meta-synthesis approach was used during data synthesis, where extracted data were fitted into the social-ecological model. Sub-analyses were conducted to identify the commonly reported factors and their level of statistical significance. RESULTS Twenty studies were selected for meta-synthesis. Eighteen factors influencing healthcare seeking in dengue were identified and categorised under four domains: individual (11 factors), interpersonal (one factor), organisational (four factors) and community (two factors). The most reported factors were knowledge of dengue, access to healthcare, quality of health service and resource availability. Overall, more barriers to dengue health seeking than facilitators were found. History of dengue infection and having knowledge of dengue were found to be ambiguous as they both facilitated and hindered dengue healthcare seeking. Contrary to common belief, women were less likely to seek help for dengue than men. CONCLUSIONS The factors affecting dengue healthcare-seeking behaviour are diverse, can be ambiguous and are found across multiple social-ecological levels. Understanding these complexities is essential for the development of effective interventions to improve dengue healthcare-seeking behaviour.
Affiliation(s)
- Tze Chang Ng
- Department of Computer System & Technology, Faculty of Computer Science & Information Technology, Universiti Malaya, Malaysia
- Chin Hai Teo
- University of Malaya eHealth Unit, Faculty of Medicine, Universiti Malaya, Malaysia
- Jia Yong Toh
- Department of Primary Care Medicine, Faculty of Medicine, Universiti Malaya, Malaysia
- Adam G Dunn
- Biomedical Informatics and Digital Health, School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Chirk Jenn Ng
- University of Malaya eHealth Unit, Faculty of Medicine, Universiti Malaya, Malaysia; Department of Primary Care Medicine, Faculty of Medicine, Universiti Malaya, Malaysia
- Tan Fong Ang
- Department of Computer System & Technology, Faculty of Computer Science & Information Technology, Universiti Malaya, Malaysia
- Adina Abdullah
- University of Malaya eHealth Unit, Faculty of Medicine, Universiti Malaya, Malaysia; Department of Primary Care Medicine, Faculty of Medicine, Universiti Malaya, Malaysia
- Ayeshah Syed
- Department of English Language, Faculty of Languages & Linguistics, Universiti Malaya, Malaysia
- Hooi Min Lim
- University of Malaya eHealth Unit, Faculty of Medicine, Universiti Malaya, Malaysia; Department of Primary Care Medicine, Faculty of Medicine, Universiti Malaya, Malaysia
- Kathleen Yin
- Centre of Health Informatics, Australian Institute of Health Innovation, Macquarie University, Australia
- Chee Sun Liew
- Department of Computer System & Technology, Faculty of Computer Science & Information Technology, Universiti Malaya, Malaysia; University of Malaya eHealth Unit, Faculty of Medicine, Universiti Malaya, Malaysia
20
Tomaszewski T, Morales A, Lourentzou I, Caskey R, Liu B, Schwartz A, Chin J. Identifying False Human Papillomavirus (HPV) Vaccine Information and Corresponding Risk Perceptions From Twitter: Advanced Predictive Models. J Med Internet Res 2021; 23:e30451. [PMID: 34499043 PMCID: PMC8461539 DOI: 10.2196/30451] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Accepted: 08/04/2021] [Indexed: 01/27/2023] Open
Abstract
Background The vaccination uptake rates of the human papillomavirus (HPV) vaccine remain low despite the fact that the effectiveness of HPV vaccines has been established for more than a decade. Vaccine hesitancy is in part due to false information about HPV vaccines on social media. Combating false HPV vaccine information is a reasonable step to addressing vaccine hesitancy. Objective Given the substantial harm of false HPV vaccine information, there is an urgent need to identify false social media messages before they go viral. The goal of the study is to develop a systematic and generalizable approach to identifying false HPV vaccine information on social media. Methods This study used machine learning and natural language processing to develop a series of classification models and causality mining methods to identify and examine true and false HPV vaccine–related information on Twitter. Results We found that the convolutional neural network model outperformed all other models in identifying tweets containing false HPV vaccine–related information (F score=91.95). We also developed completely unsupervised causality mining models to identify HPV vaccine candidate effects for capturing risk perceptions of HPV vaccines. Furthermore, we found that false information contained mostly loss-framed messages focusing on the potential risk of vaccines, covering a variety of topics using more diverse vocabulary, while true information contained both gain- and loss-framed messages focusing on the effectiveness of vaccines, covering fewer topics using relatively limited vocabulary. Conclusions Our research demonstrated the feasibility and effectiveness of using predictive models to identify false HPV vaccine information and its risk perceptions on social media.
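The F score reported above is the harmonic mean of precision and recall for the positive class. As a minimal reference computation (the labels below are made up for illustration, not the study's data):

```python
def f1_score(y_true, y_pred, positive=1):
    """F score = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 1 = tweet contains false HPV vaccine information, 0 = it does not
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1]
score = f1_score(y_true, y_pred)
```

Here precision and recall are both 0.75, giving an F score of 0.75; the study's 91.95 is this quantity expressed on a 0-100 scale.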
Affiliation(s)
- Tre Tomaszewski
- School of Information Sciences, University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Alex Morales
- Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Ismini Lourentzou
- Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, United States
- Rachel Caskey
- College of Medicine, University of Illinois at Chicago, Chicago, IL, United States
- Bing Liu
- Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States
- Alan Schwartz
- Department of Medical Education, University of Illinois at Chicago, Chicago, IL, United States
- Jessie Chin
- School of Information Sciences, University of Illinois at Urbana-Champaign, Champaign, IL, United States; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL, United States
21
Cultural Evolution and Digital Media: Diffusion of Fake News About COVID-19 on Twitter. SN COMPUTER SCIENCE 2021; 2:430. [PMID: 34485922 PMCID: PMC8397611 DOI: 10.1007/s42979-021-00836-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Accepted: 08/24/2021] [Indexed: 12/18/2022]
Abstract
Disinformation (fake news) is a major problem that affects modern populations, especially in an era when information can be spread from one corner of the world to another in just one click. The diffusion of misinformation becomes more problematic when it addresses issues related to health, as it can affect people at both the individual and population levels. Through the ideas proposed by cultural evolution theory, in this study we seek to understand the dynamics of disseminating messages (cultural traits) with untrue content (maladaptive traits). For our investigation, we used the scenario caused by the Coronavirus Disease 2019 (COVID-19) pandemic as a model. The instability caused by the pandemic provides a good model for the study of adaptive and maladaptive traits, as the information can directly affect individual and population fitness. Through data collected on the Twitter platform (259,176 tweets) and using machine learning techniques and web scraping, we built a predictive model to analyze the following questions: (1) Is false information shared more? (2) Is false information adopted more? (3) Do people with social prestige influence the dissemination of maladaptive traits about COVID-19? We observed that messages with false information were shared and adopted as though they were unblemished messages. We also observed that social prestige was not a determining factor in the diffusion of maladaptive traits. Even with the ability to allow connections between individuals participating in social media, some factors such as attachment to cultural traits and the formation of social bubbles can favor isolation and decrease connectivity between individuals. Consequently, in a scenario of isolation between groups and low connectivity between individuals, there is a reduction in cultural exchange between people, which interferes with the dynamics of the selection of cultural traits. Thus, maladaptive (harmful) traits are favored and maintained in the cultural system. We also argue that the local Brazilian cultural context can be a determining factor in maintaining maladaptive traits. We conclude that in an unstable (pandemic) scenario, the information transmitted on Twitter is not reliable in relation to increasing fitness, which may occur because of the low cultural exchange promoted by the personalization of the social network and the cultural context of the population. Supplementary Information The online version contains supplementary material available at 10.1007/s42979-021-00836-w.
22
Zhou C, Xiu H, Wang Y, Yu X. Characterizing the dissemination of misinformation on social media in health emergencies: An empirical study based on COVID-19. Inf Process Manag 2021; 58:102554. [PMID: 36570740 PMCID: PMC9758388 DOI: 10.1016/j.ipm.2021.102554] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Revised: 02/18/2021] [Accepted: 02/19/2021] [Indexed: 12/27/2022]
Abstract
The dissemination of misinformation in health emergencies poses serious threats to public health and increases health anxiety. To understand the underlying mechanism of the dissemination of misinformation regarding health emergencies, this study creatively draws on social support theory and text mining. It explores the roles of different types of misinformation, including health caution and advice misinformation and health help-seeking misinformation, and of emotional support in affecting individuals' misinformation dissemination behavior on social media, and whether such relationships are contingent on misinformation ambiguity and richness. The theoretical model is tested using 12,101 textual posts about COVID-19 collected from Sina Weibo, a leading social media platform in China. The empirical results show that health caution and advice misinformation, health help-seeking misinformation, and emotional support significantly increase the dissemination of misinformation. Furthermore, when the level of ambiguity and richness regarding misinformation is high, the effect of health caution and advice misinformation is strengthened, whereas the effect of health help-seeking misinformation and emotional support is weakened, indicating that misinformation ambiguity and richness have both dark and bright sides. This study contributes to the literature on misinformation dissemination behavior on social media during health emergencies and to social support theory, and provides implications for practice.
23
Abdelminaam DS, Ismail FH, Taha M, Taha A, Houssein EH, Nabil A. CoAID-DEEP: An Optimized Intelligent Framework for Automated Detecting COVID-19 Misleading Information on Twitter. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:27840-27867. [PMID: 34786308 PMCID: PMC8545243 DOI: 10.1109/access.2021.3058066] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 01/26/2021] [Indexed: 05/05/2023]
Abstract
COVID-19 has affected all people's lives. Though COVID-19 is on the rise, misinformation about the virus grows in parallel. Additionally, the spread of misinformation has created confusion among people, caused disturbances in society, and even led to deaths. Social media is central to our daily lives, and the Internet has become a significant source of knowledge. Owing to the widespread damage caused by fake news, it is important to build computerized systems to detect it. This paper proposes an updated deep neural network for the identification of false news. The deep learning techniques are the Modified LSTM (one to three layers) and the Modified GRU (one to three layers). In particular, we carry out investigations of a large dataset of tweets conveying data with respect to COVID-19. In our study, we classify the dubious claims into two categories: true and false. We compare the performance of the proposed approaches with six machine learning techniques: decision tree (DT), logistic regression (LR), k-nearest neighbors (KNN), random forest (RF), support vector machine (SVM), and naïve Bayes (NB). The parameters of the deep learning techniques are optimized using Keras-tuner. Four benchmark datasets were used. TF-IDF with n-gram features was used to extract essential features from the four benchmark datasets for the baseline machine learning models, and a word embedding feature extraction method was used for the proposed deep neural network methods. The results obtained with the proposed framework reveal high accuracy in detecting fake and non-fake tweets containing COVID-19 information, a significant improvement over the existing state-of-the-art results of the baseline machine learning models.
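The TF-IDF with n-gram feature extraction mentioned above for the baseline models can be sketched in pure Python. This is a toy illustration of the general technique under its textbook definition (tf × log(N/df)), not the paper's pipeline; the two example documents are invented:

```python
import math

def ngrams(tokens, n_max=2):
    """Unigrams and bigrams, mirroring a TF-IDF 'n-gram' feature set."""
    feats = []
    for n in range(1, n_max + 1):
        feats += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return feats

def tfidf(docs):
    """Term frequency times inverse document frequency over n-gram features."""
    tokenized = [ngrams(d.lower().split()) for d in docs]
    vocab = sorted({g for t in tokenized for g in t})
    N = len(docs)
    # Document frequency: number of documents containing each n-gram.
    df = {g: sum(1 for t in tokenized if g in t) for g in vocab}
    vectors = []
    for t in tokenized:
        vec = {}
        for g in set(t):
            tf = t.count(g) / len(t)          # normalized term frequency
            vec[g] = tf * math.log(N / df[g])  # weight down common n-grams
        vectors.append(vec)
    return vectors, vocab

docs = ["covid vaccine is safe", "covid vaccine causes harm"]
vectors, vocab = tfidf(docs)
```

Note that n-grams appearing in every document get weight zero under this unsmoothed definition; practical libraries typically smooth the IDF term to avoid discarding them entirely.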
Affiliation(s)
- Diaa Salama Abdelminaam
- Faculty of Computers and Artificial Intelligence, Benha University, Benha 13511, Egypt
- Faculty of Computer Science, Misr International University, Cairo 11341, Egypt
- Mohamed Taha
- Faculty of Computers and Artificial Intelligence, Benha University, Benha 13511, Egypt
- Ahmed Taha
- Faculty of Computers and Artificial Intelligence, Benha University, Benha 13511, Egypt
- Ayman Nabil
- Faculty of Computer Science, Misr International University, Cairo 11341, Egypt
24
Zhao Y, Da J, Yan J. Detecting health misinformation in online health communities: Incorporating behavioral features into machine learning based approaches. Inf Process Manag 2021. [DOI: 10.1016/j.ipm.2020.102390] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
25
Jha D, Singh R. Analysis of associations between emotions and activities of drug users and their addiction recovery tendencies from social media posts using structural equation modeling. BMC Bioinformatics 2020; 21:554. [PMID: 33375934 PMCID: PMC7772931 DOI: 10.1186/s12859-020-03893-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Accepted: 11/18/2020] [Indexed: 12/03/2022] Open
Abstract
Background Addiction to drugs and alcohol constitutes one of the significant factors underlying the decline in life expectancy in the US. Several context-specific reasons influence drug use and recovery. In particular, emotional distress, physical pain, relationships, and self-development efforts are known to be some of the factors associated with addiction recovery. Unfortunately, many of these factors are not directly observable, and quantifying and assessing their impact can be difficult. Based on social media posts of users engaged in substance use and recovery on the forum Reddit, we employed two psycholinguistic tools, Linguistic Inquiry and Word Count and Empath, together with the activities of substance users on various Reddit sub-forums, to analyze behavior underlying addiction recovery and relapse. We then employed a statistical analysis technique called structural equation modeling to assess the effects of these latent factors on recovery and relapse. Results We found that both emotional distress and physical pain significantly influence addiction recovery behavior. Self-development activities and social relationships of the substance users were also found to enable recovery. Furthermore, within the context of self-development activities, those related to the mental and physical well-being of substance users were found to be positively associated with addiction recovery. We also determined that a lack of social activities and physical exercise can enable a relapse. Moreover, geography, especially life in rural areas, appears to have a greater correlation with addiction relapse. Conclusions The paper describes how observable variables can be extracted from social media and then used to model important latent constructs that impact addiction recovery and relapse. We also report factors that impact self-induced addiction recovery and relapse.
To the best of our knowledge, this paper represents the first use of structural equation modeling of social media data with the goal of analyzing factors influencing addiction recovery.
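Structural equation modeling estimates relationships among latent constructs (such as emotional distress) that are measured only through observable indicators. The sketch below is not SEM itself and is not the paper's analysis; it uses factor analysis, a simpler related latent-variable technique, on fully synthetic data to show how a hidden construct can be recovered from correlated observable features. The indicator names in the comments are hypothetical.

```python
# Illustrative latent-construct recovery: a hidden "distress" variable is
# estimated from noisy observable indicators via factor analysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
distress = rng.normal(size=n)                    # hidden latent construct
observed = np.column_stack([                     # noisy indicators, e.g.:
    1.0 * distress + 0.3 * rng.normal(size=n),   #   a negative-emotion score
    0.8 * distress + 0.3 * rng.normal(size=n),   #   a "pain" lexicon score
    0.6 * distress + 0.3 * rng.normal(size=n),   #   posting frequency
])

fa = FactorAnalysis(n_components=1)
scores = fa.fit_transform(observed)              # estimated latent scores

# The recovered factor should track the true latent construct closely.
corr = abs(np.corrcoef(scores[:, 0], distress)[0, 1])
print(round(corr, 2))
```

SEM goes further by simultaneously fitting regressions among several such latent variables, which is what lets the paper estimate the effect of distress, pain, and relationships on recovery.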
Affiliation(s)
- Deeptanshu Jha
- Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA, 94132, USA
- Rahul Singh
- Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA, 94132, USA
26
Warner EL, Waters AR, Cloyes KG, Ellington L, Kirchhoff AC. Young adult cancer caregivers' exposure to cancer misinformation on social media. Cancer 2020; 127:1318-1324. [PMID: 33368215 DOI: 10.1002/cncr.33380] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2020] [Revised: 08/30/2020] [Accepted: 10/12/2020] [Indexed: 01/21/2023]
Abstract
BACKGROUND The objective of this study was to describe young adult cancer caregivers' exposure to cancer misinformation on social media. METHODS Eligible participants were 18 to 39 years old, used social media weekly, and cared for an adult patient with cancer diagnosed 6 months to 5 years before (N = 21). Recruitment occurred from August 2017 to June 2018 in person at oncology clinics and online. Semistructured telephone interviews were recorded and analyzed with grounded theory methods through 2 rounds of coding (κ = 0.88). RESULTS Caregivers were on average 29.1 years old, female (76.2%), non-Hispanic White (90.5%), college educated (57.1%), and caring for a spouse/partner (47.6%). Caregivers were exposed to a variety of cancer misinformation topics; some felt uncertain in their caregiving ability when confronted with cancer misinformation. Caregivers crosschecked online sources and consulted trusted individuals (eg, family, friends, and health care providers) to determine the quality of cancer-related information and manage cancer misinformation. CONCLUSIONS Cancer misinformation on social media may influence caregiving behaviors and decision making. Whether caregivers correctly and consistently identify cancer misinformation on social media is unknown. Supporting digital literacy to mitigate cancer misinformation on social media may improve young cancer caregivers' access to credible cancer information. LAY SUMMARY Cancer misinformation on social media affects young adult cancer caregivers' perceptions of their social support network. Caregivers differ in how they evaluate cancer information for quality and in how this information influences their health behaviors and caregiving decisions.
Affiliation(s)
- Echo L Warner
- Cancer Prevention and Control Program, University of Arizona Cancer Center, University of Arizona, Tucson, Arizona
- Division of Community and Systems Health Sciences, College of Nursing, University of Arizona, Tucson, Arizona
- Austin R Waters
- Cancer Control and Population Sciences Research Program, Huntsman Cancer Institute, Salt Lake City, Utah
- Lee Ellington
- Cancer Control and Population Sciences Research Program, Huntsman Cancer Institute, Salt Lake City, Utah
- College of Nursing, University of Utah, Salt Lake City, Utah
- Anne C Kirchhoff
- Cancer Control and Population Sciences Research Program, Huntsman Cancer Institute, Salt Lake City, Utah
- Pediatric Hematology/Oncology, Department of Pediatrics, University of Utah, Salt Lake City, Utah
27
Addressing Misinformation in Online Social Networks: Diverse Platforms and the Potential of Multiagent Trust Modeling. INFORMATION 2020. [DOI: 10.3390/info11110539] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
In this paper, we explore how various social networking platforms currently support the spread of misinformation. We then examine the potential of a few specific multiagent trust modeling algorithms from artificial intelligence for detecting that misinformation. Our investigation reveals that the specific requirements of each environment may call for distinct processing solutions. This leads to a higher-level proposal for the actions to be taken in order to judge trustworthiness. Our final reflection concerns what information should be provided to users once there are suspected misleading posts. Our aim is to enlighten both the organizations that host social networking and the users of those platforms, and to promote steps toward more pro-social behaviour in these environments. Looking ahead to the growing need to address this vital topic, we also reflect on two related topics of possible interest: the case of older adult users and the potential to track misinformation through dedicated data science studies, of particular use for healthcare.
28
Song S, Zhang Y, Yu B. Interventions to support consumer evaluation of online health information credibility: A scoping review. Int J Med Inform 2020; 145:104321. [PMID: 33202372 DOI: 10.1016/j.ijmedinf.2020.104321] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2020] [Revised: 10/25/2020] [Accepted: 10/26/2020] [Indexed: 10/23/2022]
Abstract
PURPOSE Various interventions have been designed to help consumers better evaluate the credibility of online health information (OHI). However, assessing information credibility remains the challenge most widely reported by online health consumers. This review provides an overview of major intervention approaches for improving consumers' ability to evaluate OHI credibility, in order to identify opportunities for future interventions. METHODS A scoping review was performed. Seven relevant scientific databases were searched to identify articles that report the design and/or evaluation of interventions to support, facilitate, or assist consumers in assessing the credibility of OHI. Thirty-one articles met the inclusion criteria. Relevant content was extracted from the articles, and all codes were validated by second coders. RESULTS Three major intervention approaches for enhancing consumers' ability to evaluate OHI credibility were identified: educational programs, algorithms, and interactive interfaces. The design of most interventions (particularly the credibility evaluation component) lacked the guidance of theory, and very few studies systematically evaluated their effectiveness in real online search contexts. Few interventions can provide spontaneous support to consumers while they search online. CONCLUSION Our understanding of which theoretical constructs contribute to effective OHI credibility evaluation interventions, and of how intervention outcomes should be measured, remains limited. Future efforts need to focus on the design, development, testing, and evaluation of theory-guided OHI credibility evaluation interventions that are scalable, sustainable, and able to provide real-time support to consumers.
Affiliation(s)
- Shijie Song
- School of Information Management, Nanjing University, Nanjing 210023, China
- Yan Zhang
- School of Information, The University of Texas at Austin, TX 78701, United States
- Bei Yu
- School of Information Studies, Syracuse University, NY 13244, United States
29
HCI for biomedical decision-making: From diagnosis to therapy. J Biomed Inform 2020; 111:103593. [PMID: 33069887 DOI: 10.1016/j.jbi.2020.103593] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Accepted: 10/06/2020] [Indexed: 01/08/2023]
30
Abstract
Mass media content accompanied by false information is hindering efforts to cope with the current outbreak. Although the World Health Organization and other concerned bodies have been notified of this misinformation, myths and rumors remain highly prevalent. This paper aims to highlight misinformation and its potential impacts during Covid-19 using a systematic review approach. The researcher randomly selected 35 research articles published from 2015 to 2020 that document misinformation as a major concern during previous epidemics and the current Covid-19 pandemic. Myths and rumors spread through traditional and new media platforms cause xenophobia, violations of LGBT rights, and psychological disorders among the masses. Despite the efforts made by the World Health Organization, much more is required to nullify the impacts of misinformation and Covid-19. The researcher therefore recommends improved global healthcare policies and strategies to counteract misinformation and mitigate the impacts of Covid-19.
31
López-Pujalte C, Nuño-Moral MV. La “infodemia” en la crisis del coronavirus: Análisis de desinformaciones en España y Latinoamérica. REVISTA ESPANOLA DE DOCUMENTACION CIENTIFICA 2020. [DOI: 10.3989/redc.2020.3.1807] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
This study analyzes the disinformation spread about the coronavirus in Spain and Latin America between 23/01/2020 and 03/05/2020. The following data are studied quantitatively: the volume of disinformation per country, its evolution over time, the type of disinformation, the dissemination channel, the sources, and the networks through which hoaxes circulated between countries. For Spain, the correlation between the production of disinformation and the evolution of the pandemic, as well as internet search trends about the coronavirus, is also examined.
The results clearly show that the evolution of the pandemic influences the propagation of hoaxes, which surge at critical moments such as the WHO's declaration of a pandemic and, in the case of Spain, during the periods of highest growth rate of the curve, at which time searches on the topic also reach their peak popularity.
32
Morley J, Cowls J, Taddeo M, Floridi L. Public Health in the Information Age: Recognizing the Infosphere as a Social Determinant of Health. J Med Internet Res 2020; 22:e19311. [PMID: 32648850 PMCID: PMC7402642 DOI: 10.2196/19311] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Revised: 06/11/2020] [Accepted: 07/08/2020] [Indexed: 02/07/2023] Open
Abstract
Since 2016, social media companies and news providers have come under pressure to tackle the spread of political mis- and disinformation (MDI) online. However, despite evidence that online health MDI (on the web, on social media, and within mobile apps) also has negative real-world effects, there has been a lack of comparable action by either online service providers or state-sponsored public health bodies. We argue that this is problematic and seek to answer three questions: why has so little been done to control the flow of, and exposure to, health MDI online; how might more robust action be justified; and what specific, newly justified actions are needed to curb the flow of, and exposure to, online health MDI? In answering these questions, we show that four ethical concerns-related to paternalism, autonomy, freedom of speech, and pluralism-are partly responsible for the lack of intervention. We then suggest that these concerns can be overcome by relying on four arguments: (1) education is necessary but insufficient to curb the circulation of health MDI, (2) there is precedent for state control of internet content in other domains, (3) network dynamics adversely affect the spread of accurate health information, and (4) justice is best served by protecting those susceptible to inaccurate health information. These arguments provide a strong case for classifying the quality of the infosphere as a social determinant of health, thus making its protection a public health responsibility. In addition, they offer a strong justification for working to overcome the ethical concerns associated with state-led intervention in the infosphere to protect public health.
Collapse
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Josh Cowls
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Alan Turing Institute, London, United Kingdom
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Alan Turing Institute, London, United Kingdom
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Alan Turing Institute, London, United Kingdom
33
Rundo L, Pirrone R, Vitabile S, Sala E, Gambino O. Recent advances of HCI in decision-making tasks for optimized clinical workflows and precision medicine. J Biomed Inform 2020; 108:103479. [DOI: 10.1016/j.jbi.2020.103479] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Revised: 04/27/2020] [Accepted: 06/06/2020] [Indexed: 12/28/2022]
34
Wu L, Rao Y, Nazir A, Jin H. Discovering differential features: Adversarial learning for information credibility evaluation. Inf Sci (N Y) 2020. [DOI: 10.1016/j.ins.2019.12.040] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
35
Albalawi Y, Nikolov NS, Buckley J. Trustworthy Health-Related Tweets on Social Media in Saudi Arabia: Tweet Metadata Analysis. J Med Internet Res 2019; 21:e14731. [PMID: 31596242 PMCID: PMC6914129 DOI: 10.2196/14731] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2019] [Revised: 09/02/2019] [Accepted: 09/03/2019] [Indexed: 01/08/2023] Open
Abstract
Background Social media platforms play a vital role in the dissemination of health information. However, evidence suggests that a high proportion of Twitter posts (ie, tweets) are not necessarily accurate, and many studies suggest that tweets do not need to be accurate, or at least evidence based, to receive traction. This is a dangerous combination in the sphere of health information. Objective The first objective of this study is to examine health-related tweets originating from Saudi Arabia in terms of their accuracy. The second objective is to find factors that relate to the accuracy and dissemination of these tweets, thereby enabling the identification of ways to enhance the dissemination of accurate tweets. The initial findings from this study and methodological improvements will then be employed in a larger-scale study that will address these issues in more detail. Methods A health lexicon was used to extract health-related tweets using the Twitter application programming interface and the results were further filtered manually. A total of 300 tweets were each labeled by two medical doctors; the doctors agreed that 109 tweets were either accurate or inaccurate. Other measures were taken from these tweets’ metadata to see if there was any relationship between the measures and either the accuracy or the dissemination of the tweets. The entire range of this metadata was analyzed using Python, version 3.6.5 (Python Software Foundation), to answer the research questions posed. Results A total of 34 out of 109 tweets (31.2%) in the dataset used in this study were classified as untrustworthy health information. These came mainly from users with a non-health care background and social media accounts that had no corresponding physical (ie, organization) manifestation. Unsurprisingly, we found that traditionally trusted health sources were more likely to tweet accurate health information than other users. 
Likewise, these provisional results suggest that tweets posted in the morning are more trustworthy than tweets posted at night, possibly corresponding to official and casual posts, respectively. Our results also suggest that the crowd was quite good at identifying trustworthy information sources, as evidenced by the number of times a tweet's author was favorited by the community. Conclusions The results indicate some initially surprising factors that might correlate with the accuracy of tweets and their dissemination. For example, the time a tweet was posted correlated with its accuracy, which may reflect a difference between professional (ie, morning) and hobbyist (ie, evening) tweets. More surprisingly, tweets containing a kashida (a decorative element in Arabic writing used to justify the text within lines) were more likely to be disseminated through retweets. These findings will be further assessed using data analysis techniques on a much larger dataset in future work.
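The kashida finding above is operationalizable because the kashida is an explicit Unicode character (tatweel, U+0640), so flagging tweets that contain it is a one-line feature. This is a minimal sketch, not the study's code, and the elongated sample word is a made-up example.

```python
# Detect the decorative kashida/tatweel character (U+0640) in tweet text.
KASHIDA = "\u0640"

def contains_kashida(text: str) -> bool:
    """True if the text uses the decorative kashida/tatweel character."""
    return KASHIDA in text

elongated = "صح" + KASHIDA * 3 + "ة"   # an artificially elongated word
plain = "صحة"
print(contains_kashida(elongated))  # True
print(contains_kashida(plain))      # False
```

A boolean feature like this could then be tested for association with retweet counts in the tweet-metadata analysis the abstract describes.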
Affiliation(s)
- Yahya Albalawi
- Department of Computer Science and Information Systems, University of Limerick, Limerick, Ireland
- Department of Computer and Information Sciences, College of Arts and Science, University of Taibah, Al-Ula, Saudi Arabia
- The Irish Software Research Centre, Lero, University of Limerick, Limerick, Ireland
- Nikola S Nikolov
- Department of Computer Science and Information Systems, University of Limerick, Limerick, Ireland
- The Irish Software Research Centre, Lero, University of Limerick, Limerick, Ireland
- Jim Buckley
- Department of Computer Science and Information Systems, University of Limerick, Limerick, Ireland
- The Irish Software Research Centre, Lero, University of Limerick, Limerick, Ireland