1. Shen C, He P, Song Z, Zhang Y. Cognitive disparity in online rumor perception: a group analysis during COVID-19. BMC Public Health 2024;24:3049. PMID: 39501216; PMCID: PMC11536839; DOI: 10.1186/s12889-024-20549-y.
Abstract
BACKGROUND Cognitive alignment among netizen groups is pivotal to the spread and amplification of online rumors. This alignment, characterized by shared cognitive inclinations, fosters uniformity in attitudes and perspectives and thereby precipitates synchronized engagement in rumor dissemination. Notably, discernible disparities in group cognition emerge as different types of rumors about the same event propagate. This research dissects the roles of netizen groups through the lens of cognitive variance, aiming for a deeper understanding of the distinctive traits and behavioral dynamics of netizen factions in online rumor dissemination. METHODS By integrating Bloom's taxonomy into a survey questionnaire, this study captured netizens' cognitive responses to various online rumor themes across two dimensions: (1) Information Cognition, exploring cognitive processing levels from basic recall to application and analysis, and (2) Attitude Change, evaluating higher-order cognitive processes such as evaluating and creating in response to complex rumor scenarios. A decision tree classification algorithm was applied to identify the catalysts behind netizens' cognitive shifts, and the K-Means clustering algorithm was used to categorize netizen groups along thematic lines. RESULTS The initial impression of a rumor significantly influences netizens' final cognitive perceptions. Twelve characteristics were observed in netizen groups during the dissemination of rumors on different themes, and these groups were classified into four categories based on their cognitive differences: knowledge-oriented, competition-oriented, social-oriented, and entertainment-oriented.
CONCLUSIONS Throughout the lifecycle of online rumors, from inception to dissemination, diverse netizen groups assume distinct roles, each exerting a unique influence on the spread and reception of information. By implementing tailored governance strategies that are sensitive to the characteristics of these groups, it is possible to attain substantially more effective outcomes in managing the propagation of online rumors. This nuanced approach to governance recognizes the heterogeneity of the online community and leverages it to enhance the efficacy of interventions.
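The clustering step this abstract describes can be illustrated with a minimal K-Means sketch in pure Python. The respondent feature vectors and the two-group setup below are hypothetical stand-ins, not the study's survey data (which clustered netizens into four thematic categories):

```python
# Minimal K-Means (Lloyd's algorithm) sketch for grouping respondents by
# cognitive-response features, in the spirit of the study's clustering step.
# Feature vectors are hypothetical (information-cognition score,
# attitude-change score), each on a 0-1 scale.

def kmeans(points, centroids, iters=20):
    """Lloyd's algorithm with fixed initial centroids for reproducibility."""
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

respondents = [(0.9, 0.8), (0.85, 0.9), (0.2, 0.1), (0.15, 0.2)]
cents, groups = kmeans(respondents, centroids=[(1.0, 1.0), (0.0, 0.0)])
print(len(groups[0]), len(groups[1]))  # two groups of two respondents
```

In the study's setting, the number of centroids would match the thematic groups sought, and the features would come from the Bloom's-taxonomy questionnaire scores.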
Affiliation(s)
- Chao Shen, School of Management, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Pengyu He, School of Management, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Zhenyu Song, School of Management, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
- Yimeng Zhang, School of Management, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
2. Lu J, Zhang H, Xiao Y, Wang Y. An Environmental Uncertainty Perception Framework for Misinformation Detection and Spread Prediction in the COVID-19 Pandemic: Artificial Intelligence Approach. JMIR AI 2024;3:e47240. PMID: 38875583; PMCID: PMC11041461; DOI: 10.2196/47240.
Abstract
BACKGROUND Amidst the COVID-19 pandemic, misinformation on social media has posed significant threats to public health. Detecting and predicting the spread of misinformation are crucial for mitigating its adverse effects. However, prevailing frameworks for these tasks have predominantly focused on post-level signals of misinformation, neglecting features of the broader information environment where misinformation originates and proliferates. OBJECTIVE This study aims to create a novel framework that integrates the uncertainty of the information environment into misinformation features, with the goal of enhancing model accuracy in misinformation detection and in predicting the scale of dissemination, thereby providing better support for online governance efforts during health crises. METHODS We embraced uncertainty features within the information environment and introduced a novel Environmental Uncertainty Perception (EUP) framework for detecting misinformation and predicting its spread on social media. The framework encompasses uncertainty at 4 scales of the information environment: physical environment, macro-media environment, micro-communicative environment, and message framing. We assessed the effectiveness of the EUP using real-world COVID-19 misinformation data sets. RESULTS The EUP alone achieved notably good performance, with detection accuracy of 0.753 and prediction accuracy of 0.71. These results were comparable to state-of-the-art baseline models such as bidirectional long short-term memory (BiLSTM; detection accuracy 0.733, prediction accuracy 0.707) and bidirectional encoder representations from transformers (BERT; detection accuracy 0.755, prediction accuracy 0.728). When the baseline models were combined with the EUP, their accuracy improved by an average of 1.98% on misinformation detection and 2.4% on spread prediction. On unbalanced data sets, the EUP yielded relative improvements of 21.5% and 5.7% in macro-F1-score and area under the curve, respectively. CONCLUSIONS This study contributes to the literature by recognizing uncertainty features within information environments as a crucial factor for improving misinformation detection and spread-prediction algorithms during the pandemic. It elaborates on the complexities of uncertain information environments for misinformation across 4 distinct scales: the physical environment, macro-media environment, micro-communicative environment, and message framing. The findings underscore the effectiveness of incorporating uncertainty into misinformation detection and spread prediction, providing an interdisciplinary and easily implementable framework for the field.
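One way to picture an "environmental uncertainty" feature of the kind the EUP framework encodes is Shannon entropy over a topic distribution. The topic shares below are invented for illustration; the EUP's actual feature definitions are in the paper:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (bits) of a discrete distribution; higher values
    indicate a more uncertain (mixed) information environment."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical topic shares of the macro-media environment on two days.
calm_day  = [0.97, 0.01, 0.01, 0.01]   # one dominant narrative: low entropy
mixed_day = [0.25, 0.25, 0.25, 0.25]   # maximally mixed coverage: 2 bits

print(round(shannon_entropy(calm_day), 3))
print(round(shannon_entropy(mixed_day), 3))
```

Such a scalar could be appended to a post's feature vector before classification, which is the general pattern by which the EUP augments baseline detectors.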
Affiliation(s)
- Jiahui Lu, State Key Laboratory of Communication Content Cognition, People's Daily Online, Beijing, China; School of New Media and Communication, Tianjin University, Tianjin, China
- Huibin Zhang, School of New Media and Communication, Tianjin University, Tianjin, China
- Yi Xiao, School of New Media and Communication, Tianjin University, Tianjin, China
- Yingyu Wang, School of New Media and Communication, Tianjin University, Tianjin, China
3
|
Duffy FF, McDonnell GP, Auslander MV, Bricault SA, Kim PY, Rachlin NW, Quartana PJ. US Soldiers' Individual and Unit-level Factors Associated with Perceptions of Disinformation in the Military Context. Mil Med 2023; 188:698-708. [PMID: 37948291 DOI: 10.1093/milmed/usad322] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 05/01/2023] [Accepted: 07/31/2023] [Indexed: 11/12/2023] Open
Abstract
INTRODUCTION Although the US Government considers threats of misinformation, disinformation, and mal-information to rise to the level of terrorism, little is known about service members' experiences with disinformation in the military context. We examined soldiers' perceptions of disinformation's impact on the Army and their units. We also investigated associations between disinformation perceptions and soldiers' sociodemographic characteristics, reported use of fact-checking, and perceptions of unit cohesion and readiness. METHODS Active-duty soldiers (N = 19,465) across two large installations in the Southwest US completed an anonymous online survey. RESULTS Sixty-six percent of soldiers agreed that disinformation has a negative impact on the Army, and 33% perceived disinformation as a problem in their unit. Female soldiers were more likely to agree on both counts. Higher military rank was associated with lower odds of agreeing that disinformation is a problem in units. Most soldiers were confident in their ability to recognize disinformation (62%) and reported using fact-checking resources (53%); these factors were most often endorsed by soldiers who agreed that disinformation is a problem for the Army and their unit. Soldiers' perceptions of unit cohesion and readiness were negatively associated with the perception that disinformation is a problem in their unit. CONCLUSION While the majority of soldiers viewed disinformation as a problem across the Army, fewer perceived it as problematic within their own units. Higher levels of reported fact-checking were most evident among those who perceived disinformation as a problem, suggesting that enhancing awareness of the problem alone could help mitigate its deleterious impact. Perceptions of disinformation problems within units were associated with perceptions of lower unit cohesion and readiness, highlighting the impact of misinformation, disinformation, and mal-information on force readiness. Limitations and future directions are discussed.
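Associations of the form "higher rank, lower odds of agreeing" are naturally expressed as odds ratios. A minimal sketch with hypothetical counts (not the survey's data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
        a = exposed & outcome,   b = exposed & no outcome
        c = unexposed & outcome, d = unexposed & no outcome
    OR < 1 means the exposure is associated with lower odds of the outcome."""
    return (a * d) / (b * c)

# Hypothetical counts: senior-rank soldiers agreeing vs. not agreeing that
# disinformation is a problem in their unit, compared with junior ranks.
senior_agree, senior_disagree = 30, 70
junior_agree, junior_disagree = 45, 55

or_rank = odds_ratio(senior_agree, senior_disagree, junior_agree, junior_disagree)
print(or_rank < 1)  # below 1: higher rank, lower odds of agreeing
```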
Affiliation(s)
- Farifteh Firoozmand Duffy, Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
- Gerald P McDonnell, Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
- Margeaux V Auslander, Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
- Stephanie A Bricault, Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
- Phillip J Quartana, Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
4. Jing J, Wu H, Sun J, Fang X, Zhang H. Multimodal fake news detection via progressive fusion networks. Inf Process Manag 2023. DOI: 10.1016/j.ipm.2022.103120.
5. Li J, Chang X. Combating Misinformation by Sharing the Truth: a Study on the Spread of Fact-Checks on Social Media. Information Systems Frontiers 2022;25:1-15. PMID: 35729965; PMCID: PMC9188446; DOI: 10.1007/s10796-022-10296-z.
Abstract
Misinformation on social media has become a horrendous problem in our society. Fact-checks on information often fall behind the diffusion of misinformation, which can lead to negative impacts on society. This research studies how different factors may affect the spread of fact-checks over the internet. We collected a dataset of fact-checks in a six-month period and analyzed how they spread on Twitter. The spread of fact-checks is measured by the total retweet count. The factors/variables include the truthfulness rating, topic of information, source credibility, etc. The research identifies truthfulness rating as a significant factor: conclusive fact-checks (either true or false) tend to be shared more than others. In addition, the source credibility, political leaning, and the sharing count also affect the spread of fact-checks. The findings of this research provide practical insights into accelerating the spread of the truth in the battle against misinformation online.
Affiliation(s)
- Jiexun Li, Western Washington University, Bellingham, WA 98225, USA
6. Wang X, Zhang M, Fan W, Zhao K. Understanding the spread of COVID-19 misinformation on social media: The effects of topics and a political leader's nudge. J Assoc Inf Sci Technol 2022;73:726-737. PMID: 34901312; PMCID: PMC8653058; DOI: 10.1002/asi.24576.
Abstract
The spread of misinformation on social media has become a major societal issue in recent years. In this work, we used the ongoing COVID-19 pandemic as a case study to systematically investigate, based on the heuristic-systematic model, factors associated with the spread of multi-topic misinformation related to one event on social media. Among factors related to systematic processing of information, we discovered that the topics of a misinformation story matter, with conspiracy theories being the most likely to be retweeted. As for factors related to heuristic processing of information, such as when citizens look up to their leaders during a crisis, our results demonstrated that the behaviors of a political leader, former US President Donald J. Trump, may have nudged people's sharing of COVID-19 misinformation. The outcomes of this study help social media platforms and users better understand and prevent the spread of misinformation.
Affiliation(s)
- Xiangyu Wang, Interdisciplinary Graduate Program in Informatics, The University of Iowa, Iowa City, IA, USA
- Min Zhang, Interdisciplinary Graduate Program in Informatics, The University of Iowa, Iowa City, IA, USA
- Weiguo Fan, Business Analytics, Tippie College of Business, The University of Iowa, Iowa City, IA, USA
- Kang Zhao, Business Analytics, Tippie College of Business, The University of Iowa, Iowa City, IA, USA
7. Li Y, Fan Z, Yuan X, Zhang X. Recognizing fake information through a developed feature scheme: A user study of health misinformation on social media in China. Inf Process Manag 2022. DOI: 10.1016/j.ipm.2021.102769.
8. Fedoruk B, Nelson H, Frost R, Fucile Ladouceur K. The Plebeian Algorithm: A Democratic Approach to Censorship and Moderation. JMIR Form Res 2021;5:e32427. PMID: 34854812; PMCID: PMC8691413; DOI: 10.2196/32427.
Abstract
Background The infodemic accompanying the COVID-19 pandemic has created several societal issues, including a rise in distrust between the public and health experts, and even a refusal by some to accept vaccination; some sources suggest that 1 in 4 Americans will refuse the vaccine. This social concern can be traced to today's level of digitization, particularly in the form of social media. Objective The goal of the research was to abstract a set of aspects of an optimal social media algorithm, one able to reduce the spread of misinformation while ensuring that certain individual freedoms (eg, the freedom of expression) are maintained. Methods As social media was the most significant contributing factor to the spread of misinformation, the team examined infodemiology across various text-based platforms (Twitter, 4chan, Reddit, Parler, Facebook, and YouTube). Sentiment analysis was used to compare general posts with posts containing key terms flagged as misinformation (all concerning COVID-19) to determine their verity. In gathering the data sets, both application programming interfaces (installed using Python's pip) and pre-existing data compiled by standard scientific third parties were used. Results The sentiment on each platform can be described by a bimodal distribution, with a positive and a negative peak, as well as a skewness. In some cases, misinforming posts showed up to 92.5% more negative sentiment skew than accurate posts. Conclusions From this, the novel Plebeian Algorithm is proposed, which uses sentiment analysis and post popularity as metrics to flag a post as misinformation. The algorithm diverges from the status quo in using a democratic process to detect and remove misinformation: content deemed misinformation is removed on the decision of a randomly selected jury of anonymous users. This not only curbs such infodemics but also guarantees a more democratic way of using social media, beneficial for repairing social trust and encouraging the public's evidence-informed decision-making.
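The sentiment-skew signal at the heart of the Plebeian Algorithm can be sketched with the Fisher-Pearson sample skewness. The sentiment scores below are hypothetical, not the study's platform data:

```python
def skewness(xs):
    """Fisher-Pearson sample skewness; more negative values indicate a
    longer tail of strongly negative sentiment scores."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5

# Hypothetical sentiment scores in [-1, 1] for two sets of posts.
accurate_posts = [0.2, 0.1, 0.0, 0.3, -0.1, 0.2]
misinfo_posts  = [0.1, 0.0, -0.9, 0.1, -0.8, 0.2]

print(skewness(misinfo_posts) < skewness(accurate_posts))  # True
```

A flagging rule in this spirit would combine such a skew score with post popularity before handing the decision to the user jury the abstract describes.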
Affiliation(s)
- Benjamin Fedoruk, Faculty of Science, University of Ontario Institute of Technology, Oshawa, ON, Canada
- Harrison Nelson, Faculty of Health Sciences, Queen's University, Kingston, ON, Canada
- Russell Frost, Faculty of Engineering, Lakehead University, Thunder Bay, ON, Canada
- Kai Fucile Ladouceur, School of Engineering Technology, Trades, and Aviation, Confederation College, Thunder Bay, ON, Canada
9.
Abstract
This study explores the time-series context and sentiment-polarity features of rumors' life cycles, and how to use them to optimize CNN model parameters and improve classification performance. The proposed model is a convolutional neural network embedded with an attention mechanism over sentiment polarity and time-series information. Firstly, the whole life cycle of a rumor is divided into 20 groups by a time-series algorithm, and each group of texts is trained with Doc2Vec to obtain text vectors. Secondly, the SVM algorithm is used to obtain the sentiment-polarity features of each group. Lastly, a CNN model with a spatial attention mechanism produces the rumor classification. The experimental results show that the proposed model, with its time-series and sentiment-polarity features, is very effective for rumor detection and can also greatly reduce the number of iterations needed for model training. The accuracy, precision, recall, and F1 of the attention CNN are better than those of the latest benchmark models.
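The first step described, dividing a rumor's life cycle into 20 time-series groups, can be sketched as equal-width binning over the posting timeline. The posting times below are hypothetical:

```python
def time_bins(timestamps, n_bins=20):
    """Assign each timestamp to one of n_bins equal-width intervals
    spanning the rumor's life cycle (first post to last post)."""
    t0, t1 = min(timestamps), max(timestamps)
    width = (t1 - t0) / n_bins
    bins = [[] for _ in range(n_bins)]
    for t in timestamps:
        # Clamp so the final timestamp falls in the last bin, not past it.
        i = min(int((t - t0) / width), n_bins - 1)
        bins[i].append(t)
    return bins

# Hypothetical posting times (hours since the rumor first appeared).
posts = [0, 1, 2, 5, 5.5, 18, 39, 40]
groups = time_bins(posts, n_bins=20)
print(sum(len(g) for g in groups))  # every post lands in exactly one bin
```

In the study's pipeline, each such group of texts would then be fed to Doc2Vec and SVM to produce the per-group text and sentiment features.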
10. Kauk J, Kreysa H, Schweinberger SR. Understanding and countering the spread of conspiracy theories in social networks: Evidence from epidemiological models of Twitter data. PLoS One 2021;16:e0256179. PMID: 34383860; PMCID: PMC8360523; DOI: 10.1371/journal.pone.0256179.
Abstract
Conspiracy theories in social networks are considered to have adverse effects on individuals' compliance with public health measures during a pandemic. A deeper understanding of how conspiracy theories propagate through social networks is critical for the development of countermeasures. The present work characterizes this propagation by applying epidemiological models to Twitter data. A Twitter dataset was searched for tweets containing hashtags indicating belief in the "5GCoronavirus" conspiracy theory, which states that the COVID-19 pandemic is a result of, or enhanced by, the rollout of the 5G mobile network. Despite the absence of any scientific evidence, the conspiracy theory propagated rapidly through Twitter, beginning at the end of January 2020, peaking at the beginning of April, and disappearing by approximately the end of June 2020. An epidemic SIR (Susceptible-Infected-Removed) model was fitted to this time series with acceptable model fit, indicating parallels between the propagation of conspiracy theories in social networks and of infectious diseases. Extended SIR models were used to simulate the effects that two specific countermeasures, fact-checking and tweet deletion, could have had on the propagation of the conspiracy theory. Our simulations indicate that fact-checking is an effective mechanism at an early stage of conspiracy-theory diffusion, while tweet deletion shows only moderate efficacy but is less time-sensitive. More generally, an early response is critical to gaining control over the spread of conspiracy theories through social networks. We conclude that an early response combined with strong fact-checking and a moderate level of deletion of problematic posts is a promising strategy against conspiracy theories in social networks. Results are discussed with respect to their theoretical validity and generalizability.
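The SIR dynamics fitted in this study can be reproduced in miniature with forward-Euler integration. The parameter values below are illustrative and are not the fitted Twitter estimates:

```python
def sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the SIR equations:
       dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
    with S, I, R as population fractions (S + I + R = 1)."""
    s, i, r = s0, i0, r0
    peak_i = i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # susceptible users start sharing
        new_rec = gamma * i * dt      # sharing users lose interest
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

# Illustrative parameters: contact rate beta, removal rate gamma, R0 = 3.
s, i, r, peak = sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, r0=0.0, days=200)
print(peak > i)  # the wave peaks and then dies out, as the tweet series did
```

Countermeasures such as tweet deletion can be modeled in this framework as an extra removal term that increases the effective gamma, which is the spirit of the extended SIR simulations in the paper.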
Affiliation(s)
- Julian Kauk, Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Jena, Germany
- Helene Kreysa, Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Jena, Germany
- Stefan R. Schweinberger, Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Jena, Germany; DFG Research Unit Person Perception, Friedrich Schiller University Jena, Jena, Germany
11. Ji J, Chao N, Wei S, Barnett GA. Microblog credibility indicators regarding misinformation of genetically modified food on Weibo. PLoS One 2021;16:e0252392. PMID: 34061876; PMCID: PMC8168881; DOI: 10.1371/journal.pone.0252392.
Abstract
The considerable amount of misinformation on social media regarding genetically modified (GM) food not only hinders public understanding but also misleads the public into making unreasoned decisions. This study discovered a new mechanism of misinformation diffusion in the case of GM food and applied a supervised machine learning framework to identify effective credibility indicators for predicting misinformation about GM food. The main indicators proposed include the user identities involved in spreading information, linguistic styles, and propagation dynamics. Results show that linguistic styles, including sentiment and topics, have the dominant predictive power. Among the user-identity features, engagement and extroversion are effective predictors, while reputation has almost no predictive power in this study. Finally, we provide strategies that readers should be aware of when assessing the credibility of online posts and suggest improvements that Weibo can make to discourage rumormongering and enhance the science communication of GM food.
Affiliation(s)
- Jiaojiao Ji, Department of Science and Technology Communication, University of Science and Technology of China, Hefei, Anhui, China
- Naipeng Chao, School of Communication, Shenzhen University, Shenzhen, Guangdong, China
- Shitong Wei, Department of Statistics, University of California, Davis, CA, USA
- George A. Barnett, Department of Communication, University of California, Davis, CA, USA
12. Williams Kirkpatrick A. The spread of fake science: Lexical concreteness, proximity, misinformation sharing, and the moderating role of subjective knowledge. Public Understanding of Science 2021;30:55-74. PMID: 33103578; DOI: 10.1177/0963662520966165.
Abstract
The spread of science misinformation harms efforts to mitigate threats like climate change or coronavirus. Construal-level theory suggests that mediated messages can prime psychological proximity to threats, having consequences for behavior. Via two MTurk experiments, I tested a serial mediation process model predicting misinformation sharing from lexical concreteness, through psychological proximity and perceived threat. In Study 1, concrete misinformation primed psychological proximity which, in turn, increased perceived threat. Perceived threat then increased the likelihood that misinformation would be shared. Source credibility was also shown to positively influence misinformation sharing. Study 2 advanced this by showing this process was moderated by subjective knowledge. Specifically, the effect of perceived threat on misinformation sharing was stronger for those with higher subjective knowledge. Furthermore, the indirect effect of lexical concreteness on misinformation sharing was stronger for those with higher subjective knowledge. Results and limitations are discussed within the lens of construal-level theory and science communication.
13. A short review on susceptibility to falling for fake political news. Curr Opin Psychol 2020;36:44-48. DOI: 10.1016/j.copsyc.2020.03.014.
14. Chang LYC, Mukherjee S, Coppel N. We Are All Victims: Questionable Content and Collective Victimisation in the Digital Age. Asian Journal of Criminology 2020;16:37-50. PMID: 33042290; PMCID: PMC7537372; DOI: 10.1007/s11417-020-09331-2.
Abstract
Traditionally, the idea of being a victim is associated with a crime, an accident, or being tricked or duped. With the advent of globalisation and rapid growth in the information technology sector, the world has opened itself to numerous vulnerabilities, ranging from individual-centric privacy issues to collective interests such as a nation's political and economic interests. Some victims can readily identify themselves as such, others can barely do so, and still others do not realise that they have become victims. Misinformation, disinformation, fake news, and other methods of spreading questionable content can be regarded as a new and increasingly widespread type of collective victimisation. This paper, drawing on recent examples from India, examines and analyses the rationale and modus operandi (both methods and types) that lead us to regard questionable content as a new form of collective victimisation.
Affiliation(s)
- Lennon Y. C. Chang, School of Social Sciences, Monash University, Clayton, Victoria, Australia
15. Hartley K, Vu MK. Fighting fake news in the COVID-19 era: policy insights from an equilibrium model. Policy Sciences 2020;53:735-758. PMID: 32921821; PMCID: PMC7479406; DOI: 10.1007/s11077-020-09405-z.
Abstract
The COVID-19 crisis has revealed structural failures in governance and coordination on a global scale. With related policy interventions dependent on verifiable evidence, pandemics require governments to not only consider the input of experts but also ensure that science is translated for public understanding. However, misinformation and fake news, including content shared through social media, compromise the efficacy of evidence-based policy interventions and undermine the credibility of scientific expertise with potentially longer-term consequences. We introduce a formal mathematical model to understand factors influencing the behavior of social media users when encountering fake news. The model illustrates that direct efforts by social media platforms and governments, along with informal pressure from social networks, can reduce the likelihood that users who encounter fake news embrace and further circulate it. This study has implications at a practical level for crisis response in politically fractious settings and at a theoretical level for research about post-truth and the construction of fact.
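The paper's qualitative conclusion, that platform effort, government effort, and informal network pressure jointly shrink the share of users who circulate fake news, can be illustrated with a toy fixed-point iteration. The functional form and the weights below are invented for illustration and are not the paper's model:

```python
import math

def equilibrium_share(platform, government, peer, iters=500):
    """Iterate a toy best-response map to its fixed point: x is the share
    of users who pass on fake news; sharing is encouraged by conformity
    (proportional to x) and discouraged by three deterrence pressures,
    each given as a value in [0, 1]."""
    a, b, c = 2.0, 3.0, 1.0  # invented conformity/deterrence/baseline weights
    x = 0.5                  # initial circulating share
    for _ in range(iters):
        x = 1 / (1 + math.exp(-(a * x - b * (platform + government + peer) - c)))
    return x

lax    = equilibrium_share(platform=0.0, government=0.0, peer=0.0)
strict = equilibrium_share(platform=0.3, government=0.3, peer=0.3)
print(strict < lax)  # stronger joint pressure, smaller equilibrium share
```

The logistic map is chosen only so the iteration contracts to a stable equilibrium; the qualitative comparative static (more pressure, less circulation) is what mirrors the paper's claim.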
Affiliation(s)
- Kris Hartley, Department of Asian and Policy Studies, The Education University of Hong Kong, Tai Po, Hong Kong SAR, China
- Minh Khuong Vu, Lee Kuan Yew School of Public Policy, National University of Singapore, Singapore
16
|
Vasileva AV. [Pandemic and mental adjustment disorders. Therapy options]. Zh Nevrol Psikhiatr Im S S Korsakova 2020; 120:146-152. [PMID: 32621481 DOI: 10.17116/jnevro2020120051146] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
The article considers the main triggers of adjustment and anxiety disorders during a pandemic, which brings significant changes to all dimensions of life along with a high level of uncertainty. It highlights the role of the coronavirus infodemic in the development of anxiety disorders. The main targets for cognitive, emotional, and behavioral interventions are suggested. Recommendations are given for the psychopharmacotherapy of anxiety-related adjustment disorders, with tranquillizers as the treatment of choice. Prevention measures are proposed for medical doctors working under pandemic conditions.
Affiliation(s)
- A V Vasileva, Bekhterev National Research Medical Center for Psychiatry and Neurology, St. Petersburg, Russia; Mechnikov North-Western State Medical University, St. Petersburg, Russia
17. A systematic mapping on automatic classification of fake news in social media. Social Network Analysis and Mining 2020. DOI: 10.1007/s13278-020-00659-2.
18.
Abstract
With the spiraling pandemic of Coronavirus Disease 2019 (COVID-19), it has become critically important to disseminate accurate and timely information about the disease. Owing to the ubiquity of Internet connectivity and smart devices, social sensing is emerging as a dynamic AI-driven sensing paradigm for extracting real-time observations from online users. In this paper, we propose CovidSens, a vision of social sensing-based risk alert systems that spontaneously obtain and analyze social data to infer the state of COVID-19 propagation. CovidSens can actively help keep the general public informed about the spread of COVID-19 and identify risk-prone areas by inferring future propagation patterns. The CovidSens concept is motivated by three observations: (1) people have been actively sharing their state of health and experiences of COVID-19 via online social media; (2) official warning channels and news agencies are relatively slower than people reporting their observations and experiences on social media; and (3) online users are frequently equipped with mobile devices capable of non-trivial on-device computation for data processing and analytics. We envision an unprecedented opportunity to leverage posts generated by ordinary people to build a real-time sensing and analytics system for gathering and circulating vital information about COVID-19 propagation. Specifically, the CovidSens vision attempts to answer these questions: How can reliable information about COVID-19 be distilled amid prevailing rumors and misinformation in social media? How can the general public be informed about the latest state of the spread in a timely and effective manner, and alerted to remain prepared? How can the computational power of edge devices (e.g., smartphones, IoT devices, UAVs) be leveraged to construct fully integrated edge-based social sensing platforms for rapid detection of COVID-19 spread? In this vision paper, we discuss the roles of CovidSens and identify potential challenges in developing reliable social sensing-based risk alert systems. We envision that approaches from multiple disciplines (e.g., AI, estimation theory, machine learning, constrained optimization) can be effective in addressing these challenges. Finally, we outline several research directions for future work on CovidSens.
19
Abstract
Governance of misinformation is a serious concern in social media platforms. Based on experiences gathered from different case studies, we offer insights for policymakers on managing misinformation in social media. These platforms are widely used not just for communication but also for content consumption. Managing misinformation is thus a challenge for policymakers and the platforms. This article explores the factors behind the rapid propagation of misinformation based on our experiences in the domain. An average of about 1.5 million tweets were analysed in each of the three different cases surrounding misinformation. The findings indicate that tweet emotion and polarity play a significant role in determining whether the shared content is authentic or not. A deeper exploration highlights that a higher element of surprise combined with other emotions is present in such tweets. Further, tweets that showcase neutral content often lack the potential for virality when it comes to misinformation. The second case explores whether the misinformation is being propagated intentionally by means of the identified fake profiles or by authentic users, which can also be either intentional, for gaining attention, or unintentional, under the assumption that the information is correct. Last, network attributes, including topological analysis, community, and centrality analysis, also catalyze the propagation of misinformation. Policymakers can utilize these findings in this experience study for the governance of misinformation. Tracking and disruption in any one of the identified drivers could act as a control mechanism to manage misinformation propagation.
Affiliation(s)
- Reema Aswani
- Indian Institute of Technology Delhi, Hauz Khas, New Delhi
20
Floria SA, Leon F, Logofătu D. A model of information diffusion in dynamic social networks based on evidence theory. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2019. [DOI: 10.3233/jifs-179346] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Sabina-Adriana Floria
- Department of Computer Science and Engineering, “Gheorghe Asachi” Technical University of Iaşi, RO, Romania
- Florin Leon
- Department of Computer Science and Engineering, “Gheorghe Asachi” Technical University of Iaşi, RO, Romania
- Doina Logofătu
- Faculty of Computer Science and Engineering, Frankfurt University of Applied Sciences, DE, Germany
21
22
Yu F, Liu Q, Wu S, Wang L, Tan T. Attention-based convolutional approach for misinformation identification from massive and noisy microblog posts. Comput Secur 2019. [DOI: 10.1016/j.cose.2019.02.003] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
23
Andersen B, Hair L, Groshek J, Krishna A, Walker D. Understanding and Diagnosing Antimicrobial Resistance on Social Media: A Yearlong Overview of Data and Analytics. HEALTH COMMUNICATION 2019; 34:248-258. [PMID: 29206493 DOI: 10.1080/10410236.2017.1405479] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
To better understand user conversations revolving around antibiotics and antimicrobial resistance (AMR) on Twitter, we used an online data collection and analysis toolkit with full firehose access to collect corpuses of tweets with "antibiotic" and "antimicrobial resistance" keyword tracks. The date range included tweets from November 28, 2015, to November 25, 2016, for both datasets. This yearlong date range provides insight into how users have discussed antibiotics and AMR and identifies any spikes in activity during a particular time frame. Overall, we found that discussions about antibiotics and AMR predominantly occur in the United States and the United Kingdom, with roughly equal gender participation. These conversations are influenced by news sources, health professionals, and governmental health organizations. Users often retweet and recirculate content posted by these official sources and link to external articles instead of posting their own musings on the subjects. Our findings are important benchmarks in understanding the prevalence and reach of potential misinformation about antibiotics and AMR on Twitter.
Affiliation(s)
- Lee Hair
- Division of Emerging Media Studies, Boston University
- Jacob Groshek
- Division of Emerging Media Studies, Boston University
- Arunima Krishna
- Department of Mass Communication, Advertising, and Public Relations, Boston University
- Dylan Walker
- Questrom School of Business, Boston University
24
25
Torres R, Gerhart N, Negahban A. Epistemology in the Era of Fake News. DATA BASE FOR ADVANCES IN INFORMATION SYSTEMS 2018. [DOI: 10.1145/3242734.3242740] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Fake news has recently garnered increased attention across the world. Digital collaboration technologies now enable individuals to share information at unprecedented rates to advance their own ideologies. Much of this sharing occurs via social networking sites (SNSs), whose members may choose to share information without consideration for its authenticity. This research advances our understanding of information verification behaviors among SNS users in the context of fake news. Grounded in literature on the epistemology of testimony and theoretical perspectives on trust, we develop a news verification behavior research model and test six hypotheses with a survey of active SNS users. The empirical results confirm the significance of all proposed hypotheses. Perceptions of news sharers' network (perceived cognitive homogeneity, social tie variety, and trust), perceptions of news authors (fake news awareness and perceived media credibility), and innate intentions to share all influence information verification behaviors among SNS members. Theoretical implications, as well as implications for SNS users and designers, are presented in the light of these findings.
26
27
Han K. How do you perceive this author? Understanding and modeling authors' communication quality in social media. PLoS One 2018; 13:e0192061. [PMID: 29389979 PMCID: PMC5794137 DOI: 10.1371/journal.pone.0192061] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2017] [Accepted: 01/16/2018] [Indexed: 11/19/2022] Open
Abstract
In this study, we leverage human evaluations, content analysis, and computational modeling to generate a comprehensive analysis of readers’ evaluations of authors’ communication quality in social media with respect to four factors: author credibility, interpersonal attraction, communication competence, and intent to interact. We review previous research on the human evaluation process and highlight its limitations in providing sufficient information for readers to assess authors’ communication quality. From our analysis of the evaluations of 1,000 Twitter authors’ communication quality from 300 human evaluators, we provide empirical evidence of the impact of the characteristics of the reader (demographic, social media experience, and personality), author (profile and social media engagement), and content (linguistic, syntactic, similarity, and sentiment) on the evaluation of an author’s communication quality. In addition, based on the author and message characteristics, we demonstrate the potential for building accurate models that can indicate an author’s communication quality.
Affiliation(s)
- Kyungsik Han
- Department of Software and Computer Engineering, Ajou University, Suwon, South Korea
28
Rapp DN, Donovan AM. Routine Processes of Cognition Result in Routine Influences of Inaccurate Content. JOURNAL OF APPLIED RESEARCH IN MEMORY AND COGNITION 2017. [DOI: 10.1016/j.jarmac.2017.08.003] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
29
Aghababaei S, Makrehchi M. Activity-based Twitter sampling for content-based and user-centric prediction models. HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES 2017. [DOI: 10.1186/s13673-016-0084-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Increasingly more applications rely on crowd-sourced data from social media. Some of these applications are concerned with real-time data streams, while others are more focused on acquiring temporal footprints from historical data. Nevertheless, determining the subset of “credible” users is crucial. While the majority of sampling approaches focus on individual static networks, dynamic user activity over time is usually not considered, which may result in activity gaps in the collected data. Models based on noisy and missing data can significantly degrade in performance. In this study, we demonstrate how to sample Twitter users in order to produce more credible data for temporal prediction models. We present an activity-based sampling approach where users are selected based on their historical activities in Twitter. The predictability of the collected content from activity-based and random sampling is compared in a content-based and user-centric temporal model. The results indicate the importance of an activity-oriented sampling method for the acquisition of more credible content for temporal models.
30
31
Ho CW, Wang YB. Re-purchase intentions and virtual customer relationships on social media brand community. HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES 2015. [DOI: 10.1186/s13673-015-0038-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The purpose of this study was to demonstrate how to manage digital customer relationships (i.e. relationships with the brand, the product, the company, and other fans) on a social media based community (i.e. Facebook brand fan-pages) and to influence post-purchase intentions (i.e. word-of-mouth and re-purchase intentions). This study used partial least squares to test the hypotheses and analyze the data. The results of this study indicated that all four of these customer-community relationships can enhance post-purchase behaviors by improving individual community participation or identification. The findings are of benefit to both academics and practitioners, and this research is one of the first to demonstrate how to manage digital customer relationships on a social media brand community.
32
Multiple Minimum Support-Based Rare Graph Pattern Mining Considering Symmetry Feature-Based Growth Technique and the Differing Importance of Graph Elements. Symmetry (Basel) 2015. [DOI: 10.3390/sym7031151] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
33
Personal Information Leaks with Automatic Login in Mobile Social Network Services. ENTROPY 2015. [DOI: 10.3390/e17063947] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
34
Information Hiding Method Using Best DCT and Wavelet Coefficients and Its Watermark Competition. ENTROPY 2015. [DOI: 10.3390/e17031218] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]