1
Weng O, Johnson KJ, Kreuter MW. Repeated Exposure to COVID-19 Misinformation: A Longitudinal Analysis of Prevalence and Predictors in a Community Sample. J Public Health Manag Pract 2024; 30:E211-E214. [PMID: 39041773] [DOI: 10.1097/phh.0000000000002019]
Abstract
Belief in health misinformation can affect individual health decisions and actions. Repeated exposure to the same misinformation strengthens its impact, yet little is known about how commonly repeated exposure occurs. To estimate the prevalence, we tracked exposure to 5 inaccurate COVID-19 claims every week for up to 23 consecutive weeks in a racially diverse panel of adults (n = 213). Repeated exposure was common: across the 5 claims, 10%-43% of respondents reported hearing the misinformation in ≥ 3 different weeks. Frontline workers were more likely than other community members to experience repeated exposure, with adjusted incidence rate ratios (IRRs) ranging from 1.8 to 4.9 across the 4 items. Repeated exposure was most common among older adults. Adjusted IRR for those ages ≥ 50 versus 18-29 years ranged from 1.8 to 2.5 per misinformation claim. Public health planning efforts to counter health misinformation should anticipate multiple exposures to the same false claim, especially in certain subgroups.
Affiliation(s)
- Olivia Weng
- Brown School, Washington University in St Louis, St Louis, Missouri
2
Xue X, Ma H, Zhao YC, Zhu Q, Song S. Mitigating the influence of message features on health misinformation sharing intention in social media: Experimental evidence for accuracy-nudge intervention. Soc Sci Med 2024; 356:117136. [PMID: 39047519] [DOI: 10.1016/j.socscimed.2024.117136]
Abstract
RATIONALE The escalating dissemination of health misinformation on social media platforms poses a significant threat to users' well-being. It is imperative to identify the types of health misinformation that are more susceptible to widespread dissemination and to explore strategies to curb their spread. METHOD This study designed a 2 (emotional appeal type: positive vs. negative) × 2 (fabricated source type: pseudo-common vs. pseudo-authoritative) × 2 (accuracy-nudge label: no vs. yes) online between-subjects experiment, controlling for factors such as e-health literacy, prior sharing experience, and personal involvement. A snowball sampling approach was used to recruit 1952 participants through social media, resulting in a final sample of 1393 valid responses. RESULTS Compared to positive emotional appeals and pseudo-common sources, negative emotional appeals and pseudo-authoritative sources resulted in higher sharing intention. Under the negative emotional appeal condition, the promoting effect of pseudo-authoritative sources on sharing intention was intensified; the accuracy-nudge intervention significantly mitigated this tendency. The underlying mechanisms revealed more detail: both negative emotional appeals and pseudo-authoritative sources increased the perceived credibility of health misinformation, thereby increasing users' sharing intention. However, in contrast to pseudo-authoritative sources, excessive negative emotional appeal induced vigilant verification behavior among users, which reduced sharing to some extent. Adding an accuracy-nudge label to health misinformation reduced users' misguided trust in these misinformation features and stimulated information verification, ultimately reducing sharing intention. CONCLUSIONS Negative emotional appeals and pseudo-authoritative sources can enhance the perceived credibility of health misinformation, thereby strengthening the sharing intention of social media users; such misinformation is therefore more likely to be widely shared. The accuracy-nudge intervention can trigger users' information verification behavior, suppress the persuasive effects of these misinformation features, and help prevent the spread of health misinformation on social media.
Affiliation(s)
- Xiang Xue
- School of Elderly Care Services and Management, Nanjing University of Chinese Medicine, 210023, Nanjing, China.
- Haiyun Ma
- School of Sociology and Population Studies, Nanjing University of Posts and Telecommunications, 210023, Nanjing, China.
- Yuxiang Chris Zhao
- School of Information Management, Nanjing University, 210023, Nanjing, China.
- Qinghua Zhu
- School of Information Management, Nanjing University, 210023, Nanjing, China.
- Shijie Song
- Business School, Hohai University, 211100, Nanjing, China; School of Information Management, Wuhan University, 430072, Wuhan, China.
3
Pan W, Hu TY. More familiar, more credible? Distinguishing two types of familiarity on the truth effect using the drift-diffusion model. J Soc Psychol 2024:1-19. [PMID: 38852171] [DOI: 10.1080/00224545.2024.2363366]
Abstract
Familiar information is more likely to be accepted as true. This illusory truth effect poses a major obstacle to misinformation interventions. Previous studies focused on familiarity arising from repeated exposure in the laboratory, ignoring preexisting familiarity with real-world misinformation. Across three studies (total N = 337), we investigated the cognitive mechanisms behind the truth biases arising from these two sources of familiarity, and whether fact-checking can curb such biased truth perceptions. Studies 1 and 2 found robust truth effects induced by both sources of familiarity, but via different cognitive processes. According to the cognitive process model, repetition-induced familiarity reduced decision prudence, whereas preexisting familiarity enhanced truth-congruent evidence accumulation. Study 3 showed that pre-exposing statements with warning flags eliminated the truth bias induced by repetition but not that from preexisting familiarity; these repeated statements with warning labels also reduced decision caution. These findings further our understanding of how different sources of familiarity affect truth perceptions and undermine interventions through different cognitive processes.
Affiliation(s)
- Wanke Pan
- Shanghai Normal University
- Nanjing Normal University
4
Budak C, Nyhan B, Rothschild DM, Thorson E, Watts DJ. Misunderstanding the harms of online misinformation. Nature 2024; 630:45-53. [PMID: 38840013] [DOI: 10.1038/s41586-024-07417-w]
Abstract
The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization. In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.
Affiliation(s)
- Ceren Budak
- University of Michigan School of Information, Ann Arbor, MI, USA
- Brendan Nyhan
- Department of Government, Dartmouth College, Hanover, NH, USA
- Emily Thorson
- Maxwell School of Citizenship and Public Affairs, Syracuse University, Syracuse, NY, USA
- Duncan J Watts
- Department of Computer and Information Science, Annenberg School of Communication, and Operations, Information, and Decisions Department, University of Pennsylvania, Philadelphia, PA, USA
5
Speckmann F, Unkelbach C. Illusions of knowledge due to mere repetition. Cognition 2024; 247:105791. [PMID: 38593568] [DOI: 10.1016/j.cognition.2024.105791]
Abstract
Repeating information increases people's belief that the repeated information is true. This truth effect has been widely researched and is relevant to topics such as fake news and misinformation. Another effect of repetition, also relevant to those topics, has not been extensively studied so far: do people believe they knew something before it was repeated? We used a standard truth effect paradigm in four pre-registered experiments (total N = 773), including a presentation phase and a judgment phase. However, instead of "true"/"false" judgments, participants indicated whether they knew a given trivia statement before participating in the experiment. Across all experiments, participants judged repeated information as "known" more often than novel information. For repeated false information, participants even claimed to have known that it was false. In addition, participants generated sources for this presumed knowledge. The inability to distinguish recently acquired information from well-established knowledge in memory adds an explanation for the persistence and strength of repetition effects on truth: the truth effect may be so robust because people believe they know the repeatedly presented information as a matter of fact.
6
Williams-Ceci S, Macy MW, Naaman M. Misinformation does not reduce trust in accurate search results, but warning banners may backfire. Sci Rep 2024; 14:10977. [PMID: 38744967] [PMCID: PMC11094033] [DOI: 10.1038/s41598-024-61645-8]
Abstract
People rely on search engines for information in critical contexts, such as public health emergencies. But what makes people trust some search results more than others? Can search engines influence people's levels of trust by controlling how information is presented? And how does the presence of misinformation influence people's trust? Research has identified both rank and the presence of misinformation as factors affecting people's search behavior. Here, we extend these findings by measuring the effects of these factors, as well as of misinformation warning banners, on the perceived trustworthiness of individual search results. We conducted three online experiments (N = 3196) using Covid-19-related queries and found that although higher-ranked results are clicked more often, they are not more trusted. We also showed that misinformation does not damage trust in accurate results displayed below it. In contrast, while a warning about unreliable sources might decrease trust in misinformation, it significantly decreases trust in accurate information. This research alleviates some concerns about how people evaluate the credibility of information they find online, while revealing a potential backfire effect of one misinformation-prevention approach: banner warnings about source unreliability could lead to unexpected and nonoptimal outcomes in which people trust accurate information less.
Affiliation(s)
- Sterling Williams-Ceci
- Department of Information Science, Cornell University, Ithaca, NY, USA.
- Cornell Tech, New York, NY, USA.
- Michael W Macy
- Department of Information Science, Cornell University, Ithaca, NY, USA
- Department of Sociology, Cornell University, Ithaca, NY, USA
- Mor Naaman
- Department of Information Science, Cornell University, Ithaca, NY, USA
- Cornell Tech, New York, NY, USA
7
Kemp PL, Sinclair AH, Adcock RA, Wahlheim CN. Memory and belief updating following complete and partial reminders of fake news. Cogn Res Princ Implic 2024; 9:28. [PMID: 38713308] [PMCID: PMC11076432] [DOI: 10.1186/s41235-024-00546-w]
Abstract
Fake news can have enduring effects on memory and beliefs. An ongoing theoretical debate has investigated whether corrections (fact-checks) should include reminders of fake news. The familiarity backfire account proposes that reminders hinder correction (increasing interference), whereas integration-based accounts argue that reminders facilitate correction (promoting memory integration). In three experiments, we examined how different types of corrections influenced memory for and belief in news headlines. In the exposure phase, participants viewed real and fake news headlines. In the correction phase, participants viewed reminders of fake news that either reiterated the false details (complete) or prompted recall of missing false details (partial); reminders were followed by fact-checked headlines correcting the false details. Both reminder types led to proactive interference in memory for corrected details, but complete reminders produced less interference than partial reminders (Experiment 1). However, when participants had fewer initial exposures to fake news and experienced a delay between exposure and correction, this effect was reversed; partial reminders led to proactive facilitation, enhancing correction (Experiment 2). This effect occurred regardless of the delay before correction (Experiment 3), suggesting that the effects of partial reminders depend on the number of prior fake news exposures. In all experiments, memory and perceived accuracy were better when fake news and corrections were recollected, implicating a critical role for integrative encoding. Overall, we show that when memories of fake news are weak or less accessible, partial reminders are more effective for correction; when memories of fake news are stronger or more accessible, complete reminders are preferable.
Affiliation(s)
- Paige L Kemp
- Department of Psychology, University of North Carolina at Greensboro, 296 Eberhart Building, P. O. Box 26170, Greensboro, NC, 27402-6170, USA.
- Alyssa H Sinclair
- Department of Psychology and Neuroscience, Duke University, Durham, NC, 27708, USA
- Center for Science, Sustainability, and the Media, University of Pennsylvania, Philadelphia, USA
- R Alison Adcock
- Department of Psychology and Neuroscience, Duke University, Durham, NC, 27708, USA
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, USA
- Christopher N Wahlheim
- Department of Psychology, University of North Carolina at Greensboro, 296 Eberhart Building, P. O. Box 26170, Greensboro, NC, 27402-6170, USA
8
Vu HT, Chen Y. What Influences Audience Susceptibility to Fake Health News: An Experimental Study Using a Dual Model of Information Processing in Credibility Assessment. Health Commun 2024; 39:1113-1126. [PMID: 37095061] [DOI: 10.1080/10410236.2023.2206177]
Abstract
This experimental study investigates the effects of several heuristic cues and systematic factors on users' susceptibility to misinformation in the context of health news. Specifically, it examines whether author credentials, writing style, and verification check flagging influence participants' intent to follow the behavioral recommendations provided by the article, perceived article credibility, and sharing intent. Findings suggest that users rely only on verification checks (passing/failing) when assessing information credibility. Of the two antecedents to systematic processing, social media self-efficacy moderates the links between verification and participants' susceptibility. Theoretical and practical implications are discussed.
Affiliation(s)
- Hong Tien Vu
- Clyde & Betty Reed Professor of Journalism, University of Kansas
- Yvonnes Chen
- Clyde & Betty Reed Professor of Journalism, University of Kansas
9
Privitera AJ, Ng SHS, Kong APH, Weekes BS. AI and Aphasia in the Digital Age: A Critical Review. Brain Sci 2024; 14:383. [PMID: 38672032] [PMCID: PMC11047933] [DOI: 10.3390/brainsci14040383]
Abstract
Aphasiology has a long and rich tradition of contributing to understanding how culture, language, and social environment contribute to brain development and function. Recent breakthroughs in AI can transform the role of aphasiology in the digital age by leveraging speech data in all languages to model how damage to specific brain regions impacts linguistic universals such as grammar. These tools, including generative AI (ChatGPT) and natural language processing (NLP) models, could also inform practitioners working with clinical populations in the assessment and treatment of aphasia using AI-based interventions such as personalized therapy and adaptive platforms. Although these possibilities have generated enthusiasm in aphasiology, a rigorous interrogation of their limitations is necessary before AI is integrated into practice. We explain the history and first principles of reciprocity between AI and aphasiology, highlighting how lesioning neural networks opened the black box of cognitive neurolinguistic processing. We then argue that when more data from aphasia across languages become digitized and available online, deep learning will reveal hitherto unreported patterns of language processing of theoretical interest for aphasiologists. We also anticipate some problems using AI, including language biases, cultural, ethical, and scientific limitations, a misrepresentation of marginalized languages, and a lack of rigorous validation of tools. However, as these challenges are met with better governance, AI could have an equitable impact.
Affiliation(s)
- Adam John Privitera
- Centre for Research and Development in Learning, Nanyang Technological University, Singapore 637335, Singapore;
- Siew Hiang Sally Ng
- Centre for Research and Development in Learning, Nanyang Technological University, Singapore 637335, Singapore
- Institute for Pedagogical Innovation, Research, and Excellence, Nanyang Technological University, Singapore 637335, Singapore
- Anthony Pak-Hin Kong
- Academic Unit of Human Communication, Learning, and Development, The University of Hong Kong, Pokfulam, Hong Kong
- Aphasia Research and Therapy (ART) Laboratory, The University of Hong Kong, Pokfulam, Hong Kong
- Brendan Stuart Weekes
- Faculty of Education, The University of Hong Kong, Pokfulam, Hong Kong
- Melbourne School of Psychological Sciences, University of Melbourne, Parkville 3010, Australia
10
Fabio RA, Verzì D, Gangemi A. A contribute to the default-interventionist and parallel accounts in deductive reasoning. The effect of decisional styles on logic and belief. J Gen Psychol 2024; 151:209-222. [PMID: 37526357] [DOI: 10.1080/00221309.2023.2241952]
Abstract
Classical theories of reasoning equate System 1 with biases and System 2 with correct responses. Refined theories of reasoning propose the parallel model to explain the two systems. The first purpose of the present article is to contribute to the debate between the parallel and default-interventionist models: we hypothesized that when logic and belief conflict, both logical-validity and belief judgments will be affected, with higher error rates and/or longer response times. The second purpose is to assess the relationship between decisional styles and performance in deductive reasoning. Seventy-two participants completed 64 modus ponens and modus tollens syllogistic reasoning tasks. We found that belief and logic judgments were affected by the conflict condition, both in easy syllogisms (i.e., modus ponens) and in complex syllogisms (i.e., modus tollens). Findings also showed that participants with a rational decision-making style were more strongly influenced by logic than belief, whereas those with an intuitive decision-making style were more strongly influenced by belief than logic.
11
Udry J, Barber SJ. The illusory truth effect: A review of how repetition increases belief in misinformation. Curr Opin Psychol 2024; 56:101736. [PMID: 38113667] [DOI: 10.1016/j.copsyc.2023.101736]
Abstract
Repetition increases belief in information, a phenomenon known as the illusory truth effect. In laboratory experiments, the illusory truth effect has often been examined using general trivia statements as stimuli, but repetition also increases belief in misinformation, such as fake news headlines and conspiracy beliefs. Repetition even increases belief in claims that are implausible or that contradict prior knowledge. Repetition also has broader impacts beyond belief, such as increasing sharing intentions of news headlines and decreasing how unethical an act is perceived to be. Although the illusory truth effect is robust, some interventions reduce its magnitude, including instruction to focus on accuracy and awareness of the illusory truth effect. These strategies may be effective for reducing belief in misinformation.
Affiliation(s)
- Jessica Udry
- Department of Psychology, Georgia State University, USA
- Sarah J Barber
- Department of Psychology, Georgia State University, USA; Gerontology Institute, Georgia State University, Atlanta, GA, USA.
12
Mayo R. Trust or distrust? Neither! The right mindset for confronting disinformation. Curr Opin Psychol 2024; 56:101779. [PMID: 38134524] [DOI: 10.1016/j.copsyc.2023.101779]
Abstract
A primary explanation for why individuals believe disinformation is the truth bias, a predisposition to accept information as true. However, this bias is context-dependent, as research shows that rejection becomes the predominant process in a distrust mindset. Consequently, trust and distrust emerge as pivotal factors in addressing disinformation. The current review offers a more nuanced perspective by illustrating that whereas distrust may act as an antidote to the truth bias, it can also paradoxically serve as a catalyst for belief in disinformation. The review concludes that mindsets other than those rooted solely in trust (or distrust), such as an evaluative mindset, may prove to be more effective in detecting and refuting disinformation.
Affiliation(s)
- Ruth Mayo
- The Hebrew University of Jerusalem, Israel.
13
Yu Y, Yan S, Zhang Q, Xu Z, Zhou G, Jin H. The Influence of Affective Empathy on Online News Belief: The Moderated Mediation of State Empathy and News Type. Behav Sci (Basel) 2024; 14:278. [PMID: 38667074] [PMCID: PMC11047548] [DOI: 10.3390/bs14040278]
Abstract
Belief in online news has become a topical issue. Previous studies have demonstrated the role emotion plays in vulnerability to fake news, but few have explored the effect of empathy on belief in online news. This study investigated the relationships among trait empathy, state empathy, and belief in online news, and the potential moderating effect of news type. One hundred and forty undergraduates evaluated 50 online news pieces (25 real, 25 fake) for believability, state empathy, valence, arousal, and familiarity. Trait empathy data were collected using the Chinese version of the Interpersonal Reactivity Index. State empathy was positively correlated with both trait affective empathy and believability, and affective empathy was positively correlated with believability. The influence of affective empathy on news belief was partially mediated by state empathy and moderated by news type (fake, real). We discuss the influence of empathy on belief in online news and its underlying processes. This study offers insights for researchers, practitioners, social media users, and social media platform providers.
Affiliation(s)
- Yifan Yu
- Department of Psychology, Tianjin Normal University, Tianjin 300387, China
- Shizhen Yan
- School of Health, Fujian Medical University, Fuzhou 350122, China
- Qihan Zhang
- Department of Psychology, Tianjin Normal University, Tianjin 300387, China
- Zhenzhen Xu
- Department of Psychology, Tianjin Normal University, Tianjin 300387, China
- Guangfang Zhou
- Department of Psychology, Tianjin Normal University, Tianjin 300387, China
- Hua Jin
- Department of Psychology, Tianjin Normal University, Tianjin 300387, China
14
Vivion M, Reid V, Dubé E, Coutant A, Benoit A, Tourigny A. How older adults manage misinformation and information overload - A qualitative study. BMC Public Health 2024; 24:871. [PMID: 38515081] [PMCID: PMC10956171] [DOI: 10.1186/s12889-024-18335-x]
Abstract
BACKGROUND The COVID-19 pandemic was characterized by an abundance of information, some of it reliable and some of it misinformation. Evidence-based data on the impact of misinformation on attitudes and behaviours remain limited. Studies indicate that older adults are more likely than other population groups to embrace and disseminate misinformation, making them vulnerable to it. The purpose of this article is to explore the effects of misinformation and information overload on older adults, and to present the management strategies they put in place to deal with such effects, in the context of COVID-19. METHODS A qualitative exploratory approach was adopted. A total of 36 semi-structured interviews were conducted with older adults living in Quebec, Canada. The interviews were fully transcribed and subjected to thematic content analysis. RESULTS Participants said they could easily spot misinformation online. Despite this, misinformation and its treatment by the media could generate fear, stress, and anxiety. Moreover, the polarization induced by misinformation resulted in tensions and even friendship breakdowns. Participants also denounced the information overload produced largely by the media. In response, participants established information routines, controlling which sources they consulted and when. CONCLUSIONS This article questions the concept of vulnerability to misinformation by highlighting older adults' agency in managing misinformation and information overload. Furthermore, this study invites us to rethink communication strategies by distinguishing between information overload and misinformation.
Affiliation(s)
- M Vivion
- Department of Social and Preventive Medicine, Université Laval, Quebec, Canada.
- CHU de Québec-Université Laval Research Center, Quebec, Canada.
- V Reid
- CHU de Québec-Université Laval Research Center, Quebec, Canada
- Laboratoire sur la communication et le numérique (LabCMO), Montreal, Canada
- E Dubé
- CHU de Québec-Université Laval Research Center, Quebec, Canada
- Department of Anthropology, Université Laval, Quebec, Canada
- A Coutant
- Laboratoire sur la communication et le numérique (LabCMO), Montreal, Canada
- Université du Québec à Montréal (UQAM), Montreal, Canada
- A Benoit
- GDR AREES (Groupe de recherche: Arctique: Enjeux pour l'environnement et les sociétés) du CNRS, Paris, France
- A Tourigny
- Institut sur le vieillissement et la participation sociale des aînés de l'Université Laval, Quebec, Canada
- VITAM Centre de recherche en santé durable, Quebec, Canada
15
George JF. Discovering why people believe disinformation about healthcare. PLoS One 2024; 19:e0300497. [PMID: 38512834] [PMCID: PMC10956743] [DOI: 10.1371/journal.pone.0300497]
Abstract
Disinformation (false information intended to cause harm or for profit) is pervasive. While disinformation exists in several domains, one area with great potential for personal harm is healthcare. The amount of disinformation about health issues on social media has grown dramatically over the past several years, particularly in response to the COVID-19 pandemic. The study described in this paper sought to determine the characteristics of multimedia social network posts that lead viewers to believe and potentially act on healthcare disinformation. The study was conducted in a neuroscience laboratory in early 2022. Twenty-six participants each viewed a series of 20 honest or dishonest social media posts dealing with various aspects of healthcare. They were asked to determine whether the posts were true or false and then to provide the reasoning behind their choices. Participant gaze was captured through eye-tracking technology and investigated through "area of interest" analysis. This approach has the potential to reveal the elements of disinformation that help convince a viewer that a given post is true. Participants detected the true nature of the posts they were exposed to 69% of the time. Overall, the source of the post, whether its claims seemed reasonable, and the look and feel of the post were the most important reasons cited for judging it true or false. Based on the eye-tracking data, the factors most associated with successfully detecting disinformation were the total number of fixations on key words and the total number of revisits to source information. The findings suggest the outlines of generalizations about why people believe online disinformation, providing a basis for the development of mid-range theory.
Collapse
Affiliation(s)
- Joey F. George
- Ivy College of Business, Iowa State University, Ames, IA, United States of America
| |
Collapse
|
16
|
Fang Y. Why Do People Believe in Vaccine Misinformation? The Roles of Perceived Familiarity and Evidence Type. HEALTH COMMUNICATION 2024:1-13. [PMID: 38514925 DOI: 10.1080/10410236.2024.2328455] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/23/2024]
Abstract
The proliferation of health misinformation poses a significant threat to public health, making it increasingly important to understand why misinformation is accepted. The illusory truth effect, which refers to the increased believability of a message due to repeated exposure, has been widely studied. However, there is limited research on this effect in the context of COVID-19 vaccine misinformation. This paper aims to examine the role of perceived familiarity with COVID-19 vaccine misinformation on various message perceptions, including perceived accuracy, agreement, and perceived message effectiveness, and on determinants of vaccination, including vaccine attitude and vaccination intention. Furthermore, it explores the impact of misinformation evidence (statistical vs. narrative) on the magnitude of the effects of perceived familiarity. To investigate these factors, a between-subjects experimental study was conducted, employing a 2 (Familiarity: strong vs. weak) × 3 (Evidence type: statistical, narrative, and mixed evidence) + 1 (Control: a message about drinking water) design. The results revealed that perceived familiarity with COVID-19 vaccine misinformation significantly predicted perceived accuracy, which was found to be negatively correlated with vaccine attitudes and vaccination intentions. Moreover, statistical evidence presented in misinformation was perceived as more persuasive in terms of perceived message effectiveness, compared to narrative and mixed evidence. Interestingly, the effects of perceived familiarity were not contingent on the type of evidence used in COVID-19 vaccine misinformation. These findings emphasize the importance of avoiding the repetition of misinformation, reducing the processing fluency associated with misinformation correction, and educating individuals on how to critically evaluate statistical evidence when encountering (mis)information.
Collapse
Affiliation(s)
- Yuming Fang
- Hubbard School of Journalism and Mass Communication, University of Minnesota
| |
Collapse
|
17
|
Goebel JT, Susmann MW, Parthasarathy S, El Gamal H, Garrett RK, Wegener DT. Belief-consistent information is most shared despite being the least surprising. Sci Rep 2024; 14:6109. [PMID: 38480773 PMCID: PMC10937659 DOI: 10.1038/s41598-024-56086-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2023] [Accepted: 03/01/2024] [Indexed: 03/17/2024] Open
Abstract
In the classical information theoretic framework, information "value" is proportional to how novel/surprising the information is. Recent work building on such notions claimed that false news spreads faster than truth online because false news is more novel and therefore surprising. However, another determinant of surprise, semantic meaning (e.g., information's consistency or inconsistency with prior beliefs), should also influence value and sharing. Examining sharing behavior on Twitter, we observed separate relations of novelty and belief consistency with sharing. Though surprise could not be assessed in those studies, belief consistency should relate to less surprise, suggesting the relevance of semantic meaning beyond novelty. In two controlled experiments, belief-consistent (vs. belief-inconsistent) information was shared more despite consistent information being the least surprising. Manipulated novelty did not predict sharing or surprise. Thus, classical information theoretic predictions regarding perceived value and sharing would benefit from considering semantic meaning in contexts where people hold pre-existing beliefs.
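The classical information-theoretic notion of "value" invoked in this abstract is Shannon surprisal, −log₂ p: the less probable (more novel) a message, the more information it carries. A minimal sketch of that quantity; the probabilities below are made up for illustration:

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon surprisal of an event with probability p, in bits."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

# Under the classical framework, a rarer (more novel) claim carries
# more information than a common one.
print(surprisal_bits(0.25))  # 2.0 bits
print(surprisal_bits(0.5))   # 1.0 bit
```

The paper's point is that this probability-based account is incomplete for sharing behavior: belief consistency, a semantic property not captured by p alone, predicted sharing even though it predicts lower surprise.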
Collapse
Affiliation(s)
- Jacob T Goebel
- Department of Psychology, Ohio State University, Columbus, OH, USA.
| | - Mark W Susmann
- Department of Psychology, Ohio State University, Columbus, OH, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Computer Science and Engineering, Ohio State University, Columbus, OH, USA
| | | | - Hesham El Gamal
- Faculty of Engineering, University of Sydney, Sydney, NSW, Australia
| | - R Kelly Garrett
- School of Communication, Ohio State University, Columbus, OH, USA
| | - Duane T Wegener
- Department of Psychology, Ohio State University, Columbus, OH, USA
| |
Collapse
|
18
|
Jones MI, Pauls SD, Fu F. Containing misinformation: Modeling spatial games of fake news. PNAS NEXUS 2024; 3:pgae090. [PMID: 38463039 PMCID: PMC10924450 DOI: 10.1093/pnasnexus/pgae090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Accepted: 02/13/2024] [Indexed: 03/12/2024]
Abstract
The spread of fake news on social media is a pressing issue. Here, we develop a mathematical model on social networks in which news sharing is modeled as a coordination game. We use this model to study the effect of adding designated individuals who sanction fake news sharers (representing, for example, correction of false claims or public shaming of those who share such claims). By simulating our model on synthetic square lattices and small-world networks, we demonstrate that social network structure allows fake news spreaders to form echo chambers and more than doubles fake news' resistance to distributed sanctioning efforts. We confirm our results are robust to a wide range of coordination and sanctioning payoff parameters as well as initial conditions. Using a Twitter network dataset, we show that sanctioners can help contain fake news when placed strategically. Furthermore, we analytically determine the conditions required for peer sanctioning to be effective, including prevalence and enforcement levels. Our findings have implications for developing mitigation strategies to control misinformation and preserve the integrity of public discourse.
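A toy version of the kind of model this abstract describes: news sharing as a coordination game on a square lattice, with designated sanctioners penalizing fake-news sharers. The payoff structure, best-response update rule, and parameter values here are illustrative assumptions, not the paper's actual specification:

```python
def step(strategies, sanctioner, b=1.0, c=2.0):
    """One synchronous best-response update on an n-by-n lattice.

    strategies[i][j] is 1 (share fake news) or 0 (share true news).
    sanctioner[i][j] marks agents who punish fake-news-sharing neighbors.
    Coordination payoff: b per neighbor playing the same strategy;
    sanction cost: c per sanctioning neighbor, paid only for sharing fake news.
    """
    n = len(strategies)

    def neighbors(i, j):
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                yield ni, nj

    new = [row[:] for row in strategies]
    for i in range(n):
        for j in range(n):
            nbrs = list(neighbors(i, j))
            pay_fake = (b * sum(strategies[ni][nj] == 1 for ni, nj in nbrs)
                        - c * sum(sanctioner[ni][nj] for ni, nj in nbrs))
            pay_true = b * sum(strategies[ni][nj] == 0 for ni, nj in nbrs)
            if pay_fake != pay_true:  # on a tie, keep the current strategy
                new[i][j] = 1 if pay_fake > pay_true else 0
    return new

n = 5
all_fake = [[1] * n for _ in range(n)]
no_sanction = [[False] * n for _ in range(n)]
everyone_sanctions = [[True] * n for _ in range(n)]

# Without sanctioners, the fake-news echo chamber is self-sustaining...
print(step(all_fake, no_sanction) == all_fake)  # True
# ...but with enough sanctioning (c > b here), fake sharing collapses in one step.
print(all(v == 0 for row in step(all_fake, everyone_sanctions) for v in row))  # True
```

The paper's analysis additionally covers small-world and empirical Twitter networks and strategic sanctioner placement; this sketch only shows the basic lattice dynamics.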
Collapse
Affiliation(s)
- Matthew I Jones
- Sociology Department, Yale University, New Haven, CT 06511, USA
- Mathematics Department, Dartmouth College, Hanover, NH 03755, USA
| | - Scott D Pauls
- Mathematics Department, Dartmouth College, Hanover, NH 03755, USA
| | - Feng Fu
- Mathematics Department, Dartmouth College, Hanover, NH 03755, USA
- Department of Biomedical Data Science, Dartmouth College, Hanover, NH 03756, USA
| |
Collapse
|
19
|
Schüz B, Jones C. [Mis- and disinformation in social media: mitigating risks in digital health communication]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2024; 67:300-307. [PMID: 38332143 DOI: 10.1007/s00103-024-03836-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Accepted: 01/15/2024] [Indexed: 02/10/2024]
Abstract
Misinformation and disinformation in social media have become a challenge for effective public health measures. Here, we examine factors that influence believing and sharing false information, both misinformation and disinformation, at the individual, social, and contextual levels and discuss intervention possibilities. At the individual level, knowledge deficits, lack of skills, and emotional motivation have been associated with believing in false information. Lower health literacy, a conspiracy mindset, and certain beliefs increase susceptibility to false information. At the social level, the credibility of information sources and social norms influence the sharing of false information. At the contextual level, emotions and the repetition of messages affect belief in and sharing of false information. Interventions at the individual level involve measures to improve knowledge and skills. At the social level, addressing social processes and social norms can reduce the sharing of false information. At the contextual level, regulatory approaches involving social networks are considered an important point of intervention. Social inequalities play an important role in exposure to and processing of misinformation. It remains unclear to what degree the susceptibility to believing in and sharing misinformation is an individual characteristic and/or context dependent. Complex interventions are required that take multiple influencing factors into account.
Collapse
Affiliation(s)
- Benjamin Schüz
- Institut für Public Health und Pflegeforschung, Universität Bremen, Grazer Straße 4, 28359, Bremen, Deutschland.
- Leibniz-WissenschaftsCampus Digital Public Health, Bremen, Deutschland.
| | - Christopher Jones
- Institut für Public Health und Pflegeforschung, Universität Bremen, Grazer Straße 4, 28359, Bremen, Deutschland
- Leibniz-WissenschaftsCampus Digital Public Health, Bremen, Deutschland
- Zentrum für Präventivmedizin und Digitale Gesundheit (CPD), Medizinische Fakultät Mannheim der Universität Heidelberg, Mannheim, Deutschland
| |
Collapse
|
20
|
Ferrer-Urbina R, Ramírez Y, Mena-Chamorro P, Carmona-Halty M, Sepúlveda-Páez G. Naive skepticism scale: development and validation tests applied to the chilean population. PSICOLOGIA-REFLEXAO E CRITICA 2024; 37:6. [PMID: 38376697 PMCID: PMC10879479 DOI: 10.1186/s41155-024-00288-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 01/26/2024] [Indexed: 02/21/2024] Open
Abstract
BACKGROUND Skepticism has traditionally been associated with critical thinking. However, philosophy has proposed a particular type of skepticism, termed naive skepticism, which may increase susceptibility to misinformation, especially when contrasting information from official sources. While some scales propose to measure skepticism, they are scarce and only measure specific topics; thus, new instruments are needed to assess this construct. OBJECTIVE This study aimed to develop a scale to measure naive skepticism in the adult population. METHOD The study involved 446 individuals from the adult population. Subjects were randomly assigned to either the pilot study (phase 2; n = 126) or the validity-testing study (phase 3; n = 320). Parallel analyses and exploratory structural equation modeling were conducted to assess the internal structure of the test. Scale reliability was estimated using Cronbach's alpha and McDonald's omega coefficients. Finally, a multigroup confirmatory factor analysis was performed to assess invariance, and Set-Exploratory Structural Equation Modeling was applied to estimate evidence of validity based on associations with other variables. RESULTS The naive skepticism scale provided adequate levels of reliability (ω > 0.8), evidence of validity based on the internal structure of the test (CFI = 0.966; TLI = 0.951; RMSEA = 0.079), gender invariance, and a moderate inverse effect on attitudes towards COVID-19 vaccines. CONCLUSIONS The newly developed naive skepticism scale showed acceptable psychometric properties in an adult population, thus enabling the assessment of naive skepticism in similar demographics. This paper discusses the implications for the theoretical construct and possible limitations of the scale.
Collapse
Affiliation(s)
| | - Yasna Ramírez
- Escuela de Psicología y Filosofía, Universidad de Tarapacá, Arica, Chile
| | | | | | | |
Collapse
|
21
|
Goldstein JA, Chao J, Grossman S, Stamos A, Tomz M. How persuasive is AI-generated propaganda? PNAS NEXUS 2024; 3:pgae034. [PMID: 38380055 PMCID: PMC10878360 DOI: 10.1093/pnasnexus/pgae034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Accepted: 01/03/2024] [Indexed: 02/22/2024]
Abstract
Can large language models, a form of artificial intelligence (AI), generate persuasive propaganda? We conducted a preregistered survey experiment of US respondents to investigate the persuasiveness of news articles written by foreign propagandists compared to content generated by GPT-3 davinci (a large language model). We found that GPT-3 can create highly persuasive text as measured by participants' agreement with propaganda theses. We further investigated whether a person fluent in English could improve propaganda persuasiveness. Editing the prompt fed to GPT-3 and/or curating GPT-3's output made GPT-3 even more persuasive, and, under certain conditions, as persuasive as the original propaganda. Our findings suggest that propagandists could use AI to create convincing content with limited effort.
Collapse
Affiliation(s)
- Josh A Goldstein
- Center for Security and Emerging Technology, Georgetown University, Washington, DC 20001, USA
| | - Jason Chao
- Stanford Internet Observatory, Stanford University, Stanford, CA 94305, USA
| | - Shelby Grossman
- Stanford Internet Observatory, Stanford University, Stanford, CA 94305, USA
| | - Alex Stamos
- Stanford Internet Observatory, Stanford University, Stanford, CA 94305, USA
| | - Michael Tomz
- Department of Political Science and Stanford Institute for Economic Policy Research, Stanford University, Stanford, CA 94305, USA
| |
Collapse
|
22
|
Jagayat A, Choma BL. A primer on open-source, experimental social media simulation software: Opportunities for misinformation research and beyond. Curr Opin Psychol 2024; 55:101726. [PMID: 38048652 DOI: 10.1016/j.copsyc.2023.101726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2023] [Revised: 11/01/2023] [Accepted: 11/03/2023] [Indexed: 12/06/2023]
Abstract
Social media simulation software (SMSS) allows researchers to collect behavioural data on how participants engage with researcher-specified social media content using natural, interactive social media user interfaces. A notable subset of SMSS allows for experimental observation of how people engage with different types of content or user interfaces, providing an avenue for collecting causal evidence on how algorithmic recommendation systems and design affordances of social media platforms impact behaviour, particularly online harms like misinformation. This article reviews key similarities and differences between three notable SMSS (The (Mis)information Game, the Mock Social Media Website Tool, and the Truman Platform), provides recommendations for use, and offers perspectives on the future of SMSS.
Collapse
|
23
|
Riesthuis P, Woods J. "That's just like, your opinion, man": the illusory truth effect on opinions. PSYCHOLOGICAL RESEARCH 2024; 88:284-306. [PMID: 37300704 PMCID: PMC10257371 DOI: 10.1007/s00426-023-01845-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2023] [Accepted: 05/22/2023] [Indexed: 06/12/2023]
Abstract
With the expanse of technology, people are constantly exposed to an abundance of information. It is therefore vital to understand how people assess the truthfulness of such information. One indicator of perceived truthfulness seems to be whether it is repeated. That is, people tend to perceive repeated information, regardless of its veracity, as more truthful than new information, also known as the illusory truth effect. In the present study, we examined whether such an effect is also observed for opinions and whether the manner in which the information is encoded influences the illusory truth effect. Across three experiments, participants (n = 552) were presented with a list of true information, misinformation, general opinion, and/or social-political opinion statements. First, participants were either instructed to indicate whether the presented statement was a fact or an opinion based on its syntax structure (Exp. 1 & 2) or to assign each statement to a topic category (Exp. 3). Subsequently, participants rated the truthfulness of various new and repeated statements. Results showed that repeated information, regardless of the type of information, received higher subjective truth ratings when participants simply encoded it by assigning each statement to a topic. However, when general and social-political opinions were encoded as opinions, we found no evidence of such an effect. Moreover, we found a reversed illusory truth effect for general opinion statements when only considering information that was encoded as an opinion. These findings suggest that how information is encoded plays a crucial role in evaluating truth.
Collapse
Affiliation(s)
- Paul Riesthuis
- Leuven Institute of Criminology, KU Leuven, Leuven, Belgium.
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands.
| | - Josh Woods
- Faculty of Psychology, Grand View University, Des Moines, IA, USA
| |
Collapse
|
24
|
Buczel KA, Siwiak A, Szpitalak M, Polczyk R. How do forewarnings and post-warnings affect misinformation reliance? The impact of warnings on the continued influence effect and belief regression. Mem Cognit 2024:10.3758/s13421-024-01520-z. [PMID: 38261249 DOI: 10.3758/s13421-024-01520-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/06/2024] [Indexed: 01/24/2024]
Abstract
People often continue to rely on certain information in their reasoning, even if this information has been retracted; this is called the continued influence effect (CIE) of misinformation. One technique for reducing this effect involves explicitly warning people that there is a possibility that they might have been misled. The present study aimed to investigate these warnings' effectiveness, depending on when they were given (either before or after misinformation). In two experiments (N = 337), we found that while a forewarning did reduce reliance on misinformation, retrospectively warned participants (when the warning was placed either between the misinformation and the retraction or just before testing) relied on the misinformation to a similar degree as unwarned participants. However, the protective effect of the forewarning was not durable: reliance on the misinformation increased over the 7 days following the first testing, despite continued memory of the retraction.
Collapse
Affiliation(s)
- Klara Austeja Buczel
- Institute of Psychology, Jagiellonian University, Kraków, Poland.
- Doctoral School in the Social Sciences, Jagiellonian University, Kraków, Poland.
| | - Adam Siwiak
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Doctoral School in the Social Sciences, Jagiellonian University, Kraków, Poland
| | | | - Romuald Polczyk
- Institute of Psychology, Jagiellonian University, Kraków, Poland
| |
Collapse
|
25
|
Ly DP, Bernstein DM, Newman EJ. An ongoing secondary task can reduce the illusory truth effect. Front Psychol 2024; 14:1215432. [PMID: 38235277 PMCID: PMC10792064 DOI: 10.3389/fpsyg.2023.1215432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Accepted: 10/02/2023] [Indexed: 01/19/2024] Open
Abstract
Introduction People are more likely to believe repeated information; this is known as the Illusory Truth Effect (ITE). Recent research on the ITE has shown that semantic processing of statements plays a key role. In our day-to-day experience, we are often multitasking, which can impact our ongoing processing of information around us. In three experiments, we investigate how asking participants to engage in an ongoing secondary task in the ITE paradigm influences the magnitude of the effect of repetition on belief. Methods Using an adapted ITE paradigm, we embedded a secondary task into each trial of the encoding and/or test phase (e.g., having participants count the number of vowels in a target word of each trivia claim) and calculated the overall accuracy on the task. Results We found that the overall ITE was larger when participants had no ongoing secondary task during the experiment. Further, we predicted and found that higher accuracy on the secondary task was associated with a larger ITE. Discussion These findings provide initial evidence that engaging in an ongoing secondary task may reduce the impact of repetition. Our findings suggest that exploring the impact of secondary tasks on the ITE is a fruitful area for further research.
Collapse
Affiliation(s)
- Deva P. Ly
- School of Medicine and Psychology, Australian National University, Canberra, ACT, Australia
| | - Daniel M. Bernstein
- Department of Psychology, Kwantlen Polytechnic University, Surrey, BC, Canada
| | - Eryn J. Newman
- School of Medicine and Psychology, Australian National University, Canberra, ACT, Australia
| |
Collapse
|
26
|
Schmidt O, Heck DW. The relevance of syntactic complexity for truth judgments: A registered report. Conscious Cogn 2024; 117:103623. [PMID: 38142632 DOI: 10.1016/j.concog.2023.103623] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 11/04/2023] [Accepted: 12/08/2023] [Indexed: 12/26/2023]
Abstract
Fluency theories predict higher truth judgments for easily processed statements. We investigated two factors relevant for processing fluency: repetition and syntactic complexity. In three online experiments, we manipulated syntactic complexity by creating simple and complex versions of trivia statements. Experiments 1 and 2 replicated the repetition-based truth effect. However, syntactic complexity did not affect truth judgments although complex statements were processed slower than simple statements. This null effect is surprising given that both studies had high statistical power and varied in the relative salience of syntactic complexity. Experiment 3 provides a preregistered test of the discounting explanation by using improved trivia statements of equal length and by manipulating the salience of complexity in a randomized design. As predicted by fluency theories, simple statements were more likely judged as true than complex ones, while this effect was small and not moderated by the salience of complexity.
Collapse
Affiliation(s)
- Oliver Schmidt
- Department of Psychology, University of Marburg, Germany.
| | - Daniel W Heck
- Department of Psychology, University of Marburg, Germany
| |
Collapse
|
27
|
Mattavelli S, Béna J, Corneille O, Unkelbach C. People underestimate the influence of repetition on truth judgments (and more so for themselves than for others). Cognition 2024; 242:105651. [PMID: 37871412 DOI: 10.1016/j.cognition.2023.105651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Revised: 10/07/2023] [Accepted: 10/14/2023] [Indexed: 10/25/2023]
Abstract
People judge repeated statements as more truthful than new statements: a truth effect. In three pre-registered experiments (N = 463), we examined whether people expect repetition to influence truth judgments more for others than for themselves: a bias blind spot in the truth effect. In Experiments 1 and 2, using moderately plausible and implausible statements, respectively, the test for the bias blind spot did not pass the significance threshold set for a two-step sequential analysis. Experiment 3 considered moderately plausible statements but with a larger sample of participants. Additionally, it compared actual performance after a two-day delay with participants' predictions for themselves and others. This time, we found clear evidence for a bias blind spot in the truth effect. Experiment 3 also showed that participants underestimated the magnitude of the truth effect, especially so for themselves, and that predictions and actual truth effect scores were not significantly related. Finally, an integrative analysis focusing on a more conservative between-participant approach found clear frequentist and Bayesian evidence for a bias blind spot. Overall, the results indicate that people (1) hold beliefs about the effect of repetition on truth judgments, (2) believe that this effect is larger for others than for themselves, (3) underestimate the effect's magnitude, and (4) do so particularly for themselves.
Collapse
Affiliation(s)
- Simone Mattavelli
- University of Milano-Bicocca, Italy; Vita-Salute San Raffaele University, Italy.
| | - Jérémy Béna
- UCLouvain, Belgium; Aix-Marseille Université, France
| | | | | |
Collapse
|
28
|
Lee SJ, Lee CJ, Hwang H. The Role of Deliberative Cognitive Styles in Preventing Belief in Politicized COVID-19 Misinformation. HEALTH COMMUNICATION 2023; 38:2904-2914. [PMID: 36134653 DOI: 10.1080/10410236.2022.2125119] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Misinformation related to COVID-19 is a threat to public health. The present study examined the potential for deliberative cognitive styles such as actively open-minded thinking and need for evidence to deter belief in misinformation and promote belief in true information related to COVID-19. In addition, given how responses to the pandemic have been politicized, the roles of political orientation and motivated reasoning were also examined. We conducted a survey in South Korea (N = 1466) during May 2020. Participants answered measures related to demographics, open-minded thinking, need for evidence, and accuracy perceptions of COVID-19 misinformation and true information items. Multi-level analyses of the survey data found that while motivated reasoning was present, deliberative cognitive styles (actively open-minded thinking and need for evidence) decreased belief in misinformation without intensifying motivated reasoning tendencies. Findings also showed a political asymmetry whereby conservatives detected COVID-19 misinformation at a lower rate. Overall, results suggest that health communication related to COVID-19 misinformation should pay attention to conservative populations. Results also imply that interventions that activate deliberative cognitive styles hold promise in reducing belief in COVID-19 misinformation.
Collapse
Affiliation(s)
| | - Chul-Joo Lee
- Department of Communication, Seoul National University
| | - Hyunjung Hwang
- Department of Broadcasting Regulation Research, Korea Information Society Development Institute
| |
Collapse
|
29
|
Udry J, Barber SJ. The illusory truth effect requires semantic coherence across repetitions. Cognition 2023; 241:105607. [PMID: 37742428 DOI: 10.1016/j.cognition.2023.105607] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 08/24/2023] [Accepted: 08/25/2023] [Indexed: 09/26/2023]
Abstract
Repeated exposure to information increases its perceived truth, and this illusory truth effect is often explained by two theoretical frameworks: the fluency account and the referential theory of truth. Whereas the fluency account suggests that prior activation of a single referent within a statement should increase its perceived truth, the referential theory makes no such predictions. The referential theory instead proposes that when a statement is processed, it activates the corresponding memory referents within that statement and strengthens the connection between these referents in the semantic memory network. Because repeated statements will have more coherent corresponding referents than new statements, they are perceived as relatively truer. Experiments 1 and 2 focused on testing the fluency account, with participants exposed to one or two of a statement's referents before evaluating that statement's truth. Experiments 3 and 4 focused on the referential theory by exposing participants to non-critical facts that linked together two of a critical statement's referents before evaluating the truth of the critical statements. We consistently observed a standard illusory truth effect, such that facts that repeated verbatim were rated as truer than new facts. However, perceived truth was not affected by prior exposure to the critical statement's topic (Experiment 1) or by prior exposure to non-critical facts related to the same topic(s) as the critical statement (Experiment 2). There was also no boost in perceived truth following prior exposure to non-critical facts that linked together two of the primary referents of the critical statement but did so in a semantically distinct manner from how those same referents were linked in the critical statement itself (Experiments 3 and 4).
However, Experiment 4 demonstrated that perceived truth significantly increased if there was prior exposure to non-critical facts that linked two of the critical statement's primary referents in a way that was semantically coherent with how those same referents were linked within the critical statement. Together, these results are consistent with the referential theory and suggest that semantic consistency across repetitions plays a crucial role in producing repetition-based illusory truth effects.
Collapse
Affiliation(s)
- Jessica Udry
- Department of Psychology, Georgia State University, Atlanta, GA, USA
| | - Sarah J Barber
- Department of Psychology, Georgia State University, Atlanta, GA, USA; Gerontology Institute, Georgia State University, Atlanta, GA, USA.
| |
Collapse
|
30
|
Béna J, Rihet M, Carreras O, Terrier P. Repetition could increase the perceived truth of conspiracy theories. Psychon Bull Rev 2023; 30:2397-2406. [PMID: 37219761 PMCID: PMC10204694 DOI: 10.3758/s13423-023-02276-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/18/2023] [Indexed: 05/24/2023]
Abstract
Conspiracy theories can be encountered repeatedly, which raises the issue of the effect of their repeated exposure on beliefs. Earlier studies found that repetition increases truth judgments of factual statements, whether they are uncertain, highly implausible, or fake news, for instance. Would this "truth effect" be observed with conspiracy statements? If so, is the effect size smaller than the typical truth effect, and is it associated with individual differences such as cognitive style and conspiracy mentality? In the present preregistered study, we addressed these three issues. We asked participants to provide binary truth judgments to conspiracy and factual statements already displayed in an exposure phase (an interest judgment task) or that were new (displayed only in the truth judgment task). We measured participants' cognitive style with the three-item Cognitive Reflection Test (CRT), and conspiracy mentality with the Conspiracy Mentality Questionnaire (CMQ). Importantly, we found that repetition increased truth judgments of conspiracy theories, unmoderated by cognitive style and conspiracy mentality. Additionally, we found that the truth effect was smaller with conspiracy theories than with uncertain factual statements, and suggest explanations for this difference. The results suggest that repetition may be a simple way to increase belief in conspiracy theories. Whether repetition increases conspiracy beliefs in natural settings and how it contributes to conspiracism compared to other factors are important questions for future research.
Collapse
Affiliation(s)
- Jérémy Béna
- UCLouvain, PSP IPSY, 10 Place du Cardinal Mercier, 1348, Louvain-la-Neuve, Belgium.
| | - Mathias Rihet
- CLLE, Université de Toulouse, CNRS, Toulouse, France
| | | | | |
Collapse
|
31
|
Scales D, Hurth L, Xi W, Gorman S, Radhakrishnan M, Windham S, Akunne A, Florman J, Leininger L, Gorman J. Addressing Antivaccine Sentiment on Public Social Media Forums Through Web-Based Conversations Based on Motivational Interviewing Techniques: Observational Study. JMIR INFODEMIOLOGY 2023; 3:e50138. [PMID: 37962940 PMCID: PMC10685291 DOI: 10.2196/50138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Revised: 09/16/2023] [Accepted: 09/30/2023] [Indexed: 11/15/2023]
Abstract
BACKGROUND Health misinformation shared on social media can have negative health consequences; yet, there is a dearth of field research testing interventions to address health misinformation in real time, digitally, and in situ on social media. OBJECTIVE We describe a field study of a pilot program of "infodemiologists" trained with evidence-informed intervention techniques heavily influenced by principles of motivational interviewing. Here we provide a detailed description of the nature of infodemiologists' interventions on posts sharing misinformation about COVID-19 vaccines, present an initial evaluation framework for such field research, and use available engagement metrics to quantify the impact of these in-group messengers on the web-based threads on which they are intervening. METHODS We monitored Facebook (Meta Platforms, Inc) profiles of news organizations marketing to 3 geographic regions (Newark, New Jersey; Chicago, Illinois; and central Texas). Between December 2020 and April 2021, infodemiologists intervened in 145 Facebook news posts that generated comments containing either false or misleading information about vaccines or overt antivaccine sentiment. Engagement (emojis plus replies) data were collected on Facebook news posts, the initial comment containing misinformation (level 1 comment), and the infodemiologist's reply (level 2 reply comment). A comparison-group evaluation design was used, with numbers of replies, emoji reactions, and engagements for level 1 comments compared with the median metrics of matched comments using the Wilcoxon signed rank test. Level 2 reply comments (intervention) were also benchmarked against the corresponding metric of matched reply comments (control) using the Wilcoxon signed rank test (paired at the level 1 comment level). Infodemiologists' level 2 reply comments (intervention) and matched reply comments (control) were further compared using 3 Poisson regression models. 
RESULTS In total, 145 interventions were conducted on 132 Facebook news posts. The level 1 comments received a median of 3 replies, 3 reactions, and 7 engagements. The matched comments received a median of 1.5 (median of IQRs 3.75) engagements. Infodemiologists made 322 level 2 reply comments, precipitating 189 emoji reactions and a median of 0.5 (median of IQRs 0) engagements. The matched reply comments received a median of 1 (median of IQRs 2.5) engagement. Compared to matched comments, level 1 comments received more replies, emoji reactions, and engagements. Compared to matched reply comments, level 2 reply comments received fewer and narrower ranges of replies, reactions, and engagements, except for the median comparison for replies. CONCLUSIONS Overall, empathy-first communication strategies based on motivational interviewing garnered less engagement relative to matched controls. One possible explanation is that our interventions quieted contentious, misinformation-laden threads about vaccines on social media. This work reinforces research on accuracy nudges and cyberbullying interventions that also reduce engagement. More research leveraging field studies of real-time interventions is needed, yet data transparency by technology platforms will be essential to facilitate such experiments.
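The paired Wilcoxon signed-rank comparisons described above can be sketched in a few lines. Below is an illustrative stdlib implementation of the W statistic run on made-up engagement counts, not the study's data or analysis code (a real analysis would typically call a library routine such as scipy.stats.wilcoxon):

```python
def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank statistic W = min(W+, W-).

    Returns (W, n), where n is the number of non-zero paired
    differences actually ranked (zero differences are dropped,
    as is standard).
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # Rank absolute differences, averaging ranks within tie groups.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus), n

# Hypothetical paired engagement counts: intervention replies vs
# their matched control replies.
w, n = wilcoxon_signed_rank([5, 3, 8, 4], [2, 3, 6, 1])
```

A small W relative to n signals a systematic one-sided difference between the paired conditions; the p-value would then come from the W sampling distribution.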
Collapse
Affiliation(s)
- David Scales
- Weill Cornell Medicine, New York City, NY, United States
- Critica, Bronx, NY, United States
| | | | - Wenna Xi
- Weill Cornell Medicine, New York City, NY, United States
| | | | | | | | | | | | - Lindsey Leininger
- Tuck School of Business, Dartmouth College, Hanover, NH, United States
| | | |
Collapse
|
32
|
Gottlieb E, Baker M, Détienne F. Iranian scientists and French showers: collaborative fact-checking of identity-salient online information. Front Psychol 2023; 14:1295130. [PMID: 38022959 PMCID: PMC10665876 DOI: 10.3389/fpsyg.2023.1295130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2023] [Accepted: 10/23/2023] [Indexed: 12/01/2023] Open
Abstract
In this study, we investigate what leads people to fact-check online information, how they fact-check such information in practice, how fact-checking affects their judgments about the information's credibility, and how each of the above processes is affected by the salience of the information to readers' cultural identities. Eight pairs of adult participants were recruited from diverse cultural backgrounds to participate online in joint fact-checking of suspect Tweets. To examine their collaborative deliberations we developed a novel experimental design and analytical model. Our analyses indicate that the salience of online information to people's cultural identities influences their decision to fact-check it, that fact-checking deliberations are often non-linear and iterative, that collaborative fact-checking leads people to revise their initial judgments about the credibility of online information, and that when online information is highly salient to people's cultural identities, they apply different standards of credibility when fact-checking it. In conclusion, we propose that cultural identity is an important factor in the fact-checking of online information, and that joint fact-checking of online information by people from diverse cultural backgrounds may have significant potential as an educational tool to reduce people's susceptibility to misinformation.
Collapse
Affiliation(s)
- Eli Gottlieb
- Institut Interdisciplinaire de l'Innovation, Centre National de la Recherche Scientifique, and Télécom Paris, Palaiseau, France
- Graduate School of Education and Human Development, The George Washington University, Washington, DC, United States
| | - Michael Baker
- Institut Interdisciplinaire de l'Innovation, Centre National de la Recherche Scientifique, and Télécom Paris, Palaiseau, France
| | - Françoise Détienne
- Institut Interdisciplinaire de l'Innovation, Centre National de la Recherche Scientifique, and Télécom Paris, Palaiseau, France
| |
Collapse
|
33
|
Duffy FF, McDonnell GP, Auslander MV, Bricault SA, Kim PY, Rachlin NW, Quartana PJ. US Soldiers' Individual and Unit-level Factors Associated with Perceptions of Disinformation in the Military Context. Mil Med 2023; 188:698-708. [PMID: 37948291 DOI: 10.1093/milmed/usad322] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 05/01/2023] [Accepted: 07/31/2023] [Indexed: 11/12/2023] Open
Abstract
INTRODUCTION Although the US Government considers threats of misinformation, disinformation, and mal-information to rise to the level of terrorism, little is known about service members' experiences with disinformation in the military context. We examined soldiers' perceptions of disinformation impact on the Army and their units. We also investigated associations between disinformation perceptions and soldiers' sociodemographic characteristics, reported use of fact-checking, and perceptions of unit cohesion and readiness. METHODS Active-duty soldiers (N = 19,465) across two large installations in the Southwest US completed an anonymous online survey. RESULTS Sixty-six percent of soldiers agreed that disinformation has a negative impact on the Army. Thirty-three percent of soldiers perceived disinformation as a problem in their unit. Females were more likely to agree that disinformation has a negative impact on the Army and is a problem in their unit. Higher military rank was associated with lower odds of agreeing that disinformation is a problem in units. Most soldiers were confident about their ability to recognize disinformation (62%) and reported using fact-checking resources (53%), and these factors were most often endorsed by soldiers who agreed that disinformation is a problem for the Army and their unit. Soldiers' perceptions of unit cohesion and readiness were negatively associated with the perception that disinformation is a problem in their unit. CONCLUSION While the majority of soldiers viewed disinformation as a problem across the Army, fewer perceived it as problematic within their units. Higher levels of reported fact-checking were most evident among those who perceived disinformation as a problem, suggesting that enhancing awareness of the problem of disinformation alone could help mitigate its deleterious impact. Perceptions of disinformation problems within units were associated with soldiers' perceptions of lower unit cohesion and readiness, highlighting the impact of misinformation, disinformation, and mal-information on force readiness. Limitations and future directions are discussed.
Collapse
Affiliation(s)
- Farifteh Firoozmand Duffy
- Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
| | - Gerald P McDonnell
- Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
| | - Margeaux V Auslander
- Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
| | - Stephanie A Bricault
- Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
| | | | | | - Phillip J Quartana
- Center for Military Psychiatry and Neuroscience, Walter Reed Army Institute of Research, Silver Spring, MD 20910, USA
| |
Collapse
|
34
|
Abel M, Bäuml KHT. Item-method directed forgetting and perceived truth of news headlines. Memory 2023; 31:1371-1386. [PMID: 37819019 DOI: 10.1080/09658211.2023.2267191] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Accepted: 09/06/2023] [Indexed: 10/13/2023]
Abstract
Research on item-method directed forgetting (IMDF) suggests that memories can be voluntarily forgotten. However, IMDF is usually examined with relatively simple study materials, such as single words or pictures. In the present study, we examined voluntary forgetting of news headlines from (presumably) untrustworthy sources. Experiment 1 found intact IMDF when to-be-forgotten headlines were characterised as untrustworthy and to-be-remembered headlines were characterised as trustworthy. Experiment 2 separated remember/forget cues and trustworthiness prompts. Forget cues alone had a large effect on memory, but only a small reducing effect on perceived truth. In contrast, trustworthiness prompts alone had essentially no effect on memory, but a large effect on perceived truth. Finally, Experiment 3 fully crossed forget/remember cues and trustworthiness prompts, revealing that forget cues can reduce memory irrespective of whether headlines are characterised as trustworthy or untrustworthy. Moreover, forget cues may bias source attributions, which can explain their small reducing effect on perceived truth. Overall, this work suggests that news headlines can be voluntarily forgotten. At least when people are motivated to forget information from untrustworthy sources, such forgetting may be helpful for curtailing the spread of false information.
Collapse
Affiliation(s)
- Magdalena Abel
- Department of Experimental Psychology, Regensburg University, Regensburg, Germany
| | - Karl-Heinz T Bäuml
- Department of Experimental Psychology, Regensburg University, Regensburg, Germany
| |
Collapse
|
35
|
Ahmed S, Rasul ME. Examining the association between social media fatigue, cognitive ability, narcissism and misinformation sharing: cross-national evidence from eight countries. Sci Rep 2023; 13:15416. [PMID: 37723265 PMCID: PMC10507063 DOI: 10.1038/s41598-023-42614-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Accepted: 09/12/2023] [Indexed: 09/20/2023] Open
Abstract
Several studies have explored the causes and consequences of public engagement with misinformation and, more recently, COVID-19 misinformation. However, there is still a need to understand the mechanisms that cause misinformation propagation on social media. In addition, evidence from non-Western societies remains rare. This study reports on survey evidence from eight countries to examine whether social media fatigue can lead users to believe misinformation, which in turn shapes their sharing intentions. Our insights also build on prior cognitive and personality literature by exploring how this mechanism is conditional upon users' cognitive ability and narcissism traits. The results suggest that social media fatigue can foster false beliefs in misinformation, which translate into sharing on social media. We also find that those with high levels of cognitive ability are less likely to believe and share misinformation. However, those with low cognitive ability and high levels of narcissism are most likely to share misinformation on social media due to social media fatigue. This study is one of the first to provide cross-national comparative evidence highlighting the adverse effects of social media fatigue on misinformation propagation and establishing that the relationship is not universal but dependent on both cognitive and dark personality traits of individuals.
Collapse
Affiliation(s)
- Saifuddin Ahmed
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore.
| | | |
Collapse
|
36
|
Nah S, Williamson LD, Kahlor LA, Atkinson L, Ntang-Beb JL, Upshaw SJ. COVID-19 Vaccine Hesitancy in Cameroon: The Role of Medical Mistrust and Social Media Use. JOURNAL OF HEALTH COMMUNICATION 2023; 28:619-632. [PMID: 37622325 DOI: 10.1080/10810730.2023.2250287] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/26/2023]
Abstract
Most African countries report low COVID-19 vaccination rates (Msellati et al., 2022; WHO Africa, 2020). This study focuses on factors associated with vaccine hesitancy in Cameroon. Social media use and medical mistrust have been suggested as key variables that may increase vaccine hesitancy. Adopting the information-related perspective guided by the risk information seeking and processing model, the current research explored how social media use and medical mistrust are related to vaccine hesitancy among Cameroonians. Survey results from a sample of 1,000 Cameroonians fielded in early 2022 showed that social media use and medical mistrust were positively associated with belief in misinformation related to the COVID-19 vaccine. Belief in misinformation about the COVID-19 vaccine was negatively associated with perceived information insufficiency. A positive relationship between perceived information insufficiency and information seeking, as well as a negative relationship between information seeking and vaccine hesitancy were also found. Theoretical and practical implications are discussed.
Collapse
Affiliation(s)
- Soya Nah
- The Stan Richards School of Advertising & Public Relations, The University of Texas at Austin, Austin, Texas, USA
| | - Lillie D Williamson
- Department of Communication Arts, The University of Wisconsin-Madison, Madison, Wisconsin, USA
| | - Lee Ann Kahlor
- The Stan Richards School of Advertising & Public Relations, The University of Texas at Austin, Austin, Texas, USA
| | - Lucy Atkinson
- The Stan Richards School of Advertising & Public Relations, The University of Texas at Austin, Austin, Texas, USA
| | - Jean-Louis Ntang-Beb
- Advanced School of Mass Communication, University of Yaounde 2, Yaoundé, Cameroon
| | - Sean J Upshaw
- The Stan Richards School of Advertising & Public Relations, The University of Texas at Austin, Austin, Texas, USA
| |
Collapse
|
37
|
Lisi M. Navigating the COVID-19 infodemic: the influence of metacognitive efficiency on health behaviours and policy attitudes. ROYAL SOCIETY OPEN SCIENCE 2023; 10:230417. [PMID: 37680503 PMCID: PMC10480698 DOI: 10.1098/rsos.230417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Accepted: 08/18/2023] [Indexed: 09/09/2023]
Abstract
The COVID-19 pandemic has been accompanied by an infodemic of misinformation and increasing polarization around public health measures, such as social distancing and national lockdowns. In this study, I examined metacognitive efficiency (the extent to which the subjective feeling of knowing predicts the objective accuracy of knowledge) as a tool to understand and measure the assimilation of misinformation in a balanced sample of Great Britain's population (N = 1689), surveyed at the end of the third national lockdown. Using a signal-detection theory approach to quantify metacognitive efficiency, I found that at the population level, metacognitive efficiency for COVID-19 knowledge was impaired compared with general knowledge, indicating a worse alignment between confidence levels and the actual ability to discern true and false statements. Crucially, individual differences in metacognitive efficiency related to COVID-19 knowledge predicted health-protective behaviours, vaccination intentions and attitudes towards public health measures, even after accounting for the level of knowledge itself and demographic covariates, such as education, income and political alignment. These results reveal the significant impact of misinformation on public beliefs and suggest that fostering confidence in accurate knowledge should be a key target for science communication efforts aimed at promoting compliance with public health and social measures.
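To illustrate the signal-detection logic described above: type-1 sensitivity (d') captures how well true and false statements are discriminated, while metacognitive measures ask how well confidence tracks accuracy. The sketch below uses a simple confidence gap as a crude stand-in for a full metacognitive-efficiency measure such as meta-d'; function names and data are hypothetical, not the paper's analysis code:

```python
from statistics import NormalDist, mean

def dprime(hit_rate, fa_rate):
    """Type-1 sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def confidence_gap(trials):
    """Crude metacognition proxy: mean confidence on correct minus
    mean confidence on incorrect truth judgments.

    trials: list of (correct, confidence) pairs.
    """
    correct = [conf for ok, conf in trials if ok]
    incorrect = [conf for ok, conf in trials if not ok]
    return mean(correct) - mean(incorrect)

# A responder whose confidence tracks accuracy shows a positive gap.
trials = [(True, 0.9), (True, 0.7), (False, 0.6), (False, 0.4)]
gap = confidence_gap(trials)
```

Impaired metacognitive efficiency for COVID-19 knowledge, in these terms, means confidence discriminating correct from incorrect answers more weakly than the knowledge scores alone would predict.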
Collapse
Affiliation(s)
- Matteo Lisi
- Department of Psychology, University of Essex, Essex, UK
- Department of Psychology, Royal Holloway, University of London, London, UK
| |
Collapse
|
38
|
Arechar AA, Allen J, Berinsky AJ, Cole R, Epstein Z, Garimella K, Gully A, Lu JG, Ross RM, Stagnaro MN, Zhang Y, Pennycook G, Rand DG. Understanding and combatting misinformation across 16 countries on six continents. Nat Hum Behav 2023; 7:1502-1513. [PMID: 37386111 DOI: 10.1038/s41562-023-01641-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Accepted: 05/24/2023] [Indexed: 07/01/2023]
Abstract
The spread of misinformation online is a global problem that requires global solutions. To that end, we conducted an experiment in 16 countries across 6 continents (N = 34,286; 676,605 observations) to investigate predictors of susceptibility to misinformation about COVID-19, and interventions to combat the spread of this misinformation. In every country, participants with a more analytic cognitive style and stronger accuracy-related motivations were better at discerning truth from falsehood; valuing democracy was also associated with greater truth discernment, whereas endorsement of individual responsibility over government support was negatively associated with truth discernment in most countries. Subtly prompting people to think about accuracy had a generally positive effect on the veracity of news that people were willing to share across countries, as did minimal digital literacy tips. Finally, aggregating the ratings of our non-expert participants was able to differentiate true from false headlines with high accuracy in all countries via the 'wisdom of crowds'. The consistent patterns we observe suggest that the psychological factors underlying the misinformation challenge are similar across different regional settings, and that similar solutions may be broadly effective.
Collapse
Affiliation(s)
- Antonio A Arechar
- Center for Research and Teaching in Economics (CIDE), Aguascalientes, Mexico
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Centre for Decision Research and Experimental Economics, University of Nottingham, Nottingham, UK
| | - Jennifer Allen
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Adam J Berinsky
- Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | - Ziv Epstein
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | | | - Jackson G Lu
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Robert M Ross
- Department of Philosophy, Macquarie University, Sydney, New South Wales, Australia
| | - Michael N Stagnaro
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Yunhao Zhang
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Gordon Pennycook
- Hill/Levene Schools of Business, University of Regina, Regina, Saskatchewan, Canada.
- Department of Psychology, University of Regina, Regina, Saskatchewan, Canada.
| | - David G Rand
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
| |
Collapse
|
39
|
Pillai RM, Fazio LK, Effron DA. Repeatedly Encountered Descriptions of Wrongdoing Seem More True but Less Unethical: Evidence in a Naturalistic Setting. Psychol Sci 2023; 34:863-874. [PMID: 37428445 DOI: 10.1177/09567976231180578] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/11/2023] Open
Abstract
When news about moral transgressions goes viral on social media, the same person may repeatedly encounter identical reports about a wrongdoing. In a longitudinal experiment (N = 607 U.S. adults from Mechanical Turk), we found that these repeated encounters can affect moral judgments. As participants went about their lives, we text-messaged them news headlines describing corporate wrongdoings (e.g., a cosmetics company harming animals). After 15 days, they rated these wrongdoings as less unethical than new wrongdoings. Extending prior laboratory research, these findings reveal that repetition can have a lasting effect on moral judgments in naturalistic settings, that affect plays a key role, and that increasing the number of repetitions generally makes moral judgments more lenient. Repetition also made fictitious descriptions of wrongdoing seem truer, connecting this moral-repetition effect with past work on the illusory-truth effect. The more times we hear about a wrongdoing, the more we may believe it, but the less we may care.
Collapse
Affiliation(s)
- Raunak M Pillai
- Department of Psychology and Human Development, Vanderbilt University
| | - Lisa K Fazio
- Department of Psychology and Human Development, Vanderbilt University
| | - Daniel A Effron
- Organisational Behaviour Subject Area, London Business School
| |
Collapse
|
40
|
Markowitz DM, Levine TR, Serota KB, Moore AD. Cross-checking journalistic fact-checkers: The role of sampling and scaling in interpreting false and misleading statements. PLoS One 2023; 18:e0289004. [PMID: 37490489 PMCID: PMC10368232 DOI: 10.1371/journal.pone.0289004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Accepted: 07/01/2023] [Indexed: 07/27/2023] Open
Abstract
Professional fact-checkers and fact-checking organizations provide a critical public service. Skeptics of modern media, however, often question the accuracy and objectivity of fact-checkers. The current study assessed agreement between two independent fact-checkers, The Washington Post and PolitiFact, regarding the false and misleading statements of then President Donald J. Trump. Differences in statement selection and deceptiveness scaling were investigated. The Washington Post checked PolitiFact fact-checks 77.4% of the time (22.6% selection disagreement). Moderate agreement was observed for deceptiveness scaling. Nearly complete agreement was observed for bottom-line attributed veracity. Additional cross-checking against other fact-checkers (Snopes, FactCheck.org), against original sources, and against fact-checks from the first 100 days of President Joe Biden's administration was inconsistent with potential ideology effects. Our evidence suggests fact-checking is a difficult enterprise, there is considerable variability between fact-checkers in the raw number of statements that are checked, and finally, selection and scaling account for apparent discrepancies among fact-checkers.
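Rater-agreement comparisons like these are often summarized with a chance-corrected statistic such as Cohen's kappa. The sketch below is illustrative only and not necessarily the measure the authors used; the labels are made up:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' labels."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[label] * counts2[label]
                   for label in counts1.keys() | counts2.keys()) / n ** 2
    if expected == 1:  # both raters used a single identical label
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical deceptiveness labels from two fact-checkers on the
# same ten statements.
wapo = ["false"] * 5 + ["misleading"] * 5
politifact = ["false"] * 4 + ["misleading"] * 6
kappa = cohens_kappa(wapo, politifact)
```

Raw percent agreement (like the 77.4% selection figure above) ignores agreement expected by chance; kappa discounts it, which is why "moderate agreement" on scaling can coexist with high raw overlap.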
Collapse
Affiliation(s)
- David M. Markowitz
- Department of Communication, Michigan State University, East Lansing, MI, United States of America
| | - Timothy R. Levine
- Department of Communication Studies, University of Alabama at Birmingham, Birmingham, AL, United States of America
| | - Kim B. Serota
- Department of Management and Marketing, School of Business Administration, Oakland University, Oakland, MI, United States of America
| | - Alivia D. Moore
- Department of Communication, Cornell University, Ithaca, NY, United States of America
| |
Collapse
|
41
|
Nerino V. Overcome the fragmentation in online propaganda literature: the role of cultural and cognitive sociology. FRONTIERS IN SOCIOLOGY 2023; 8:1170447. [PMID: 37497101 PMCID: PMC10366602 DOI: 10.3389/fsoc.2023.1170447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Accepted: 06/19/2023] [Indexed: 07/28/2023]
Abstract
Evidence concerning the proliferation of propaganda on social media has renewed scientific interest in persuasive communication practices, resulting in a thriving yet quite disconnected scholarship. This fragmentation poses a significant challenge, as the absence of a structured and comprehensive organization of this extensive literature hampers the interpretation of findings, thus jeopardizing the understanding of online propaganda functioning. To address this fragmentation, I propose a systematization approach that involves utilizing Druckman's Generalizing Persuasion Framework as a unified interpretative tool to organize this scholarly work. By means of this approach, it is possible to systematically identify the various strands within the field, detect their respective shortcomings, and formulate new strategies to bridge these research strands and advance our knowledge of how online propaganda operates. I conclude by arguing that these strategies should involve the sociocultural perspectives offered by cognitive and cultural sociology, as these provide important insights and research tools to disentangle and evaluate the role played by supra-individual factors in the production, distribution, consumption, and evaluation of online propaganda.
Collapse
Affiliation(s)
- Valentina Nerino
- Interdisciplinary Centre for Gender Studies (ICFG), University of Bern, Bern, Switzerland
- Department of Sociology, University of Trento, Trento, Italy
| |
Collapse
|
42
|
Murphy G, Ching D, Twomey J, Linehan C. Face/Off: Changing the face of movies with deepfakes. PLoS One 2023; 18:e0287503. [PMID: 37410765 PMCID: PMC10325052 DOI: 10.1371/journal.pone.0287503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 06/05/2023] [Indexed: 07/08/2023] Open
Abstract
There are growing concerns about the potential for deepfake technology to spread misinformation and distort memories, though many also highlight creative applications such as recasting movies using other actors, or younger versions of the same actor. In the current mixed-methods study, we presented participants (N = 436) with deepfake videos of fictitious movie remakes (such as Will Smith starring as Neo in The Matrix). We observed an average false memory rate of 49%, with many participants remembering the fake remake as better than the original film. However, deepfakes were no more effective than simple text descriptions at distorting memory. Though our findings suggest that deepfake technology is not uniquely placed to distort movie memories, our qualitative data suggested most participants were uncomfortable with deepfake recasting. Common concerns were disrespecting artistic integrity, disrupting the shared social experience of films, and a discomfort at the control and options this technology would afford.
Collapse
Affiliation(s)
- Gillian Murphy
- School of Applied Psychology, University College Cork, Cork, Ireland
- Lero, The Science Foundation Ireland Centre for Software Research, Limerick, Ireland
| | - Didier Ching
- School of Applied Psychology, University College Cork, Cork, Ireland
- Lero, The Science Foundation Ireland Centre for Software Research, Limerick, Ireland
| | - John Twomey
- School of Applied Psychology, University College Cork, Cork, Ireland
- Lero, The Science Foundation Ireland Centre for Software Research, Limerick, Ireland
| | - Conor Linehan
- School of Applied Psychology, University College Cork, Cork, Ireland
- Lero, The Science Foundation Ireland Centre for Software Research, Limerick, Ireland
| |
Collapse
|
43
|
Prike T, Blackley P, Swire-Thompson B, Ecker UKH. Examining the replicability of backfire effects after standalone corrections. Cogn Res Princ Implic 2023; 8:39. [PMID: 37395864 DOI: 10.1186/s41235-023-00492-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 06/06/2023] [Indexed: 07/04/2023] Open
Abstract
Corrections are a frequently used and effective tool for countering misinformation. However, concerns have been raised that corrections may introduce false claims to new audiences when the misinformation is novel. This is because boosting the familiarity of a claim can increase belief in that claim, and thus exposing new audiences to novel misinformation (even as part of a correction) may inadvertently increase misinformation belief. Such an outcome could be conceptualized as a familiarity backfire effect, whereby a familiarity boost increases false-claim endorsement above a control-condition or pre-correction baseline. Here, we examined whether standalone corrections (that is, corrections presented without initial misinformation exposure) can backfire and increase participants' reliance on the misinformation in their subsequent inferential reasoning, relative to a no-misinformation, no-correction control condition. Across three experiments (total N = 1156) we found that standalone corrections did not backfire immediately (Experiment 1) or after a one-week delay (Experiment 2). However, there was some mixed evidence suggesting corrections may backfire when there is skepticism regarding the correction (Experiment 3). Specifically, in Experiment 3, we found the standalone correction to backfire in open-ended responses, but only when there was skepticism towards the correction. However, this did not replicate with the rating scales measure. Future research should further examine whether skepticism towards the correction is the first replicable mechanism for backfire effects to occur.
Affiliation(s)
- Toby Prike: School of Psychological Science, University of Western Australia, Perth, Australia
- Phoebe Blackley: School of Psychological Science, University of Western Australia, Perth, Australia
- Briony Swire-Thompson: Network Science Institute, Northeastern University, Boston, USA; Institute of Quantitative Social Science, Harvard University, Cambridge, USA
- Ullrich K H Ecker: School of Psychological Science, University of Western Australia, Perth, Australia; Public Policy Institute, University of Western Australia, Perth, Australia
44
Thakral PP, Barberio NM, Devitt AL, Schacter DL. Constructive episodic retrieval processes underlying memory distortion contribute to creative thinking and everyday problem solving. Mem Cognit 2023; 51:1125-1144. [PMID: 36526954 PMCID: PMC10272288 DOI: 10.3758/s13421-022-01377-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/28/2022] [Indexed: 12/23/2022]
Abstract
Constructive episodic retrieval processes play an adaptive role in supporting divergent thinking (i.e., creatively combining diverse bits of information) and means-end problem solving (i.e., generating steps to solve a social problem). However, the constructive nature of episodic memory that supports these adaptive functions also leads to memory error. In three experiments, we aimed to identify a direct link between divergent thinking and means-end problem solving - as assessed in the Alternative Uses Task (AUT) and the Means-End Problem Solving (MEPS) task - and the generation of false memories in the Deese-Roediger-McDermott paradigm. In Experiment 1, we replicated prior findings that false memory is positively correlated with performance on the AUT, and also showed for the first time that better performance on the MEPS task is associated with increased false recall. In Experiment 2, we demonstrated that the link between false recall and MEPS performance did not extend to other forms of problem solving, as assessed with the Everyday Descriptions Task (EDT). In Experiment 3, when the EDT was preceded by the MEPS task to encourage participants to adopt a similar episodic problem-solving strategy, performance on both tasks correlated with false memory. These findings provide evidence for a direct link between the adaptive benefits of constructive episodic processes, in the form of enhanced divergent creative thinking and problem solving, and their costs, in the form of increased memory error.
Affiliation(s)
- Preston P Thakral: Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA 02467, USA
- Natasha M Barberio: Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA 02467, USA
- Aleea L Devitt: School of Psychology, The University of Waikato, Hamilton, New Zealand
45
Vellani V, Zheng S, Ercelik D, Sharot T. The illusory truth effect leads to the spread of misinformation. Cognition 2023; 236:105421. [PMID: 36871397 PMCID: PMC10636596 DOI: 10.1016/j.cognition.2023.105421] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 02/21/2023] [Accepted: 02/22/2023] [Indexed: 03/06/2023]
Abstract
Misinformation can negatively impact people's lives in domains ranging from health to politics. An important research goal is to understand how misinformation spreads in order to curb it. Here, we test whether and how a single repetition of misinformation fuels its spread. Over two experiments (N = 260), participants indicated which statements they would like to share with other participants on social media. Half of the statements were repeated and half were new. The results reveal that participants were more likely to share statements they had previously been exposed to. Importantly, the relationship between repetition and sharing was mediated by perceived accuracy: repetition of misinformation biased people's judgment of accuracy and, as a result, fuelled the spread of misinformation. The effect was observed in the domains of both health (Exp 1) and general knowledge (Exp 2), suggesting it is not tied to a specific domain.
Affiliation(s)
- Valentina Vellani: Affective Brain Lab, Department of Experimental Psychology, University College London, London WC1H 0AP, UK; Max Planck University College London Centre for Computational Psychiatry and Ageing Research, London WC1B 5EH, UK
- Sarah Zheng: Affective Brain Lab, Department of Experimental Psychology, University College London, London WC1H 0AP, UK; Max Planck University College London Centre for Computational Psychiatry and Ageing Research, London WC1B 5EH, UK
- Dilay Ercelik: Affective Brain Lab, Department of Experimental Psychology, University College London, London WC1H 0AP, UK; Max Planck University College London Centre for Computational Psychiatry and Ageing Research, London WC1B 5EH, UK
- Tali Sharot: Affective Brain Lab, Department of Experimental Psychology, University College London, London WC1H 0AP, UK; Max Planck University College London Centre for Computational Psychiatry and Ageing Research, London WC1B 5EH, UK; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
46
Ruggieri S, Bonfanti RC, Santoro G, Passanisi A, Pace U. Fake News and the Sleeper Effect in Social Media Posts: the Case of Perception of Safety in the Workplace. CYBERPSYCHOLOGY, BEHAVIOR AND SOCIAL NETWORKING 2023. [PMID: 37335915 DOI: 10.1089/cyber.2022.0199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/21/2023]
Abstract
Fake news and misinformation on social media platforms have become two of the most pressing problems of recent years, and understanding the underlying memory mechanisms involved is fundamental to developing targeted intervention programs. In this study, 324 white-collar workers viewed Facebook posts focused on coronavirus disease 2019 (COVID-19) prevention norms in the workplace. In a within-participants design, we manipulated the message and the source to expose each participant to real news, real news presented with a discounting cue (the sleeper-effect condition), and fake news. The results show that participants were more susceptible to fake news at a 1-week delayed posttest following a memory recall process. Furthermore, they easily remembered the message, but not the source, which did not differ across the real-news conditions. We discuss the results in light of the sleeper effect and theories of fake news.
Affiliation(s)
- Stefano Ruggieri: Università degli Studi di Enna "Kore," Faculty of Human and Social Sciences, Enna, Italy
- Rubinia C Bonfanti: Department of Psychology, Educational Science and Human Movement, University of Palermo, Italy
- Gianluca Santoro: Università degli Studi di Enna "Kore," Faculty of Human and Social Sciences, Enna, Italy
- Alessia Passanisi: Università degli Studi di Enna "Kore," Faculty of Human and Social Sciences, Enna, Italy
- Ugo Pace: Università degli Studi di Enna "Kore," Faculty of Human and Social Sciences, Enna, Italy
47
Kwek A, Peh L, Tan J, Lee JX. Distractions, analytical thinking and falling for fake news: A survey of psychological factors. HUMANITIES & SOCIAL SCIENCES COMMUNICATIONS 2023; 10:319. [PMID: 37333884 PMCID: PMC10259813 DOI: 10.1057/s41599-023-01813-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Accepted: 05/30/2023] [Indexed: 06/20/2023]
Abstract
Analytical thinking safeguards us against believing or spreading fake news. In various forms, this common assumption has been reported, investigated, or implemented in fake news education programs. Some have associated this assumption with the inverse claim that distractions from analytical thinking may render us vulnerable to believing or spreading fake news. This paper surveys the research conducted between 2016 and 2022 on psychological factors influencing one's susceptibility to believing or spreading fake news, considers which of those factors are plausible distractors to one's exercise of analytical thinking, and discusses some implications of treating them as such. From these, the paper draws five conclusions: (1) It is not analytical thinking per se, but analytical thinking directed at evaluating truth, that safeguards us from believing or spreading fake news. (2) Psychological factors can distract us both from exercising analytical thinking and in exercising analytical thinking. (3) Whether a psychological factor functions as a distractor from analytical thinking or in analytical thinking may depend on contextual factors. (4) Measurements of analytical thinking may not indicate vulnerability to believing or spreading fake news. (5) The relevance of motivated reasoning to our tendency to believe fake news should not yet be dismissed. These findings may be useful in guiding future research at the intersection of analytical thinking and susceptibility to believing or spreading fake news.
Affiliation(s)
- Adrian Kwek: College of Interdisciplinary and Experiential Learning, Singapore University of Social Sciences, Singapore, Singapore
- Luke Peh: School of Science and Technology, Singapore University of Social Sciences, Singapore, Singapore
- Josef Tan: Curriculum Planning and Development Division, Ministry of Education, Singapore, Singapore
- Jin Xing Lee: School of Computing, National University of Singapore, Singapore, Singapore
48
Singer JB. Closing the Barn Door? Fact-Checkers as Retroactive Gatekeepers of the COVID-19 "Infodemic". JOURNALISM & MASS COMMUNICATION QUARTERLY 2023; 100:332-353. [PMID: 38602946 PMCID: PMC10119658 DOI: 10.1177/10776990231168599] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/13/2024]
Abstract
Based on a study of U.S.-tagged items in a global database of fact-checked statements about the novel coronavirus throughout the first year of the pandemic, this article explores the nature of fact-checkers' "retroactive gatekeeping." This term is introduced here to describe the process of assessing the veracity of information after it has entered the public domain rather than before. Although an overwhelming majority of statements across 16 thematic categories were deemed false and debunked, often repeatedly, misinformation continued to circulate freely and widely.
49
Tandoc EC, Kim HK. Avoiding real news, believing in fake news? Investigating pathways from information overload to misbelief. JOURNALISM (LONDON, ENGLAND) 2023; 24:1174-1192. [PMID: 38603202 PMCID: PMC9111942 DOI: 10.1177/14648849221090744] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
This study sought to examine the potential role of news avoidance in belief in COVID-19 misinformation. Using two-wave panel survey data from Singapore, we found that information overload is associated with news fatigue as well as with difficulty in analyzing information. News fatigue and analysis paralysis subsequently led to news avoidance, which in turn increased belief in COVID-19 misinformation. However, this link was present only among those frequently exposed to misinformation about COVID-19.
Affiliation(s)
- Edson C Tandoc: Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore
- Hye Kyung Kim: Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore
50
Larsen MZ, Haupt MR, McMann T, Cuomo RE, Mackey TK. The Influence of News Consumption Habits and Dispositional Traits on Trust in Medical Scientists. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2023; 20:ijerph20105842. [PMID: 37239568 DOI: 10.3390/ijerph20105842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 04/25/2023] [Accepted: 05/09/2023] [Indexed: 05/28/2023]
Abstract
Public trust in medical institutions is essential for ensuring compliance with medical directives. However, the politicization of public health issues and the polarized nature of major news outlets suggest that partisanship and news consumption habits can influence medical trust. This study employed a survey of 858 participants and used regression analysis to assess how news consumption habits and information assessment traits (IATs) influence trust in medical scientists. The IATs included were conscientiousness, openness, need for cognitive closure (NFCC), and cognitive reflective thinking (CRT). News sources were classified on the basis of factuality and political bias. Initially, readership of liberally biased news was positively associated with medical trust (p < 0.05). However, this association disappeared when controlling for the news source's factuality (p = 0.28), while CRT (p < 0.05) remained positively associated with medical trust. When controlling for conservatively biased news sources, the factuality of the news source (p < 0.05) and NFCC (p < 0.05) were positively associated with medical trust. While partisan media bias may influence medical trust, these results suggest that people with stronger information-assessment abilities and a preference for more credible news sources place greater trust in medical scientists.
Affiliation(s)
- Meng Zhen Larsen: Global Health Policy and Data Institute, San Diego, CA 92123, USA; S-3 Research LLC, San Diego, CA 92123, USA
- Michael R Haupt: Global Health Policy and Data Institute, San Diego, CA 92123, USA; Department of Cognitive Science, University of California, San Diego, CA 92093, USA
- Tiana McMann: Global Health Policy and Data Institute, San Diego, CA 92123, USA; S-3 Research LLC, San Diego, CA 92123, USA; Global Health Program, Department of Anthropology, University of California, San Diego, CA 92093, USA
- Raphael E Cuomo: Global Health Policy and Data Institute, San Diego, CA 92123, USA; Department of Anesthesiology, School of Medicine, University of California, San Diego, CA 94720, USA
- Tim K Mackey: Global Health Policy and Data Institute, San Diego, CA 92123, USA; S-3 Research LLC, San Diego, CA 92123, USA; Global Health Program, Department of Anthropology, University of California, San Diego, CA 92093, USA