1
Sangiorgio E, Cinelli M, Cerqueti R, Quattrociocchi W. Followers do not dictate the virality of news outlets on social media. PNAS Nexus 2024; 3:pgae257. PMID: 38988972; PMCID: PMC11235336; DOI: 10.1093/pnasnexus/pgae257.
Abstract
Initially conceived for entertainment, social media platforms have profoundly transformed the dissemination of information and consequently reshaped the dynamics of agenda-setting. In this scenario, understanding the factors that capture audience attention and drive viral content is crucial. Employing Gibrat's Law, which posits that an entity's growth rate is unrelated to its size, we examine the engagement growth dynamics of news outlets on social media. Our analysis includes the Facebook historical data of over a thousand news outlets, encompassing approximately 57 million posts in four European languages from 2008 to the end of 2022. We discover universal growth dynamics according to which news virality is independent of the traditional size of the outlet. Moreover, our analysis reveals a significant long-term impact of news source reliability on engagement growth, with engagement induced by unreliable sources decreasing over time. We conclude the article by presenting a statistical model replicating the observed growth dynamics.
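The analysis above hinges on Gibrat's Law, i.e. growth rates that are statistically independent of current size. A minimal, hypothetical Python sketch of the standard check (regress log growth on log initial size and test whether the slope is near zero) is given below; the synthetic engagement data and all variable names are illustrative assumptions, not the authors' code or dataset.

```python
# Hypothetical illustration of a Gibrat's Law check: under the law,
# an outlet's engagement growth rate should be independent of its size.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic "outlets": initial engagement sizes spanning several orders of magnitude.
size_t0 = rng.lognormal(mean=8.0, sigma=2.0, size=1_000)

# Proportional (size-independent) growth shocks, as Gibrat's Law assumes.
growth = rng.normal(loc=0.02, scale=0.15, size=1_000)
size_t1 = size_t0 * np.exp(growth)

# Test: regress the log growth rate on log initial size.
log_growth = np.log(size_t1) - np.log(size_t0)
X = sm.add_constant(np.log(size_t0))
fit = sm.OLS(log_growth, X).fit()

# Under Gibrat's Law the slope on log size should be ~0 (growth unrelated to size).
print(fit.params, fit.pvalues)
```

On data generated this way the estimated slope should be statistically indistinguishable from zero; a clearly positive or negative slope would indicate size-dependent growth and hence a violation of Gibrat's Law.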
Affiliation(s)
- Emanuele Sangiorgio
- Department of Social Sciences and Economics, Sapienza University of Rome, Rome 00185, Italy
- Matteo Cinelli
- Department of Computer Science, Sapienza University of Rome, Rome 00161, Italy
- Roy Cerqueti
- Department of Social Sciences and Economics, Sapienza University of Rome, Rome 00185, Italy
- GRANEM, Université d’Angers, SFR Confluences, Angers F-49000, France
2
Bakir V, Laffer A, McStay A, Miranda D, Urquhart L. On manipulation by emotional AI: UK adults' views and governance implications. Frontiers in Sociology 2024; 9:1339834. PMID: 38912311; PMCID: PMC11190365; DOI: 10.3389/fsoc.2024.1339834.
Abstract
With growing commercial, regulatory and scholarly interest in the use of Artificial Intelligence (AI) to profile and interact with human emotion ("emotional AI"), attention is turning to its capacity for manipulating people by acting on factors that shape a person's decisions and behavior. Given prior social disquiet about AI and profiling technologies, surprisingly little is known about people's views on the benefits and harms of emotional AI technologies, especially their capacity for manipulation. This matters because regulators of AI (such as in the European Union and the UK) wish to stimulate AI innovation, minimize harms and build public trust in these systems, but to do so they should understand the public's expectations. Addressing this, we ascertain UK adults' perspectives on the potential of emotional AI technologies for manipulating people through a two-stage study. Stage One (the qualitative phase) uses design fiction principles to generate adequate understanding and informed discussion in 10 focus groups with diverse participants (n = 46) on how emotional AI technologies may be used in a range of mundane, everyday settings. The focus groups primarily flagged concerns about manipulation in two settings: emotion profiling in social media (involving deepfakes, false information and conspiracy theories), and emotion profiling in child-oriented "emotoys" (where the toy responds to the child's facial and verbal expressions). In both these settings, participants express concerns that emotion profiling covertly exploits users' cognitive or affective weaknesses and vulnerabilities; additionally, in the social media setting, participants express concerns that emotion profiling damages people's capacity for rational thought and action. To explore these insights at a larger scale, Stage Two (the quantitative phase) conducts a UK-wide, demographically representative national survey (n = 2,068) on attitudes toward emotional AI. Taking care to avoid leading and dystopian framings of emotional AI, we find that large majorities express concern about the potential for being manipulated through social media and emotoys. In addition to signaling the need for civic protections and practical means of ensuring trust in emerging technologies, the research also leads us to provide a policy-friendly subdivision of what is meant by manipulation through emotional AI and related technologies.
Affiliation(s)
- Vian Bakir
- School of History, Law and Social Sciences, Bangor University, Bangor, United Kingdom
- Alexander Laffer
- School of Media and Film, University of Winchester, Winchester, United Kingdom
- Andrew McStay
- School of History, Law and Social Sciences, Bangor University, Bangor, United Kingdom
- Diana Miranda
- Faculty of Social Sciences, University of Stirling, Scotland, United Kingdom
- Lachlan Urquhart
- Edinburgh Law School, University of Edinburgh, Scotland, United Kingdom
3
Allen J, Watts DJ, Rand DG. Quantifying the impact of misinformation and vaccine-skeptical content on Facebook. Science 2024; 384:eadk3451. PMID: 38815040; DOI: 10.1126/science.adk3451.
Abstract
Low uptake of the COVID-19 vaccine in the US has been widely attributed to social media misinformation. To evaluate this claim, we introduce a framework combining lab experiments (total N = 18,725), crowdsourcing, and machine learning to estimate the causal effect of 13,206 vaccine-related URLs on the vaccination intentions of US Facebook users (N ≈ 233 million). We estimate that the impact of unflagged content that nonetheless encouraged vaccine skepticism was 46-fold greater than that of misinformation flagged by fact-checkers. Although misinformation reduced predicted vaccination intentions significantly more than unflagged vaccine content when viewed, Facebook users' exposure to flagged content was limited. In contrast, unflagged stories highlighting rare deaths after vaccination were among Facebook's most-viewed stories. Our work emphasizes the need to scrutinize factually accurate but potentially misleading content in addition to outright falsehoods.
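The 46-fold figure reflects that aggregate impact scales as per-view effect multiplied by reach. A back-of-the-envelope sketch with entirely made-up numbers (not the paper's estimates) illustrates why weakly persuasive but widely viewed unflagged content can outweigh strongly persuasive but rarely viewed flagged misinformation.

```python
# Illustrative (made-up) numbers showing why total impact = per-view effect * views
# can favor widely viewed "gray area" content over rarely viewed flagged misinformation.
flagged = {"effect_per_view": -0.005, "views": 10_000_000}        # stronger per view, little reach
unflagged = {"effect_per_view": -0.001, "views": 2_000_000_000}   # weaker per view, huge reach

impact_flagged = flagged["effect_per_view"] * flagged["views"]
impact_unflagged = unflagged["effect_per_view"] * unflagged["views"]

print(impact_flagged, impact_unflagged, impact_unflagged / impact_flagged)
# With these hypothetical values the unflagged content's aggregate impact is 40x larger.
```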
Affiliation(s)
- Jennifer Allen
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Duncan J Watts
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA, USA
- Operations, Information, and Decisions Department, University of Pennsylvania, Philadelphia, PA, USA
- David G Rand
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA
- Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
4
Baribi-Bartov S, Swire-Thompson B, Grinberg N. Supersharers of fake news on Twitter. Science 2024; 384:979-982. PMID: 38815033; DOI: 10.1126/science.adl4435.
Abstract
Governments may have the capacity to flood social media with fake news, but little is known about the use of flooding by ordinary voters. In this work, we identify 2107 registered US voters who account for 80% of the fake news shared on Twitter during the 2020 US presidential election by an entire panel of 664,391 voters. We found that supersharers were important members of the network, reaching a sizable 5.2% of registered voters on the platform. Supersharers had a significant overrepresentation of women, older adults, and registered Republicans. Supersharers' massive volume did not seem automated but was rather generated through manual and persistent retweeting. These findings highlight a vulnerability of social media for democracy, where a small group of people distort the political reality for many.
Affiliation(s)
- Sahar Baribi-Bartov
- Software and Information Systems Engineering, Ben-Gurion University, Be'er Sheva, Israel
- Briony Swire-Thompson
- Network Science Institute, Department of Political Science, Department of Psychology, Northeastern University, Boston, MA, USA
- Nir Grinberg
- Software and Information Systems Engineering, Ben-Gurion University, Be'er Sheva, Israel
5
Timpka T. Time for Medicine and Public Health to Leave Platform X. JMIR Medical Education 2024; 10:e53810. PMID: 38801712; PMCID: PMC11144836; DOI: 10.2196/53810.
Abstract
For more than 50 years, digital technologies have been employed for the creation and distribution of knowledge in health services. In the last decade, digital social media have been developed for applications in clinical decision support and population health monitoring. Recently, these technologies have also been used for knowledge translation, such as in the process where research findings created in academic settings are established as evidence and distributed for use in clinical practice, policy making, and health self-management. To date, it has been common for medical and public health institutions to have social media accounts for the dissemination of novel research findings and to facilitate conversations about these findings. However, recent events such as the transformation of the microblog Twitter to platform X have brought to light the need for the social media industry to exploit user data to generate revenue. In this viewpoint, it is argued that a redirection of social media use is required in the translation of knowledge to action in the fields of medicine and public health. A new kind of social internet is currently forming, known as the "fediverse," which denotes an ensemble of open social media that can communicate with each other while remaining independent platforms. In several countries, government institutions, universities, and newspapers use open social media to distribute information and enable discussions. These organizations control their own channels while being able to communicate with other platforms through open standards. Examples of medical knowledge translation via such open social media platforms, where users are less exposed to disinformation than in general platforms, are also beginning to appear. The current status of the social media industry calls for a broad discussion about the use of social technologies by health institutions involving researchers and health service practitioners, academic leaders, scientific publishers, social technology providers, policy makers, and the public. This debate should not primarily take place on social media platforms but rather at universities, in scientific journals, at public seminars, and other venues, allowing for the transparent and undisturbed communication and formation of opinions.
Affiliation(s)
- Toomas Timpka
- Department of Health, Medicine, and Caring Sciences, Linköping University, Linköping, Sweden
- Department of Computer and Information Science, Linköping University, Linköping, Sweden
- Regional Executive Office, Region Östergötland, Linköping, Sweden
6
Allcott H, Gentzkow M, Mason W, Wilkins A, Barberá P, Brown T, Cisneros JC, Crespo-Tenorio A, Dimmery D, Freelon D, González-Bailón S, Guess AM, Kim YM, Lazer D, Malhotra N, Moehler D, Nair-Desai S, Nait El Barj H, Nyhan B, Paixao de Queiroz AC, Pan J, Settle J, Thorson E, Tromble R, Velasco Rivera C, Wittenbrink B, Wojcieszak M, Zahedian S, Franco A, Kiewiet de Jonge C, Stroud NJ, Tucker JA. The effects of Facebook and Instagram on the 2020 election: A deactivation experiment. Proc Natl Acad Sci U S A 2024; 121:e2321584121. PMID: 38739793; PMCID: PMC11126999; DOI: 10.1073/pnas.2321584121.
Abstract
We study the effect of Facebook and Instagram access on political beliefs, attitudes, and behavior by randomizing a subset of 19,857 Facebook users and 15,585 Instagram users to deactivate their accounts for 6 wk before the 2020 U.S. election. We report four key findings. First, both Facebook and Instagram deactivation reduced an index of political participation (driven mainly by reduced participation online). Second, Facebook deactivation had no significant effect on an index of knowledge, but secondary analyses suggest that it reduced knowledge of general news while possibly also decreasing belief in misinformation circulating online. Third, Facebook deactivation may have reduced self-reported net votes for Trump, though this effect does not meet our preregistered significance threshold. Finally, the effects of both Facebook and Instagram deactivation on affective and issue polarization, perceived legitimacy of the election, candidate favorability, and voter turnout were all precisely estimated and close to zero.
Affiliation(s)
- Hunt Allcott
- Department of Economics, Stanford University, Stanford, CA 94305
- Drew Dimmery
- Meta, Menlo Park, CA 94025
- Data Science Lab, Hertie School, Berlin 10117, Germany
- Deen Freelon
- University of North Carolina Hussman School of Journalism and Media, University of North Carolina, Chapel Hill, NC 27514
- Andrew M. Guess
- Department of Politics and School of Public and International Affairs, Princeton University, Princeton, NJ 08540
- Young Mie Kim
- School of Journalism and Mass Communication, University of Wisconsin-Madison, Madison, WI 53706
- David Lazer
- Northeastern University Lab of Texts, Maps, and Networks, Northeastern University, Boston, MA 02115
- Neil Malhotra
- Graduate School of Business, Stanford University, Stanford, CA 94305
- Brendan Nyhan
- Department of Government, Dartmouth College, Hanover, NH 03755
- Jennifer Pan
- Department of Communication, Stanford University, Stanford, CA 94305
- Jaime Settle
- Department of Government, William & Mary, Williamsburg, VA 23185
- Emily Thorson
- Department of Political Science, Syracuse University, Syracuse, NY 13244
- Rebekah Tromble
- School of Media and Public Affairs and Institute for Data, Democracy, and Politics, George Washington University, Washington, DC 20052
- Magdalena Wojcieszak
- Department of Communication, University of California Davis, Davis, CA 95616
- Amsterdam School of Communication Research, University of Amsterdam, Amsterdam 15791, Netherlands
- Saam Zahedian
- Department of Economics, Stanford University, Stanford, CA 94305
- Joshua A. Tucker
- Wilf Family Department of Politics and Center for Social Media and Politics, New York University, New York, NY 10012
7
Baqir A, Galeazzi A, Zollo F. News and misinformation consumption: A temporal comparison across European countries. PLoS One 2024; 19:e0302473. PMID: 38717975; PMCID: PMC11078435; DOI: 10.1371/journal.pone.0302473.
Abstract
The Internet and social media have transformed the information landscape, democratizing content access and production. While making information easily accessible, these platforms can also act as channels for spreading misinformation, posing crucial societal challenges. To address this, understanding news consumption patterns and unraveling the complexities of the online information environment are essential. Previous studies highlight polarization and misinformation in online discussions, but many focus on specific topics or contexts, often overlooking comprehensive cross-country and cross-topic analyses. However, the dynamics of debates, misinformation prevalence, and the efficacy of countermeasures are intrinsically tied to socio-cultural contexts. This work aims to bridge this gap by exploring information consumption patterns across four European countries over three years. Analyzing the Twitter activity of news outlets in France, Germany, Italy, and the UK, this study seeks to shed light on how topics of European significance resonate across these nations and the role played by misinformation sources. The results spotlight that while reliable sources predominantly shape the information landscape, unreliable content persists across all countries and topics. Though most users favor trustworthy sources, a small percentage predominantly consumes content from questionable sources, with even fewer maintaining a mixed information diet. The cross-country comparison unravels disparities in audience overlap among news sources, the prevalence of misinformation, and the proportion of users relying on questionable sources. Such distinctions surface not only across countries but also within various topics. These insights underscore the pressing need for tailored studies, crucial in designing targeted and effective countermeasures against misinformation and extreme polarization in the digital space.
Affiliation(s)
- Anees Baqir
- Department of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University of Venice, Venice, Italy
- Alessandro Galeazzi
- Department of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University of Venice, Venice, Italy
- Fabiana Zollo
- Department of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University of Venice, Venice, Italy
- The New Institute Centre for Environmental Humanities, Venice, Italy
8
Martel C, Mosleh M, Yang Q, Zaman T, Rand DG. Blocking of counter-partisan accounts drives political assortment on Twitter. PNAS Nexus 2024; 3:pgae161. PMID: 38779113; PMCID: PMC11110939; DOI: 10.1093/pnasnexus/pgae161.
Abstract
There is strong political assortment of Americans on social media networks. This is typically attributed to preferential tie formation (i.e. homophily) among those with shared partisanship. Here, we demonstrate an additional factor beyond homophily driving assorted networks: preferential prevention of social ties. In two field experiments on Twitter, we created human-looking bot accounts that identified as Democrats or Republicans, and then randomly assigned users to be followed by one of these accounts. In addition to preferentially following-back copartisans, we found that users were 12 times more likely to block counter-partisan accounts compared to copartisan accounts in the first experiment, and 4 times more likely to block counter-partisan accounts relative to a neutral account or a copartisan account in the second experiment. We then replicated these findings in a survey experiment and found evidence of a key motivation for blocking: wanting to avoid seeing any content posted by the blocked user. Additionally, we found that Democrats preferentially blocked counter-partisans more than Republicans, and that this asymmetry was likely due to blocking accounts who post low-quality or politically slanted content (rather than an asymmetry in identity-based blocking). Our results demonstrate that preferential blocking of counter-partisans is an important phenomenon driving political assortment on social media.
Affiliation(s)
- Cameron Martel
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Mohsen Mosleh
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Management Department, University of Exeter Business School, Exeter EX4 4PU, UK
- Qi Yang
- Institute for Data, Systems, and Society (IDSS), Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Tauhid Zaman
- Yale School of Management, Yale University, New Haven, CT 06511, USA
- David G Rand
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Institute for Data, Systems, and Society (IDSS), Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
9
Hong CS. Fake news virality: Relational niches and the diffusion of COVID-19 vaccine misinformation. Social Science Research 2024; 120:103004. PMID: 38763539; DOI: 10.1016/j.ssresearch.2024.103004.
Abstract
This study explores why some fake news publishers are able to propagate misinformation while others receive little attention on social media. Using COVID-19 vaccine tweets as a case study, this study combined the relational niche framework with pooled and multilevel models that address unobserved heterogeneity. The results showed that, as expected, ties to accounts with more followers were associated with more fake news tweets, retweets, and likes. More surprisingly, however, embeddedness with fake news publishers had an inverted U-shaped association with diffusion, whereas social proximity to mainstream media was positively associated with it. Although the effect of influential users is in line with opinion leader theory, the newly identified effects of social proximity to reliable sources and embeddedness suggest that the key to fake news virality is to earn greater organizational status and to occupy modest, rather than extreme, echo chambers. This study highlights the potential of dynamic media networks to shape the misinformation market.
Affiliation(s)
- Chen-Shuo Hong
- Department of Sociology University of Massachusetts, 200 Hicks Way, 738 Thompson Hall, Amherst, MA, 01003, USA.
10
Huang K, Krafft PM. Performing Platform Governance: Facebook and the Stage Management of Data Relations. Science and Engineering Ethics 2024; 30:13. PMID: 38575812; PMCID: PMC10995037; DOI: 10.1007/s11948-024-00473-5.
Abstract
Controversies surrounding social media platforms have provided opportunities for institutional reflexivity amongst users and regulators on how to understand and govern platforms. Amidst contestation, platform companies have continued to enact projects that draw upon existing modes of privatized governance. We investigate how social media companies have attempted to achieve closure by continuing to set the terms around platform governance. We investigate two projects implemented by Facebook (Meta), authenticity regulation and privacy controls, in response to the Russian Interference and Cambridge Analytica controversies surrounding the 2016 U.S. Presidential Election. Drawing on Goffman's metaphor of stage management, we analyze the techniques deployed by Facebook to reinforce a division between what is visible and invisible to the user experience. These platform governance projects propose to act upon front-stage data relations: information that users can see from other users, whether that is content that users can see from "bad actors" or information that other users can see about oneself. At the same time, these projects relegate back-stage data relations (information flows between users constituted by recommendation and targeted advertising systems) to invisibility and inaction. As such, Facebook renders the user experience actionable for governance, while foreclosing governance of back-stage data relations central to the economic value of the platform. As social media companies continue to perform platform governance projects following controversies, our paper invites reflection on the politics of these projects. By destabilizing the boundaries drawn by platform companies, we open space for continuous reflexivity on how platforms should be understood and governed.
Affiliation(s)
- Karen Huang
- McCourt School of Public Policy, Georgetown University, Washington, DC, USA.
- P M Krafft
- Creative Computing Institute, University of the Arts London, London, UK
11
Avalle M, Di Marco N, Etta G, Sangiorgio E, Alipour S, Bonetti A, Alvisi L, Scala A, Baronchelli A, Cinelli M, Quattrociocchi W. Persistent interaction patterns across social media platforms and over time. Nature 2024; 628:582-589. PMID: 38509370; PMCID: PMC11023927; DOI: 10.1038/s41586-024-07229-y.
Abstract
Growing concern surrounds the impact of social media platforms on public discourse [1-4] and their influence on social dynamics [5-9], especially in the context of toxicity [10-12]. Here, to better understand these phenomena, we use a comparative approach to isolate human behavioural patterns across multiple social media platforms. In particular, we analyse conversations in different online communities, focusing on identifying consistent patterns of toxic content. Drawing from an extensive dataset that spans eight platforms over 34 years, from Usenet to contemporary social media, our findings show consistent conversation patterns and user behaviour, irrespective of the platform, topic or time. Notably, although long conversations consistently exhibit higher toxicity, toxic language does not invariably discourage people from participating in a conversation, and toxicity does not necessarily escalate as discussions evolve. Our analysis suggests that debates and contrasting sentiments among users significantly contribute to more intense and hostile discussions. Moreover, the persistence of these patterns across three decades, despite changes in platforms and societal norms, underscores the pivotal role of human behaviour in shaping online discourse.
Affiliation(s)
- Michele Avalle
- Department of Computer Science, Sapienza University of Rome, Rome, Italy
- Niccolò Di Marco
- Department of Computer Science, Sapienza University of Rome, Rome, Italy
- Gabriele Etta
- Department of Computer Science, Sapienza University of Rome, Rome, Italy
- Emanuele Sangiorgio
- Department of Social Sciences and Economics, Sapienza University of Rome, Rome, Italy
- Shayan Alipour
- Department of Computer Science, Sapienza University of Rome, Rome, Italy
- Anita Bonetti
- Department of Communication and Social Research, Sapienza University of Rome, Rome, Italy
- Lorenzo Alvisi
- Department of Computer Science, Sapienza University of Rome, Rome, Italy
- Andrea Baronchelli
- Department of Mathematics, City University of London, London, UK
- The Alan Turing Institute, London, UK
- Matteo Cinelli
- Department of Computer Science, Sapienza University of Rome, Rome, Italy
12
Hosseinmardi H, Ghasemian A, Rivera-Lanas M, Horta Ribeiro M, West R, Watts DJ. Causally estimating the effect of YouTube's recommender system using counterfactual bots. Proc Natl Acad Sci U S A 2024; 121:e2313377121. PMID: 38349876; PMCID: PMC10895271; DOI: 10.1073/pnas.2313377121.
Abstract
In recent years, critics of online platforms have raised concerns about the ability of recommendation algorithms to amplify problematic content, with potentially radicalizing consequences. However, attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals (what a user would have viewed in the absence of algorithmic recommendations) and hence cannot disentangle the effects of the algorithm from a user's intentions. Here we propose a method that we call "counterfactual bots" to causally estimate the role of algorithmic recommendations on the consumption of highly partisan content on YouTube. By comparing bots that replicate real users' consumption patterns with "counterfactual" bots that follow rule-based trajectories, we show that, on average, relying exclusively on the YouTube recommender results in less partisan consumption, where the effect is most pronounced for heavy partisan consumers. Following a similar method, we also show that if partisan consumers switch to moderate content, YouTube's sidebar recommender "forgets" their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content. Overall, our findings indicate that, at least since the algorithm changes that YouTube implemented in 2019, individual consumption patterns mostly reflect individual preferences, where algorithmic recommendations play, if anything, a moderating role.
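A purely synthetic sketch of the counterfactual-bot logic: one trajectory replays simulated user-like choices, another follows only a toy recommender, and the estimated algorithmic effect is the gap in average partisanship between them. The data, scores, and recommender below are invented for illustration and are not the authors' pipeline or YouTube's actual system.

```python
# Toy illustration of the counterfactual-bot comparison: the recommender's
# contribution is read off as the gap between what a user-like bot watches
# and what a bot following only algorithmic recommendations would watch.
import random

random.seed(1)

def user_choice(history):
    # Simulated "real user": sticks to highly partisan videos (score near 1).
    return min(1.0, max(0.0, history[-1] + random.uniform(-0.05, 0.1)))

def recommender(history):
    # Toy recommender: pulls the next video's partisanship toward a moderate 0.5.
    return 0.7 * history[-1] + 0.3 * 0.5

def trajectory(step_fn, start=0.9, n=30):
    scores = [start]
    for _ in range(n):
        scores.append(step_fn(scores))
    return scores

real = trajectory(user_choice)             # bot replaying user-like choices
counterfactual = trajectory(recommender)   # bot following only recommendations

effect = sum(counterfactual) / len(counterfactual) - sum(real) / len(real)
print(f"estimated recommender effect on mean partisanship: {effect:+.3f}")
```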
Affiliation(s)
- Homa Hosseinmardi
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104
- Annenberg School of Communication, University of Pennsylvania, Philadelphia, PA 19104
- Amir Ghasemian
- Yale Institute for Network Science, Yale University, New Haven, CT 06511
- Miguel Rivera-Lanas
- Heinz College of Information Systems and Public Policy, Carnegie Mellon University, Pittsburgh, PA 15213
- Manoel Horta Ribeiro
- School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, 1015 Ecublens, Switzerland
- Robert West
- School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, 1015 Ecublens, Switzerland
- Duncan J Watts
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104
- Annenberg School of Communication, University of Pennsylvania, Philadelphia, PA 19104
- Operations, Information, and Decisions Department, University of Pennsylvania, Philadelphia, PA 19104
13
Bram C. Beyond partisan filters: Can underreported news reduce issue polarization? PLoS One 2024; 19:e0297808. PMID: 38363749; PMCID: PMC10871475; DOI: 10.1371/journal.pone.0297808.
Abstract
While many news outlets aim for impartiality, 67% of Americans perceive their news sources as partisan, often presenting only one side of the story. This paper tests whether exposing individuals to news stories their political adversaries focus on can mitigate political polarization. In an experiment involving a real-world political newsletter (sent to participants who had opted to receive news that uncovers media biases), exposure to a specific story about refugee policy led respondents to reassess their positions. This reevaluation changed their stances on the issue and reduced the ideological distinctions they made between Democrats and Republicans. These findings underscore the need for future studies to untangle the specific circumstances where cross-partisan exposure can alter political attitudes.
Affiliation(s)
- Curtis Bram
- The University of Texas at Dallas, Richardson, TX, United States of America
14
Geers M, Swire-Thompson B, Lorenz-Spreen P, Herzog SM, Kozyreva A, Hertwig R. The Online Misinformation Engagement Framework. Curr Opin Psychol 2024; 55:101739. PMID: 38091666; DOI: 10.1016/j.copsyc.2023.101739.
Abstract
Research on online misinformation has evolved rapidly, but organizing its results and identifying open research questions is difficult without a systematic approach. We present the Online Misinformation Engagement Framework, which classifies people's engagement with online misinformation into four stages: selecting information sources, choosing what information to consume or ignore, evaluating the accuracy of the information and/or the credibility of the source, and judging whether and how to react to the information (e.g., liking or sharing). We outline entry points for interventions at each stage and pinpoint the two early stages (source and information selection) as relatively neglected processes that should be addressed to further improve people's ability to contend with misinformation.
Affiliation(s)
- Michael Geers
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany; Department of Psychology, Humboldt University of Berlin, Rudower Ch. 18, 12489 Berlin, Germany.
- Briony Swire-Thompson
- Network Science Institute, Northeastern University, 177 Huntington Ave., Boston, MA, 02115, USA; Institute for Quantitative Social Science, Harvard University, 1737 Cambridge St., Cambridge, MA, 02138, USA
- Philipp Lorenz-Spreen
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Stefan M Herzog
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Anastasia Kozyreva
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Ralph Hertwig
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
15
Avin C, Daltrophe H, Lotker Z. On the impossibility of breaking the echo chamber effect in social media using regulation. Sci Rep 2024; 14:1107. PMID: 38212365; PMCID: PMC10784531; DOI: 10.1038/s41598-023-50850-6.
Abstract
As scientists, we are proud of our role in developing the current digital age that enables billions of people to communicate rapidly with others via social media. However, when things go wrong, we are also responsible for taking an ethical stand and trying to solve problems, and this work aims to take a step in this direction. Our goal is to set the foundation for a mathematically formal study of how we might regulate social media and, in particular, address the problem of the echo chamber effect. An echo chamber is a closed system where other voices are excluded by omission, causing one's beliefs to become amplified or reinforced. In turn, these bubbles can boost social polarization and extreme political views, and, unfortunately, there is strong evidence that echo chambers exist in social media. The fundamental question we try to answer is: can regulation "break" or reduce the echo chamber effect in social media, and if so, how? Sadly, the paper's main result is an impossibility result: a general regulation function that achieves this goal (on our social media model) while obeying the core values of democratic societies (freedom of expression and user privacy) does not exist. This result leaves us with hard future choices to make.
Affiliation(s)
- Chen Avin
- Ben-Gurion University of the Negev, Be'er Sheva, Israel.
16
Johnson NF, Sear R, Illari L. Controlling bad-actor-artificial intelligence activity at scale across online battlefields. PNAS Nexus 2024; 3:pgae004. PMID: 38264146; PMCID: PMC10805610; DOI: 10.1093/pnasnexus/pgae004.
Abstract
We consider the looming threat of bad actors using artificial intelligence (AI)/Generative Pretrained Transformer tools to generate harms across social media globally. Guided by our detailed mapping of the online multiplatform battlefield, we offer answers to the key questions of what bad-actor-AI activity will likely dominate, where and when, and what might be done to control it at scale. Applying a dynamical Red Queen analysis from prior studies of cyber and automated algorithm attacks, we predict an escalation to daily bad-actor-AI activity by mid-2024, just ahead of the United States and other global elections. We then use an exactly solvable mathematical model of the observed bad-actor community clustering dynamics to build a Policy Matrix that quantifies the outcomes and trade-offs of two potentially desirable objectives: containment of future bad-actor-AI activity vs. its complete removal. We also give explicit plug-and-play formulae for associated risk measures.
Affiliation(s)
- Neil F Johnson
- Dynamic Online Networks Laboratory, George Washington University, Washington, DC 20052, USA
- Richard Sear
- Dynamic Online Networks Laboratory, George Washington University, Washington, DC 20052, USA
- Lucia Illari
- Dynamic Online Networks Laboratory, George Washington University, Washington, DC 20052, USA
17
Massey PM, Murray RM, Chiang SC, Russell AM, Yudell MA. Social Media, Public Health Research, and Vulnerability: Considerations to Advance Ethical Guidelines and Strengthen Future Research. JMIR Public Health Surveill 2023; 9:e49881. PMID: 38157235; PMCID: PMC10787325; DOI: 10.2196/49881.
Abstract
The purpose of this article is to build upon prior work in social media research and ethics by highlighting an important and as yet underdeveloped research consideration: how should we consider vulnerability when conducting public health research in the social media environment? The use of social media in public health, both platforms and their data, has advanced the field dramatically over the past 2 decades. Applied public health research in the social media space has led to more robust surveillance tools and analytic strategies, more targeted recruitment activities, and more tailored health education. Ethical guidelines when using social media for public health research must also expand alongside these increasing capabilities and uses. Privacy, consent, and confidentiality have been hallmarks for ethical frameworks both in public health and social media research. To date, public health ethics scholarship has focused largely on practical guidelines and considerations for writing and reviewing social media research protocols. Such ethical guidelines have included collecting public data, reporting anonymized or aggregate results, and obtaining informed consent virtually. Our pursuit of the question related to vulnerability and public health research in the social media environment extends this foundational work in ethical guidelines and seeks to advance research in this field and to provide a solid ethical footing on which future research can thrive.
Affiliation(s)
- Philip M Massey
- Department of Community Health Sciences, Fielding School of Public Health, UCLA, Los Angeles, CA, United States
- Regan M Murray
- College of Education and Health Professions, University of Arkansas, Fayetteville, AR, United States
- Shawn C Chiang
- Department of Health Behavior, Texas A&M University, College Station, TX, United States
- Alex M Russell
- Recovery Research Institute, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Michael A Yudell
- College of Health Solutions, Arizona State University, Phoenix, AZ, United States
18
Lewandowsky S, Ecker UKH, Cook J, van der Linden S, Roozenbeek J, Oreskes N. Misinformation and the epistemic integrity of democracy. Curr Opin Psychol 2023; 54:101711. PMID: 37944324; PMCID: PMC7615327; DOI: 10.1016/j.copsyc.2023.101711.
Abstract
Democracy relies on a shared body of knowledge among citizens, for example trust in elections and reliable knowledge to inform policy-relevant debate. We review the evidence for widespread disinformation campaigns that are undermining this shared knowledge. We establish a common pattern by which science and scientists are discredited and how the most recent frontier in those attacks involves researchers in misinformation itself. We list several ways in which psychology can contribute to countermeasures.
Affiliation(s)
- Stephan Lewandowsky
- University of Bristol, Bristol, UK; University of Potsdam, Germany; University of Western Australia, Australia.
19
Lazer D, Swire-Thompson B, Wilson C. A Normative Framework for Assessing the Information Curation Algorithms of the Internet. Perspectives on Psychological Science 2023:17456916231186779. PMID: 38010888; DOI: 10.1177/17456916231186779.
Abstract
It is critical to understand how algorithms structure the information people see and how those algorithms support or undermine society's core values. We offer a normative framework for the assessment of the information curation algorithms that determine much of what people see on the internet. The framework presents two levels of assessment: one for individual-level effects and another for systemic effects. With regard to individual-level effects we discuss whether (a) the information is aligned with the user's interests, (b) the information is accurate, and (c) the information is so appealing that it is difficult for a person's self-regulatory resources to ignore ("agency hacking"). At the systemic level we discuss whether (a) there are adverse civic-level effects on a system-level variable, such as political polarization; (b) there are negative distributional or discriminatory effects; and (c) there are anticompetitive effects, with the information providing an advantage to the platform. The objective of this framework is both to inform the direction of future scholarship as well as to offer tools for intervention for policymakers.
Affiliation(s)
- David Lazer
- Khoury College of Computer Sciences, Northeastern University
- Department of Political Science, Northeastern University
- Network Science Institute, Northeastern University
- Briony Swire-Thompson
- Department of Political Science, Northeastern University
- Network Science Institute, Northeastern University
- Department of Psychology, Northeastern University
- Christo Wilson
- Khoury College of Computer Sciences, Northeastern University
- Network Science Institute, Northeastern University
20
Shaw A. Social media, extremism, and radicalization. Science Advances 2023; 9:eadk2031. PMID: 37647405; PMCID: PMC10468141; DOI: 10.1126/sciadv.adk2031.
Abstract
Fears that YouTube recommendations radicalize users are overblown, but social media still host and profit from dubious and extremist content.
Affiliation(s)
- Aaron Shaw
- Department of Communication Studies, Northwestern University, Evanston, IL, USA.
21
Abstract
Industry-academy collaboration explores the 2020 US election.
Affiliation(s)
- Michael W Wagner
- School of Journalism and Mass Communication, University of Wisconsin-Madison, Madison, WI, USA
22
Tollefson J. Tweaking Facebook feeds is no easy fix for polarization, studies find. Nature 2023. PMID: 37501002; DOI: 10.1038/d41586-023-02420-z.
23
Garcia D. Influence of Facebook algorithms on political polarization tested. Nature 2023. PMID: 37500994; DOI: 10.1038/d41586-023-02325-x.