1. Free Speech or Free to Hate?: Anti-LGBTQ+ Discourses in LGBTQ+-Affirming Spaces on Gab Social. Journal of Homosexuality 2024; 71:2030-2055. PMID: 37505504. DOI: 10.1080/00918369.2023.2218959.
Abstract
This article presents a critical discourse analysis of groups on the free speech social media platform Gab that were intended to be LGBTQ+-friendly but became spaces of queerphobia. Results indicate that Gab users deployed discourses of difference to situate the platform as heteronormative and to denigrate the LGBTQ+ community. In particular, discourses invoked in the name of free speech established LGBTQ+ individuals as abominations, undergirded hegemonic masculinity, and marginalized queer folk by reducing them to sex acts and sex organs. This study provides a better understanding of the (in)efficacy of "free speech" as a content moderation policy and unpacks how anti-LGBTQ+ hate speech spreads in digital spaces.
2. Mapping the scientific knowledge and approaches to defining and measuring hate crime, hate speech, and hate incidents: A systematic review. Campbell Systematic Reviews 2024; 20:e1397. PMID: 38686101. PMCID: PMC11056620. DOI: 10.1002/cl2.1397.
Abstract
Background: The difficulties in defining hate crime, hate incidents and hate speech, and in finding a common conceptual basis, constitute a key barrier to operationalisation in research, policy and programming. Definitions disagree about issues such as the identities that should be protected, the types of behaviours that should be referred to as hateful, and how the 'hate element' should be assessed. The lack of solid conceptual foundations is reflected in the absence of sound data. These issues have been raised since the early 1990s (Berk, 1990; Byers & Venturelli, 1994) but have proved intractable and continue to affect this research and policy domain.
Objectives: Our systematic review has two fundamentally connected objectives: mapping (1) original definitions and (2) original measurement tools of hate crime, hate speech, hate incidents and surrogate terms, that is, alternative terms used for these concepts (e.g., prejudice-motivated crime or bias crime, among many others).
Search Methods: We systematically searched over 19 databases to retrieve academic and grey literature, as well as legislation. In addition, we contacted 26 country experts and searched 211 websites, bibliographies of published reviews of related literature, and annotated bibliographies of related literature.
Inclusion Criteria: This review included documents published after 1990 found in academic literature, grey literature and legislation. We included academic empirical articles with any study design, as well as theoretical articles that focused specifically on defining hate crime, hate speech, hate incidents or surrogate terms. We also reviewed current criminal or civil legislation intended to regulate forms of hate speech, hate incidents and hate crimes. Eligible countries were Canada, the USA, the UK, Ireland, Germany, France, Italy, Spain, Australia and New Zealand. For documents to be included under objective (1), they had to contain at least one original definition of hate speech, hate incidents or hate crimes, or any surrogate term; under objective (2), they had to contain at least one original measurement tool of these concepts. Documents could be included under both objectives.
Data Collection and Analysis: The systematic search covered 1 January 1990 to 31 December 2021. Searches of academic databases, conducted between 8 March and 12 April 2022, yielded 35,191 references; country-specific searches for grey literature published in the same period, carried out between 27 August and 2 December 2021, yielded a total of 2,748 results. We coded characteristics of the definitions and measurement tools, including the protected characteristics, the approaches used to categorise the 'hate element' and other variables. We used univariate and bivariate statistical methods for data analysis, and also carried out a social network analysis.
Main Results: We provide as an annex complete lists of the original definitions and measurement tools that met our inclusion criteria, for the use of researchers and policy makers worldwide. We included 423 definitions and 168 measurement tools from academic and grey literature, and 83 definitions found in legislation. To support future research and policy work in this area, we include a synthetic assessment of (1) the operationalisability of each definition and (2) the theoretical robustness and transparency of each measurement tool. Our mapping revealed numerous significant trends, clusters and differences between and within definitions and measurement tools focusing on hate crime, hate speech and hate incidents. For example, definitions and measurement tools tend to focus more on ethnic and religious identities (e.g., racism, antisemitism, Islamophobia) than on sexual, gender and disability-related identities; this gap is greater for hate speech than for hate crime. Our analysis also showed geographical patterns: hate crime definitions and measurement tools are more likely to originate from Anglophone countries, especially the USA, whereas hate speech definitions and measurement tools are more likely to originate from continental Europe. In terms of disciplinary fragmentation, our social network analysis revealed that collaboration and exchange of conceptual frameworks and methodological tools between the social sciences and computer science is limited, with most definitions and measurement tools clustering along disciplinary lines. More detailed findings are presented in the results section of the report.
Authors' Conclusions: There is an urgent need to close the research and policy gap between the protections of 'ethnic and religious identities' and other, less protected characteristics such as gender and sexual identities, age and disability. There is also an urgent need to improve methodological and reporting standards in research examining hate behaviours, including transparency in methodology and data reporting and discussion of limitations (e.g., bias in data). Many of the measurement tools found in the academic literature were excluded because they did not report transparently how the data were collected and analysed. Further, 41% of documents presenting research on hate behaviours did not provide a definition of what they were examining. Given the importance of this policy domain, it is vital to raise the quality and trustworthiness of research in this area. This review also found that researchers in different disciplinary areas (e.g., the social sciences and computer science) rarely collaborate. Future research should build on existing definitions and measurement tools instead of duplicating efforts, and engage in more interdisciplinary collaboration. It is our hope that this review can provide a solid foundation for researchers, governments and other bodies to build cumulative knowledge and collaboration in this important field.
3. Multilingual Hate Speech Detection: A Semi-Supervised Generative Adversarial Approach. Entropy (Basel) 2024; 26:344. PMID: 38667898. PMCID: PMC11049309. DOI: 10.3390/e26040344.
Abstract
Social media platforms have transcended cultural and linguistic boundaries, enabling online communication worldwide. However, the expanded use of many languages has intensified the challenge of detecting hateful content online. Despite the release of multiple Natural Language Processing (NLP) solutions implementing cutting-edge machine learning techniques, the scarcity of data, especially labeled data, remains a considerable obstacle, motivating the use of semi-supervised approaches along with Generative Artificial Intelligence (Generative AI) techniques. This paper introduces a multilingual semi-supervised model combining Generative Adversarial Networks (GANs) and Pretrained Language Models (PLMs), specifically mBERT and XLM-RoBERTa. Our approach proves effective in detecting hate speech and offensive language in Indo-European languages (English, German, and Hindi) while using only 20% annotated data from the HASOC2019 dataset, achieving strong performance in multilingual, zero-shot cross-lingual, and monolingual training scenarios. Our study provides a robust mBERT-based semi-supervised GAN model (SS-GAN-mBERT) that outperformed the XLM-RoBERTa-based model (SS-GAN-XLM) and reached an average F1 score boost of 9.23% and an accuracy increase of 5.75% over the baseline semi-supervised mBERT model.
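The semi-supervised GAN setup described in this abstract (in the style of GAN-BERT models) can be illustrated by the discriminator's combined objective: cross-entropy on the small labeled portion plus a real-vs-fake term on unlabeled and generated examples. The sketch below is a minimal NumPy illustration of that loss under stated assumptions, not the paper's implementation; the function name, class count, and shapes are invented for the example.

```python
import numpy as np

def semi_supervised_discriminator_loss(labeled_logits, labels,
                                       unlabeled_logits, fake_logits):
    """Illustrative GAN-BERT-style discriminator objective.

    The discriminator outputs k+1 logits: k real classes (e.g., hate /
    offensive / neither) plus one extra "fake" class. It is trained with
    (a) cross-entropy on the small labeled set over the k real classes and
    (b) a real-vs-fake loss on unlabeled and generator-produced examples.
    """
    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    # (a) supervised cross-entropy on labeled examples (real classes only)
    p_lab = softmax(labeled_logits)
    sup = -np.mean(np.log(p_lab[np.arange(len(labels)), labels] + 1e-12))

    # (b) unsupervised: unlabeled examples should NOT land in the fake
    # class (last column); generated examples SHOULD.
    p_unl = softmax(unlabeled_logits)
    p_fake = softmax(fake_logits)
    unsup = (-np.mean(np.log(1.0 - p_unl[:, -1] + 1e-12))
             - np.mean(np.log(p_fake[:, -1] + 1e-12)))
    return sup + unsup
```

With only 20% of examples labeled, the supervised term sees a small batch while the unsupervised term exploits the remaining data, which is the core of the semi-supervised gain the abstract reports.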
4. The association between exposure to hate speech or perceived discrimination and mental health problems among Korean residents in Japan. Transcult Psychiatry 2024; 61:133-141. PMID: 38297813. DOI: 10.1177/13634615231225127.
Abstract
This study evaluated the effect of perceived discrimination and racism on the mental health of Korean residents in Japan, with a particular focus on the risk of post-traumatic stress disorder (PTSD), depression, and psychological distress. Surveys were sent to Korean residents in Japan, and a total of 240 valid responses were received (a valid response rate of 27.1%). Participants answered several questionnaire items, including demographic information and questions about their experiences of perceived discrimination, along with three self-report measures of mental health: the Japanese version of the Impact of Event Scale-Revised, the Zung Self-rating Depression Scale (SDS), and the 12-item General Health Questionnaire (GHQ-12). The results indicated that Korean residents in Japan experience hate speech and discrimination with markedly high frequency (92.9% and 100%, respectively), and that factors such as employment discrimination and exposure to hate speech via social networking services were significant predictors of probable PTSD and psychological distress.
5. Editorial: Understanding malicious behaviors on digital platforms. Front Psychol 2024; 15:1384226. PMID: 38500645. PMCID: PMC10945009. DOI: 10.3389/fpsyg.2024.1384226.
6. Validation of the multidimensional bystander responses to racist hate speech scale and its association with empathy and moral disengagement among adolescents. Aggress Behav 2024; 50:e22105. PMID: 37490043. DOI: 10.1002/ab.22105.
Abstract
Our understanding of how bystanders respond to hate speech is limited, due in part to the lack of available measurement tools. Yet understanding adolescents' responses to hate speech is critical, because such research can support schools in empowering students to exhibit courageous moral behavior. The purpose of the present study was therefore to investigate the psychometric properties of the newly developed Multidimensional Bystander Responses to Hate Speech Scale (MBRHS) and to explore demographic differences and correlates of bystander behavior in school hate speech. The sample consisted of 3225 seventh to ninth graders from Germany and Switzerland. Exploratory and confirmatory factor analyses supported a model with seven factors. We found that adolescents with an immigrant background and boys showed particularly unfavorable response patterns. In addition, our study suggests that empathy is positively correlated with the factors comforting the victim, seeking help at school, and countering hate speech, but negatively correlated with helplessness, revenge, reinforcing, and ignoring; moral disengagement showed the opposite correlational pattern. The findings indicate that the MBRHS is a psychometrically valid and reliable measure of varied responses to hate speech. This work also highlights the relevance of empathy and moral engagement training in anti-hate speech prevention programs.
7. Direct and indirect effects of social dominance orientation on hate speech perpetration via empathy and moral disengagement among adolescents: A multilevel mediation model. Aggress Behav 2024; 50:e22100. PMID: 37405843. DOI: 10.1002/ab.22100.
Abstract
Although it is known that social dominance orientation directly affects hate speech perpetration, few studies have explored the mechanisms by which this effect takes place during adolescence. Based on the socio-cognitive theory of moral agency, we aimed to fill this gap by exploring the direct and indirect effects of social dominance orientation on hate speech perpetration in offline and online settings. The sample included seventh, eighth, and ninth graders (N = 3225; 51.2% girls, 37.2% with an immigrant background) from 36 Swiss and German schools who completed a survey about hate speech, social dominance orientation, empathy, and moral disengagement. A multilevel mediation path model revealed that social dominance orientation had a direct effect on offline and online hate speech perpetration, as well as indirect effects via low levels of empathy and high levels of moral disengagement. No gender differences were observed. We discuss the findings with regard to their potential contribution to preventing hate speech during adolescence.
8. Desensitization to hate speech: Examination using heart rate measurement. Aggress Behav 2024; 50:e22118. PMID: 37843924. DOI: 10.1002/ab.22118.
Abstract
Exposure to hate speech (HS) leads to desensitization of listeners. Yet, most evidence of this process has been obtained using self-report measures. In this paper, we examined desensitization to HS using an unobtrusive, psychophysiological measure. In an experimental electrocardiogram study (N = 56), we observed heart rate (HR) deceleration after reading comments that contained HS. This suggested a substantive psychophysiological reaction of participants to hateful comments. However, such HR deceleration was not observed among participants preexposed to HS. People exposed to hateful comments thus appeared to show different HR responses to HS compared to people who were not previously exposed to such comments. Consequently, not only does frequent exposure to HS influence an individual's beliefs as observed in earlier studies, but it also impacts psychophysiological reactions to derogatory language.
9. A Systematic Review on Hate Speech among Children and Adolescents: Definitions, Prevalence, and Overlap with Related Phenomena. Trauma, Violence & Abuse 2023; 24:2598-2615. PMID: 35731198. PMCID: PMC10486144. DOI: 10.1177/15248380221108070.
Abstract
Little is known about the current state of research on the involvement of young people in hate speech. This systematic review therefore presents findings on (a) the prevalence of hate speech among children and adolescents, and the hate speech definitions that guide prevalence assessments for this population; and (b) the theoretical and empirical overlap of hate speech with related concepts. The review was guided by the Cochrane approach. To be included, publications had to deal with real-life experiences of hate speech, provide empirical prevalence data for samples aged 5 to 21 years, and be published in academic formats. Included publications were full-text coded by two raters (κ = .80) and their quality was assessed. The string-guided electronic search (ERIC, SocInfo, PsycInfo, Psyndex) yielded 1,850 publications, of which 18 publications based on 10 studies met the inclusion criteria; their findings were systematized. Twelve publications were of medium quality due to minor deficiencies in their theoretical or methodological foundations. All studies used samples of adolescents, none of younger children, and nine out of 10 applied quantitative methodologies. Results showed that frequencies of hate speech exposure were higher than those of victimization and perpetration. Definitions of hate speech and assessment instruments were heterogeneous. Empirical evidence was found for an often-theorized overlap between hate speech and bullying. The paper concludes by presenting a definition of hate speech, including implications for practice, policy, and research.
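The inter-rater agreement reported in this abstract (κ = .80) is Cohen's kappa: observed agreement between two raters corrected for the agreement expected by chance. A minimal sketch of the computation, with invented ratings:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters coding the same items.
    Undefined (division by zero) when chance agreement equals 1."""
    n = len(ratings_a)
    labels = set(ratings_a) | set(ratings_b)
    # observed proportion of items on which the raters agree
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # agreement expected if each rater assigned labels independently
    # according to their own marginal label frequencies
    p_chance = sum(
        (ratings_a.count(k) / n) * (ratings_b.count(k) / n) for k in labels
    )
    return (p_obs - p_chance) / (1 - p_chance)

# Two raters, four items, one disagreement:
kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])  # -> 0.5
```

A raw 75% agreement here shrinks to κ = 0.5 once chance agreement is removed, which is why review methodologies report kappa rather than simple percent agreement.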
10. The relation of classroom climate to adolescents' countering hate speech via social skills: A positive youth development perspective. J Adolesc 2023; 95:1127-1139. PMID: 37118915. DOI: 10.1002/jad.12180.
Abstract
INTRODUCTION: Hate speech is a current challenge for schools around the globe. At the same time, students worldwide stand up to hate speech by countering it. Guided by a positive youth development perspective, the present study investigated the direct and indirect associations between classroom climate (environmental assets), social skills (personal assets), and countering hate speech (as a proxy of thriving) among adolescents.
METHODS: The sample included 3225 students in grades 7-9 (51.7% self-identified as female) from 40 schools in Germany (n = 1841) and Switzerland (n = 1384). Students completed self-report questionnaires that assessed classroom climate, three facets of social skills (i.e., perspective-taking, prosocial behavior, assertiveness), and counterspeech.
RESULTS: The 2-(1-1-1)-1 multilevel mediation analysis revealed that classroom climate (L2) and the three facets of social skills (L1) had direct positive effects on counterspeech (L1). Classroom climate (L2) also had a direct positive effect on the three facets of social skills (L1). Finally, classroom climate (L2) had an indirect positive effect on counterspeech (L1) via all three facets of social skills (L1).
CONCLUSION: The findings suggest that successful anti-hate speech programs may combine environmental and personal factors to increase adolescents' active contribution to an inclusive, discrimination-free classroom environment where hate speech is not tolerated.
11. Making an Impact: Evaluation of an Enduring Online Training Empowering Clinicians to Respond to Hate Speech. J Cogn Psychother 2023. PMID: 37369543. DOI: 10.1891/jcp-2022-0024.
Abstract
Mental health clinicians frequently experience hate speech during patient care, resulting in an ethical dilemma. This study evaluated a 1-hour webinar discussing the ethics of working with Veterans who use hate speech, motivations and intentions of hate speech, and guidance on how to respond. The webinar was offered through the virtual Community-Based Outpatient Clinic Mental Health Grand Rounds session at the Veterans Health Administration (VHA). Rural mental health clinicians were the target audience; however, all VHA clinicians could attend. Participants were VHA clinicians who completed the evaluation for the training and received one continuing education unit for this training (n = 668). They were highly satisfied with the training and would recommend it to others. They also reported the intention to talk with coworkers and trainees about responding to hate speech and requested additional training. The recorded training can be viewed for free at https://www.mirecc.va.gov/visn16/working-with-patients-who-use-hate-speech.asp.
12. Disrupting hate: The effect of deplatforming hate organizations on their online audience. Proc Natl Acad Sci U S A 2023; 120:e2214080120. PMID: 37276418. DOI: 10.1073/pnas.2214080120.
Abstract
How does removing the leadership of online hate organizations from online platforms change the behavior of their target audience? We study the effects of six network disruptions of designated and banned hate-based organizations on Facebook, in which known members of the organizations were removed from the platform, by examining the online engagements of each organization's audience. Using a difference-in-differences approach, we show that on average the network disruptions reduced the consumption and production of hateful content, along with engagement within the network among periphery members. Members of the audience closest to the core members exhibited signs of backlash in the short term, but reduced their engagement within the network and with hateful content over time. The results suggest that targeted removal strategies, such as leadership removal and network degradation efforts, can reduce the ability of hate organizations to operate successfully online.
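The difference-in-differences logic used in this study can be sketched in its canonical two-group, two-period form: the change in the treated audience's mean outcome minus the change in a comparison audience's mean, which nets out time trends common to both groups. This is a toy sketch with invented engagement counts, far simpler than the paper's staggered, multi-disruption design.

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical 2x2 difference-in-differences on group means:
    (treated change over time) minus (control change over time)."""
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(treat_post) - mean(treat_pre))
            - (mean(ctrl_post) - mean(ctrl_pre)))

# Hateful posts per user before/after a disruption (invented data):
effect = did_estimate(
    treat_pre=[10, 12], treat_post=[6, 8],  # audience of banned org
    ctrl_pre=[9, 11], ctrl_post=[9, 11],    # comparison audience
)  # -> -4.0: four fewer hateful posts per user, net of the time trend
```

Because the control group's change is subtracted out, a platform-wide decline in posting would not be mistaken for an effect of the removals; only the treated group's excess change is attributed to the disruption.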
13. Identifying Hate Speech and Attribution of Responsibility: An Analysis of Simulated WhatsApp Conversations during the Pandemic. Healthcare (Basel) 2023; 11:1564. PMID: 37297704. DOI: 10.3390/healthcare11111564.
Abstract
BACKGROUND: During the COVID-19 pandemic, public confrontations between people who had agreed to be vaccinated and those who had not highlighted the deepening dissemination of violent and discriminatory expression and raised the question of how well people perceive hate discourse.
METHOD: A cross-sectional observational study was carried out using an innovative methodology: simulated WhatsApp conversations. Variables considered included, among others, level of empathy, personality traits, and conflict resolution.
RESULTS: The participants were 567 nursing students (413 females, 153 males, and 1 person who did not identify with any gender). The results showed that participants mostly identified hate speech correctly, but were unable to discern its frame of reference.
CONCLUSIONS: Intervention strategies are needed to minimize the impact of hate speech, which continues to be used at many levels to harass others, justify violence, or undermine rights, generating an environment of prejudice and intolerance that encourages discrimination and violent attacks against certain individuals or collectives.
14. Who are the haters? A corpus-based demographic analysis of authors of hate speech. Front Artif Intell 2023; 6:986890. PMID: 37275533. PMCID: PMC10235607. DOI: 10.3389/frai.2023.986890.
Abstract
Introduction: We examine the profiles of hate speech authors in a multilingual dataset of Facebook reactions to news posts discussing topics related to migrants and the LGBT+ community. The included languages are English, Dutch, Slovenian, and Croatian.
Methods: First, all utterances were manually annotated as hateful or acceptable speech. Next, we used binary logistic regression to inspect how the production of hateful comments is affected by authors' profiles (i.e., their age, gender, and language).
Results: Our results corroborate previous findings: in all four languages, men produce more hateful comments than women, and people produce more hate speech as they grow older. But our findings also add important nuance to previously attested tendencies: specific age and gender dynamics vary slightly across languages and cultures, suggesting that distinct (e.g., socio-political) realities are at play.
Discussion: Finally, we discuss why author demographics are important in the study of hate speech: the profiles of prototypical "haters" can inform hate speech detection, awareness-raising, and counter-initiatives against the spread of (online) hatred.
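The binary logistic regression used here models the probability that a comment is hateful as a function of author profile variables. The sketch below is a bare-bones stand-in fit by stochastic gradient descent on invented toy data; the feature names, coding, and resulting coefficients are all assumptions for illustration, not the paper's.

```python
import math

def fit_logreg(X, y, lr=0.1, epochs=1000):
    """Binary logistic regression via stochastic gradient descent.
    w[0] is the intercept; w[1:] are per-feature coefficients."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            z = max(min(z, 30.0), -30.0)       # clamp for numerical safety
            err = 1 / (1 + math.exp(-z)) - yi  # predicted prob - label
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

# Toy author profiles [age_in_decades, is_male]; label 1 = hateful comment.
X = [[2, 0], [3, 1], [4, 0], [5, 1], [6, 1], [7, 1]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logreg(X, y)  # expect a positive age coefficient w[1]
```

In a real analysis the fitted coefficients (or their odds ratios) are what gets interpreted, e.g. a positive age coefficient corresponding to the finding that hate speech production rises with age.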
15. Perception of Hate Speech by the Public and Experts: Insights into Predictors of the Perceived Hate Speech Towards Migrants. Cyberpsychology, Behavior and Social Networking 2023. PMID: 37140462. DOI: 10.1089/cyber.2022.0191.
Abstract
Online hate speech is a matter of concern for social media platforms, regulators, researchers, and the public. Despite its widespread prevalence and contentious nature, little research has been done on the perception of hate speech and its psychosocial predictors. To address this gap, we conducted a study on the perception of hate speech toward migrants in online comments, analyzing the differences between a public group (N = 649) and an expert group (N = 27) and exploring the correlation between the proposed hate speech indicators and perceived hate speech in both groups. Additionally, we explored various predictors of hate speech perception, including demographic and psychological variables such as human values, prejudice, aggression, impulsiveness, social media behavior, attitudes toward migrants and migration, and trust in institutions. Our results show that the public and experts have differing sensitivities toward hate speech, with the expert group perceiving comments as more hateful and emotionally harmful compared with the general population, who tend to agree more with antimigrant hateful comments. The proposed hate speech indicators, and especially their total scores, correlate strongly with both groups' perceptions of hate speech. Psychological predictors, such as the human values of universalism, tradition, and security, and subjective social distance, were significant predictors of online hate speech sensitivity. Our findings emphasize the need for public and scholarly discussion, more robust educational policies, and intervention programs with specific measures to counter hate speech online.
16. Content Moderation on Social Media: Does It Matter Who and Why Moderates Hate Speech? Cyberpsychology, Behavior and Social Networking 2023. PMID: 37140448. DOI: 10.1089/cyber.2022.0158.
Abstract
Artificial intelligence (AI) has been increasingly integrated into content moderation to detect and remove hate speech on social media. An online experiment (N = 478) was conducted to examine how moderation agents (AI vs. human vs. human-AI collaboration) and removal explanations (with vs. without) affect users' perceptions and acceptance of removal decisions for hate speech targeting social groups with certain characteristics, such as religion or sexual orientation. The results showed that individuals exhibit consistent levels of perceived trustworthiness and acceptance of removal decisions regardless of the type of moderation agent. When explanations for the content takedown were provided, removal decisions made jointly by humans and AI were perceived as more trustworthy than the same decisions made by humans alone, which increased users' willingness to accept the verdict. However, this moderated mediation effect was only significant when Muslims, not homosexuals, were the target of hate speech.
17. Roman Urdu Hate Speech Detection Using Transformer-Based Model for Cyber Security Applications. Sensors (Basel) 2023; 23:3909. PMID: 37112249. PMCID: PMC10143294. DOI: 10.3390/s23083909.
Abstract
Social media applications such as Twitter and Facebook allow users to communicate and share their thoughts, status updates, opinions, photographs, and videos around the globe. Unfortunately, some people use these platforms to disseminate hate speech and abusive language. The growth of hate speech may result in hate crimes, cyber violence, and substantial harm to cyberspace, physical security, and social safety. As a result, hate speech detection is a critical issue for both cyberspace and physical society, necessitating the development of robust applications capable of detecting and combating it in real time. Hate speech detection is a context-dependent problem that requires context-aware mechanisms for resolution. In this study, we employed a transformer-based model for Roman Urdu hate speech classification because of its ability to capture textual context. In addition, we developed the first Roman Urdu pre-trained BERT model, which we named BERT-RU. For this purpose, we exploited the capabilities of BERT by training it from scratch on the largest Roman Urdu dataset, consisting of 173,714 text messages. Traditional and deep learning models were used as baselines, including LSTM, BiLSTM, BiLSTM + Attention Layer, and CNN. We also investigated transfer learning by using pre-trained BERT embeddings in conjunction with deep learning models. The performance of each model was evaluated in terms of accuracy, precision, recall, and F-measure, and the generalization of each model was evaluated on a cross-domain dataset. The experimental results revealed that the transformer-based model, when applied directly to the Roman Urdu hate speech classification task, outperformed traditional machine learning models, deep learning models, and pre-trained transformer-based models, achieving accuracy, precision, recall, and F-measure scores of 96.70%, 97.25%, 96.74%, and 97.89%, respectively. The transformer-based model also exhibited superior generalization on the cross-domain dataset.
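The evaluation metrics named in this abstract (accuracy, precision, recall, F-measure) can be illustrated with a minimal, self-contained sketch; the toy labels below are invented for illustration and are not the paper's data.

```python
# Minimal sketch of the evaluation metrics named in the abstract
# (accuracy, precision, recall, F-measure) for a binary
# hate-speech / not-hate-speech classifier. Labels are toy values.

def classification_metrics(y_true, y_pred, positive=1):
    # Count the four confusion-matrix cells for the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure

# 1 = hate speech, 0 = not hate speech (toy labels).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
print(acc, prec, rec, f1)  # → 0.75 0.75 0.75 0.75
```

Precision penalizes false alarms and recall penalizes misses; the F-measure is their harmonic mean, which is why the paper reports all four rather than accuracy alone.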
Collapse
|
18
|
Motives of Online Hate Speech: Results from a Quota Sample Online Survey. CYBERPSYCHOLOGY, BEHAVIOR AND SOCIAL NETWORKING 2023. [PMID: 37037039 DOI: 10.1089/cyber.2022.0188] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/12/2023]
Abstract
Online hate speech (OHS) is a prevalent societal problem, but most studies investigating its reasons and causes focus on the perpetrators' side while ignoring the bystanders' and the victims' sides. This is also true for the underlying theories. We therefore proposed a new Action-Theoretical Model of Online Hate Speech (ATMOHS), which assumes that OHS is a product of environmental, situational, and personal variables involving three groups (perpetrators, bystanders, and victims), each with its own set of motives, attitudes, traits, and norm beliefs that shape its behavior. The model was put to a first test with an online survey using a quota sample of the German online population (N = 1,791). The study at hand is a first analysis of these data, focusing on the motives of OHS. Results show that wanting to be a role model for others is an important motive on the active bystanders' side. However, it could not be confirmed that any aggression motive is important on the perpetrators' side or that undeservingness is an important motive on the victims' side. Future studies could investigate whether there are other motives on the victims' side that are in line with the underlying theory of learned helplessness, or whether there is a better theory for modeling the victims' side. Future studies could also develop a better scale for aggression motives. In practice, prevention programs could focus on being a role model for others as a relevant motive for becoming an active bystander.
Collapse
|
19
|
On the rise of fear speech in online social media. Proc Natl Acad Sci U S A 2023; 120:e2212270120. [PMID: 36877833 PMCID: PMC10089164 DOI: 10.1073/pnas.2212270120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Accepted: 11/29/2022] [Indexed: 03/08/2023] Open
Abstract
In recent years, social media platforms have been heavily moderated to prevent the spread of online hate speech, which is usually rife with toxic words and is directed toward an individual or a community. Owing to such heavy moderation, newer and more subtle techniques are being deployed. One of the most striking among these is fear speech. Fear speech, as the name suggests, attempts to incite fear about a target community. Although subtle, it can be highly effective, often pushing communities toward physical conflict. Understanding its prevalence on social media is therefore of paramount importance. This article presents a large-scale study of 400K fear speech posts and over 700K hate speech posts collected from Gab.com. Remarkably, users posting a large amount of fear speech accrue more followers and occupy more central positions in social networks than users posting a large amount of hate speech. They can also reach benign users more effectively than hate speech users through replies, reposts, and mentions. This connects to the fact that, unlike hate speech, fear speech contains almost no toxic content, making it look plausible. Moreover, while fear speech topics mostly portray a community as a perpetrator using a (fake) chain of argumentation, hate speech topics hurl direct multitarget insults, which points to why general users may be more susceptible to fear speech. Our findings extend even to other platforms (Twitter and Facebook) and thus necessitate sophisticated moderation policies and mass awareness to combat fear speech.
Collapse
|
20
|
Is hate speech detection the solution the world wants? Proc Natl Acad Sci U S A 2023; 120:e2209384120. [PMID: 36848573 PMCID: PMC10013846 DOI: 10.1073/pnas.2209384120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/01/2023] Open
Abstract
The machine learning (ML) research community has landed on automated hate speech detection as the vital tool for mitigating bad behavior online. However, it is not clear that this view is widely supported outside of the ML world. Such a disconnect can have implications for whether automated detection tools are accepted or adopted. Here we lend insight into how other key stakeholders understand the challenge of addressing hate speech and the role automated detection plays in solving it. To do so, we develop and apply a structured approach to dissecting the discourses used by online platform companies, governments, and not-for-profit organizations when discussing hate speech. We find that, where hate speech mitigation is concerned, there is a profound disconnect between the computer science research community and other stakeholder groups, which puts progress on this important problem at serious risk. We identify urgent steps that need to be taken to incorporate computational researchers into a single, coherent, multistakeholder community working towards civil discourse online.
Collapse
|
21
|
Associations Between Classroom Climate, Empathy, Self-Efficacy, and Countering Hate Speech Among Adolescents: A Multilevel Mediation Analysis. JOURNAL OF INTERPERSONAL VIOLENCE 2023; 38:5067-5091. [PMID: 36148972 PMCID: PMC9900686 DOI: 10.1177/08862605221120905] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Although hate speech is widely recognized as an online phenomenon, very few studies have investigated hate speech among adolescents in offline settings (e.g., schools). At the same time, not much is known about countering hate speech (counterspeech) among adolescents and which factors are associated with it. To this end, the present study used the socio-ecological framework to investigate the direct and indirect links among one contextual factor (i.e., classroom climate) and two intrapersonal factors (i.e., empathy for victims of hate speech, self-efficacy regarding intervention in hate speech) to understand counterspeech among adolescents. The sample is based on self-reports of 3,225 students in Grades 7 to 9 (51.7% self-identified as female) from 36 schools in Germany and Switzerland. Self-report questionnaires were administered to measure classroom climate, empathy, self-efficacy, and counterspeech. After controlling for adolescents' grade, gender, immigrant background, and socioeconomic status (SES), the 2-(1-1)-1 multilevel mediation analysis showed that classroom climate (L2), empathy for victims of hate speech (L1), and self-efficacy toward intervention in hate speech (L1) had a positive effect on countering hate speech (L1). Classroom climate (L2) was also positively linked to empathy for victims of hate speech (L1), and self-efficacy toward intervention in hate speech (L1). Furthermore, classroom climate (L2) was indirectly associated with countering hate speech (L1) via greater empathy (L1) and self-efficacy (L1). The findings highlight the need to focus on contextual and intrapersonal factors when trying to facilitate adolescents' willingness to face hate speech with civic courage and proactively engage against it.
Collapse
|
22
|
Hatemongers ride on echo chambers to escalate hate speech diffusion. PNAS NEXUS 2023; 2:pgad041. [PMID: 36926221 PMCID: PMC10011877 DOI: 10.1093/pnasnexus/pgad041] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 01/24/2023] [Accepted: 01/31/2023] [Indexed: 02/10/2023]
Abstract
Recent years have witnessed a sharp rise in hateful and abusive content on online social networks. While detection and moderation of hate speech have been the early go-to countermeasures, a solution requires a deeper exploration of the dynamics of hate generation and propagation. We analyze more than 32 million posts from over 6.8 million users across three popular online social networks to investigate the interrelations between hateful behavior, information dissemination, and polarized organization mediated by echo chambers. We find that hatemongers play a more crucial role in governing the spread of information than isolated hateful content does. This observation holds both for the growth of information cascades and for the conglomeration of hateful actors. Dissection of the core-wise distribution of these networks shows that hateful users occupy more well-connected positions in the social network and often flock together to build up information cascades. This cohesion is far from mere organized behavior; instead, in these networks, hatemongers dominate the echo chambers, groups of users who actively align themselves to specific ideological positions. The observed dominance of hateful users in inflating information cascades operates primarily via user interactions amplified within these echo chambers. We conclude our study with a cautionary note that popularity-based content recommendation is susceptible to exploitation by hatemongers, given their potential to escalate content popularity via echo-chambered interactions.
Collapse
|
23
|
Development of a COVID-19-Related Anti-Asian Tweet Data Set: Quantitative Study. JMIR Form Res 2023; 7:e40403. [PMID: 36693148 PMCID: PMC9976773 DOI: 10.2196/40403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 11/10/2022] [Accepted: 11/11/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Since the advent of the COVID-19 pandemic, individuals of Asian descent (following colloquial usage prevalent in North America, where "Asian" refers to people from East Asia, particularly China) have been the subject of stigma and hate speech in both offline and online communities. One of the major venues for encountering such unfair attacks is social networks, such as Twitter. As the research community seeks to understand, analyze, and implement detection techniques, high-quality data sets are becoming immensely important. OBJECTIVE In this study, we introduce a manually labeled data set of tweets containing anti-Asian stigmatizing content. METHODS We sampled over 668 million tweets posted on Twitter from January to July 2020 and used an iterative data construction approach that included 3 different stages of algorithm-driven data selection. Finally, volunteers manually annotated the tweets to arrive at a high-quality data set of tweets and a second, smaller data set with higher-quality labels from multiple annotators. We present this final high-quality Twitter data set on stigma toward Chinese people during the COVID-19 pandemic. The data set and labeling instructions can be viewed in the GitHub repository. Furthermore, we implemented some state-of-the-art models to detect stigmatizing tweets in order to set initial benchmarks for our data set. RESULTS Our primary contributions are labeled data sets. Data Set v3.0 contained 11,263 tweets with primary labels (unknown/irrelevant, not-stigmatizing, stigmatizing-low, stigmatizing-medium, stigmatizing-high) and tweet subtopics (eg, wet market and eating habits, COVID-19 cases, bioweapon). Data Set v3.1 contained 4998 (44.4%) tweets randomly sampled from Data Set v3.0, where a second annotator labeled them on the primary labels only and a third annotator then resolved conflicts between the first and second annotators. To demonstrate the usefulness of our data set, preliminary experiments showed that the Bidirectional Encoder Representations from Transformers (BERT) model achieved the highest accuracy, 79%, when detecting stigma on unseen data, compared with traditional models such as a support vector machine (SVM), which performed at 73% accuracy. CONCLUSIONS Our data set can be used as a benchmark for further qualitative and quantitative research and analysis around the issue. It first reaffirms the existence and significance of widespread discrimination and stigma toward the Asian population worldwide. Moreover, our data set and subsequent arguments should assist other researchers from various domains, including psychologists, public policy authorities, and sociologists, in analyzing the complex economic, political, historical, and cultural roots underlying anti-Asian stigmatization and hateful behaviors. A manually annotated data set is of paramount importance for developing algorithms that can detect stigma or problematic text, particularly on social media. We believe this contribution will help predict, and subsequently design, interventions that will significantly reduce stigma, hate, and discrimination against marginalized populations during future crises like COVID-19.
Collapse
|
24
|
Offensive, hateful comment: A networked discourse practice of blame and petition for justice during COVID-19 on Chinese Weibo. DISCOURSE STUDIES 2023; 25:3-24. [PMID: 38603137 PMCID: PMC9551666 DOI: 10.1177/14614456221129485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/13/2024]
Abstract
Using data from user comments on the official social networking account of the Hubei Red Cross Foundation on a participatory web platform, this study attends to the offensive and hateful comments produced by ordinary Internet users to blame the elite authorities for their malfeasance in managing donations during the COVID-19 pandemic in China. Drawing on Discursive Psychology, we focus on the rhetorical strategies users employ to legitimise their actions as well-founded, evidential blame against a norm-breaking act rather than radical extremist speech. The associated hatred among discussants constitutes moral and social judgements. That said, hate speech also helps construct the moral standards of a normalised society.
Collapse
|
25
|
Moralized language predicts hate speech on social media. PNAS NEXUS 2022; 2:pgac281. [PMID: 36712927 PMCID: PMC9837664 DOI: 10.1093/pnasnexus/pgac281] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Accepted: 12/02/2022] [Indexed: 12/12/2022]
Abstract
Hate speech on social media threatens the mental health of its victims and poses severe safety risks to modern societies. Yet the mechanisms underlying its proliferation, though critical, have remained largely unresolved. In this work, we hypothesize that moralized language predicts the proliferation of hate speech on social media. To test this hypothesis, we collected three datasets consisting of N = 691,234 social media posts authored by societal leaders across three domains (politics, news media, and activism), along with ∼35.5 million corresponding replies on Twitter. Subsequently, we used textual analysis and machine learning to examine whether moralized language in source tweets is linked to differences in the prevalence of hate speech in the corresponding replies. Across all three datasets, we consistently observed that higher frequencies of moral and moral-emotional words predict a higher likelihood of receiving hate speech. On average, each additional moral word was associated with between 10.76% and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35% and 20.63%. Furthermore, moralized language was a robust out-of-sample predictor of hate speech. These results shed new light on the antecedents of hate speech and may help to inform measures to curb its spread on social media.
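The reported effect sizes are per-word odds multipliers of the kind produced by a logistic-style model; a small sketch can show how such a multiplier compounds across word counts. The 5% baseline probability below is an illustrative assumption, and only the 10.76% lower-bound figure is taken from the abstract; this is not a re-analysis of the paper's data.

```python
import math  # kept for clarity; odds math here needs only arithmetic

def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1.0 + o)

# Illustrative assumption: a 5% baseline chance that a post receives
# hate speech, combined with the abstract's lower-bound estimate of
# +10.76% odds per additional moral word.
baseline_p = 0.05
per_word_or = 1.1076  # odds ratio per moral word (lower bound from abstract)

for k in (0, 1, 5):
    o = odds(baseline_p) * per_word_or ** k
    print(k, round(prob(o), 4))
```

Because the multiplier applies to odds rather than probabilities, the probability increase per word is smaller at low baselines and grows as the baseline rises; this is why odds ratios and "percent higher odds" should not be read as percentage-point changes in probability.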
Collapse
|
26
|
Playing by the Rules? An Investigation of the Relationship Between Social Norms and Adolescents' Hate Speech Perpetration in Schools. JOURNAL OF INTERPERSONAL VIOLENCE 2022; 37:NP21143-NP21164. [PMID: 34866450 PMCID: PMC9554370 DOI: 10.1177/08862605211056032] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Hate speech is often discussed and investigated as an online phenomenon, while hate speech among adolescents in the real world (e.g., in schools) has rarely been researched. Consequently, not much is known about potential interpersonal correlates and theoretical frameworks that might help us better understand why adolescents engage in hate speech in offline environments. To add to the literature, this study investigates hate speech perpetration among young people by utilizing the Social Cognitive Theory; more specifically, the association between witnessing and perpetrating hate speech in schools, and whether this relation is weakened or strengthened by anti-hate speech injunctive norms and deviant peer pressure. The sample consists of 1719 young people (45.1% boys, 53.6% girls, 1.3% gender diverse) between 11 and 18 years old (Mage = 13.96; SD = .98) from 22 schools in Switzerland. Self-report questionnaires were administered to measure hate speech involvement and adolescents' perception of social norms (i.e., anti-hate speech injunctive norms and deviant peer pressure). Multilevel regression analyses revealed that witnessing and perpetrating hate speech were positively associated. Moreover, injunctive anti-hate speech norms were negatively related and deviant peer pressure positively related to hate speech perpetration. Finally, increasing levels of injunctive anti-hate speech norms weakened the relation between witnessing and perpetrating hate speech, whereas higher levels of deviant peer pressure strengthened this association. The findings demonstrate that the Social Cognitive Theory is a useful framework for understanding hate speech perpetration in schools. The results also highlight the importance of taking into account social norms and interpersonal relationships (e.g., within the class) when designing anti-hate speech prevention programs and not focusing solely on intrapersonal factors.
Collapse
|
27
|
Online Hate Speech Victimization and Depressive Symptoms Among Adolescents: The Protective Role of Resilience. CYBERPSYCHOLOGY, BEHAVIOR AND SOCIAL NETWORKING 2022; 25:416-423. [PMID: 35639126 DOI: 10.1089/cyber.2022.0009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Online hate speech has become a widespread problem in the daily life of adolescents. Despite growing societal and academic interest in this online risk, not much is known about the relationship between online hate speech victimization (OHSV) and adolescents' mental well-being. In addition, potential factors influencing the magnitude of this relationship remain unclear. To address these gaps in the literature, this study investigated the relationship between OHSV and depressive symptoms and the buffering effects of resilience in this relationship. The sample consists of 1,632 adolescents (49.1% girls) between 12 and 18 years old (Mage = 13.83, SDage = 1.23), recruited from nine schools across Spain. Self-report questionnaires were administered to assess OHSV, depressive symptoms, and resilience. Regression analyses revealed that OHSV was positively linked to depressive symptoms. In addition, victims of online hate speech were less likely to report depressive symptoms when they reported average or high levels of resilience (i.e., social competence, personal competence, structured style, social resources, and family cohesion) compared with those with low levels of resilience. Our findings highlight the need for the development of intervention programs and the relevance of focusing on internal and external developmental assets to mitigate negative outcomes for victims of online hate speech.
Collapse
|
28
|
Associations between Coping Strategies and Cyberhate Involvement: Evidence from Adolescents across Three World Regions. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19116749. [PMID: 35682330 PMCID: PMC9180730 DOI: 10.3390/ijerph19116749] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Revised: 05/27/2022] [Accepted: 05/28/2022] [Indexed: 02/01/2023]
Abstract
Cyberhate represents a risk to adolescents' development and peaceful coexistence in democratic societies. Yet, not much is known about the relationship between adolescents' ability to cope with cyberhate and their cyberhate involvement. To fill current gaps in the literature and inform the development of media education programs, the present study investigated various coping strategies in a hypothetical cyberhate scenario as correlates for being cyberhate victims, perpetrators, and both victim-perpetrators. The sample consisted of 6829 adolescents aged 12-18 years old (Mage = 14.93, SD = 1.64; girls: 50.4%, boys: 48.9%, and 0.7% did not indicate their gender) from Asia, Europe, and North America. Results showed that adolescents who endorsed distal advice or endorsed technical coping showed a lower likelihood to be victims, perpetrators, or victim-perpetrators. In contrast, if adolescents felt helpless or endorsed retaliation to cope with cyberhate, they showed higher odds of being involved in cyberhate as victims, perpetrators, or victim-perpetrators. Finally, adolescents who endorsed close support as a coping strategy showed a lower likelihood to be victim-perpetrators, and adolescents who endorsed assertive coping showed higher odds of being victims. In conclusion, the results confirm the importance of addressing adolescents' ability to deal with cyberhate to develop more tailored prevention approaches. More specifically, such initiatives should focus on adolescents who feel helpless or feel inclined to retaliate. In addition, adolescents should be educated to practice distal advice and technical coping when experiencing cyberhate. Implications for the design and instruction of evidence-based cyberhate prevention (e.g., online educational games, virtual learning environments) will be discussed.
Collapse
|
29
|
The Online Hate Speech Cycle of Violence: Moderating Effects of Moral Disengagement and Empathy in the Victim-to-Perpetrator Relationship. CYBERPSYCHOLOGY, BEHAVIOR AND SOCIAL NETWORKING 2022; 25:223-229. [PMID: 35172113 DOI: 10.1089/cyber.2021.0159] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Adolescents around the globe are increasingly exposed to online hate speech (OHS). And yet little is known about the varying roles of involvement and the determinants of adolescents' hate speech perpetration. Building on previous research, this study aims to test the cycle of violence hypothesis for OHS and to analyze whether moral disengagement (MD) and empathy moderate the victim-to-perpetrator relationship. The sample consists of 3,560 seventh to ninth graders (52.1 percent girls), recruited from 40 schools across Germany and Switzerland. Self-report questionnaires were administered to assess OHS involvement, MD, and empathy. Multilevel analyses revealed that victims of OHS were more likely to report OHS perpetration. In addition, victims of OHS were more likely to report OHS perpetration when they reported higher levels of MD than those with lower levels of MD. Finally, victims of OHS were less likely to report OHS perpetration when they reported higher levels of empathy than those with lower levels of empathy. The findings extend the cycle of violence hypothesis to OHS and highlight the need to address MD and empathy in hate speech prevention. Implications for future research will be discussed.
Collapse
|
30
|
Photovoice as Counterspeech. Health Promot Pract 2022; 23:230-234. [PMID: 35285325 DOI: 10.1177/15248399211059129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
A notorious hate group purchased anti-Muslim advertisements on buses operated by the San Francisco Municipal Transit Authority. The San Francisco Human Rights Commission engaged members of the Arab, Middle Eastern, Muslim, and South Asian communities in a photovoice project to explore the cultural identities, challenges, and resilience of community members coping with discrimination. The project provided a case example of photovoice as counterspeech and demonstrated the way in which counterspeech empowers affected communities to push back against harmful and threatening expression with resilience, cultural pride, and self-determination. Women and men in the photovoice participant group represented a wide range of backgrounds and ethnicities: Palestinian, Indian, Pakistani, and Lebanese. Religious affiliations included Muslim, Sikh, Christian, nondenominational, and agnostic. The exhibit was presented to the public in three major venues and was made available online.
Collapse
|
31
|
Empathy-based counterspeech can reduce racist hate speech in a social media field experiment. Proc Natl Acad Sci U S A 2021; 118:2116310118. [PMID: 34873046 PMCID: PMC8685915 DOI: 10.1073/pnas.2116310118] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/02/2021] [Indexed: 11/18/2022] Open
Abstract
Despite heightened awareness of the detrimental impact of hate speech on social media platforms on affected communities and public discourse, there is little consensus on approaches to mitigate it. While content moderation, whether by governments or by social media companies, can curb online hostility, such policies may suppress valuable as well as illicit speech and might disperse rather than reduce hate speech. As an alternative strategy, an increasing number of international and nongovernmental organizations (I/NGOs) are employing counterspeech to confront and reduce online hate speech. Despite its growing popularity, there is scant experimental evidence in the public domain on the effectiveness and design of counterspeech strategies. Modeling our interventions on current I/NGO practice, we randomly assigned English-speaking Twitter users who had sent messages containing xenophobic (or racist) hate speech to one of three counterspeech strategies (empathy, warning of consequences, and humor) or a control group. Our intention-to-treat analysis of 1,350 Twitter users shows that empathy-based counterspeech messages can increase the retrospective deletion of xenophobic hate speech by 0.2 SD and reduce the prospective creation of xenophobic hate speech over a 4-wk follow-up period by 0.1 SD. We find, however, no consistent effects for strategies using humor or warning of consequences. Together, these results advance our understanding of the central role of empathy in reducing exclusionary behavior and inform the design of future counterspeech interventions.
Collapse
|
32
|
Linguistic Patterns for Code Word Resilient Hate Speech Identification. SENSORS 2021; 21:s21237859. [PMID: 34883861 PMCID: PMC8659976 DOI: 10.3390/s21237859] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Revised: 11/22/2021] [Accepted: 11/22/2021] [Indexed: 11/16/2022]
Abstract
The permanent transition to online activity has brought with it a surge in hate speech discourse. This has prompted increased calls for automatic detection methods, most of which currently rely on a dictionary of hate speech words and supervised classification. This approach often falls short when dealing with newer words and phrases produced by online extremist communities. These code words are used with the aim of evading automatic detection systems. Code words are frequently used with benign meanings in regular discourse; for instance, "skypes, googles, bing, yahoos" are all examples of words that have a hidden hate speech meaning. Such overlap presents a challenge to the traditional keyword approach of collecting data specific to hate speech. In this work, we first introduce a word embedding model that learns the hidden hate speech meaning of words. With this insight into code words, we develop a classifier that leverages linguistic patterns to reduce the impact of individual words. The proposed method was evaluated across three different datasets to test its generalizability. The empirical results show that the linguistic patterns approach outperforms the baselines and enables further analysis of hate speech expressions.
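The core idea, that a benign surface word like "skypes" acquires a hateful distributional meaning in extremist communities, can be sketched as a nearest-neighbor check in embedding space. The tiny hand-made vectors below are purely illustrative stand-ins for embeddings trained on the relevant corpora, and the `slur_a` anchor, threshold, and vector values are all invented assumptions, not the paper's model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy embedding table: in a model trained on extremist-community text,
# a code word's vector drifts toward the hate-term cluster. All vectors
# here are invented for illustration only.
embeddings = {
    "slur_a":  [0.9, 0.1, 0.0],   # stand-in for a known hate term
    "skypes":  [0.8, 0.2, 0.1],   # code word: close to the hate cluster
    "weather": [0.0, 0.1, 0.9],   # ordinary benign word
}

def looks_like_code_word(word, hate_anchor="slur_a", threshold=0.9):
    """Flag words whose corpus-specific embedding is suspiciously
    close to a known hate term."""
    return cosine(embeddings[word], embeddings[hate_anchor]) >= threshold

print(looks_like_code_word("skypes"))   # → True
print(looks_like_code_word("weather"))  # → False
```

A keyword dictionary would miss "skypes" entirely, which is the gap this embedding-based check is meant to illustrate; the paper's classifier goes further by combining such signals with linguistic patterns rather than relying on individual words.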
Collapse
|
33
|
Should We Protect Animals from Hate Speech? OXFORD JOURNAL OF LEGAL STUDIES 2021; 41:1149-1172. [PMID: 34876882 PMCID: PMC8643609 DOI: 10.1093/ojls/gqab013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Laws against hate speech protect members of certain human groups. However, they do not offer protection to nonhuman animals. Using racist hate speech as our primary example, we explore the discrepancy between the legal response to hate speech targeting human groups and what might be called anti-animal or speciesist hate speech. We explore two sets of possible defences of this legal discrepancy drawn from the philosophical literature on hate speech-non-consequentialist and harm-based-and find both wanting. We thus conclude that, absent a compelling alternative argument, there is no in-principle reason to support the censure of racist hate speech but not the censure of speciesist hate speech.
Collapse
|
34
|
Artificial intelligence against hate: Intervention reducing verbal aggression in the social network environment. Aggress Behav 2021; 47:260-266. [PMID: 33469962 DOI: 10.1002/ab.21948] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Revised: 12/16/2020] [Accepted: 12/28/2020] [Indexed: 01/09/2023]
Abstract
This article presents a quasi-experimental intervention study designed to reduce the level of verbal aggression on a social networking service (Reddit). The interventions were based on three psychological mechanisms: induction of a descriptive norm, induction of a prescriptive norm, and empathy induction. Each intervention was generated using a communicating bot. Participants exposed to these interventions were compared with a control group that received no intervention. The bot-generated normative communications (both the ones priming descriptive and the ones priming prescriptive norms), as well as the empathizing intervention, reduced the proportion of verbal aggression posted by Reddit accounts. All three interventions proved effective in reducing verbal violence when compared with the control condition.
|
35
|
Hate Speech, Emotions, and Gender Identities: A Study of Social Narratives on Twitter with Trainee Teachers. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18084055. [PMID: 33921402 PMCID: PMC8070138 DOI: 10.3390/ijerph18084055] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 04/03/2021] [Accepted: 04/09/2021] [Indexed: 11/16/2022]
Abstract
The objective of this study is, on the one hand, to analyse emotional responses to the construction of hate speech relating to gender identity on Twitter. On the other hand, the objective is to evaluate the capabilities of trainee primary education teachers at constructing alternative counter-narratives to this socially alive issue, surrounding the approval of the Ley de Identidad de Género [Gender Identity Law] in Chile, in 2018. With this two-fold objective in mind, quantitative, descriptive, and inferential analysis and qualitative analysis techniques are all applied. The results inform us of the influence of socially constructed emotions and feelings that are expressed in social narratives. However, the narratives of the participants neither appeared to reach satisfactory levels of reflection on the social issues that stirred their own emotional responses, nor on the conflict between reason and the value judgements that they expressed in the digital debate (counter-narratives). These results point to the need to consider both emotions and feelings, as categories of social analysis, and to reflect on their forms of expression within the framework of education for inclusive democratic citizenship.
|
36
|
Social Media Use by Young People Living in Conflict-Affected Regions of Myanmar. CYBERPSYCHOLOGY BEHAVIOR AND SOCIAL NETWORKING 2020; 23:876-888. [PMID: 33326325 DOI: 10.1089/cyber.2020.0131] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Since Myanmar's transition to civilian rule in 2011, the use of smartphones has exploded. The ubiquitous use of social media, in particular Facebook, has had a well-publicized dark side as a platform for disseminating fake news and hate speech. With poor digital literacy skills, Myanmar users are susceptible to disinformation campaigns and other online dangers, particularly young people, the most prevalent users. The current study collected benchmarking data on usage patterns and perceptions of social media among Myanmar youth living in conflict-affected areas, including internally displaced Rohingya Muslims, whose voice is yet to be captured in social media research. A total of 231 participants aged 13-35 from Rakhine and Kayah States were surveyed about their use of social media and its impact on Myanmar society and their own lives. More than half of the young people sampled were heavy smartphone users, particularly Rohingya Muslims, who rely on social media for contact with society outside of their camps. Facebook was by far the most used app; however, private group chat-based platforms were also popular. A majority said that they check the accuracy of news they read on social media; however, most do so by checking other Facebook pages or asking friends or family. Overall, most young people thought that social media has both a positive and a negative impact on their lives. This study adds to an emerging body of research into how Myanmar youth engage with social media, particularly regarding how they consume news of current events.
|
37
|
Detection of Hate Speech in COVID-19-Related Tweets in the Arab Region: Deep Learning and Topic Modeling Approach. J Med Internet Res 2020; 22:e22609. [PMID: 33207310 PMCID: PMC7725497 DOI: 10.2196/22609] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Revised: 10/10/2020] [Accepted: 11/16/2020] [Indexed: 01/20/2023] Open
Abstract
Background The massive scale of social media platforms requires an automatic solution for detecting hate speech. These automatic solutions will help reduce the need for manual analysis of content. Most previous literature has cast the hate speech detection problem as a supervised text classification task using classical machine learning methods or, more recently, deep learning methods. However, work investigating this problem in Arabic cyberspace is still limited compared to the published work on English text. Objective This study aims to identify hate speech related to the COVID-19 pandemic posted by Twitter users in the Arab region and to discover the main issues discussed in tweets containing hate speech. Methods We used the ArCOV-19 dataset, an ongoing collection of Arabic tweets related to COVID-19, starting from January 27, 2020. Tweets were analyzed for hate speech using a pretrained convolutional neural network (CNN) model; each tweet was given a score between 0 and 1, with 1 being the most hateful text. We also used nonnegative matrix factorization to discover the main issues and topics discussed in hate tweets. Results The analysis of Twitter data in the Arab region showed that the number of non-hate tweets greatly exceeded the number of hate tweets: hate tweets accounted for 3.2% (11,743/547,554) of COVID-19-related tweets. The analysis also revealed that the majority of hate tweets (8385/11,743, 71.4%) contained a low level of hate based on the score provided by the CNN. This study identified Saudi Arabia as the Arab country from which the most COVID-19 hate tweets originated during the pandemic. Furthermore, we showed that the largest number of hate tweets appeared during the period of March 1-30, 2020, representing 51.9% of all hate tweets (6095/11,743). Contrary to what was anticipated, the spread of COVID-19-related hate speech on Twitter in the Arab region was only weakly correlated with the dissemination of the pandemic (Pearson correlation coefficient r=0.1982, P=.50). The study also identified the commonly discussed topics in hate tweets during the pandemic. Analysis of the 7 extracted topics showed that 6 were related to hate speech against China and Iran. Arab users also discussed topics related to political conflicts in the Arab region during the COVID-19 pandemic. Conclusions The COVID-19 pandemic poses serious public health challenges to nations worldwide. During the COVID-19 pandemic, frequent use of social media can contribute to the spread of hate speech. Hate speech on the web can have a negative impact on society and may correlate directly with real hate crimes, which increases the threat of being targeted by hate speech and abusive language. This study is the first to analyze hate speech in the context of Arabic COVID-19-related tweets in the Arab region.
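The two quantitative steps described in this abstract, thresholding per-tweet CNN scores and correlating hate-tweet volume with the pandemic's spread, can be sketched in plain Python. The scores, weekly counts, and the 0.5 cut-off below are illustrative assumptions, not values from the study (the authors report graded hate levels rather than a single binary threshold):

```python
from math import sqrt

def hate_share(scores, threshold=0.5):
    """Fraction of tweets whose CNN hate score meets an assumed threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def pearson_r(x, y):
    """Pearson correlation coefficient, computed without external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-tweet scores and weekly hate-tweet vs. case counts
scores = [0.02, 0.91, 0.15, 0.64, 0.08, 0.77]
weekly_hate_tweets = [120, 340, 610, 480]
weekly_covid_cases = [15, 900, 4200, 8800]
share = hate_share(scores)  # 0.5 with these toy numbers
r = pearson_r(weekly_hate_tweets, weekly_covid_cases)
```

A weak coefficient, such as the r=0.1982 the authors report, would indicate that hate-tweet volume did not closely track case counts.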
|
38
|
Youth Exposure to Hate in the Online Space: An Exploratory Analysis. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17228531. [PMID: 33212999 PMCID: PMC7698507 DOI: 10.3390/ijerph17228531] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 11/09/2020] [Accepted: 11/11/2020] [Indexed: 11/17/2022]
Abstract
Today’s youth have extensive access to the internet and frequently engage in social networking activities using various social media platforms and devices. This is a phenomenon that hate groups are exploiting when disseminating their propaganda. This study seeks to better understand youth exposure to hateful material in the online space by exploring predictors of such exposure including demographic characteristics (age, gender, and race), academic performance, online behaviors, online disinhibition, risk perception, and parents/guardians’ supervision of online activities. We implemented a cross-sectional study design, using a paper questionnaire, in two high schools in Massachusetts (USA), focusing on students 14 to 19 years old. Logistic regression models were used to study the association between independent variables (demographics, online behaviors, risk perception, parental supervision) and exposure to hate online. Results revealed an association between exposure to hate messages in the online space and time spent online, academic performance, communicating with a stranger on social media, and benign online disinhibition. In our sample, benign online disinhibition was also associated with students’ risk of encountering someone online that tried to convince them of racist views. This study represents an important contribution to understanding youth’s risk factors of exposure to hateful material online.
|
39
|
Stigmatization in social media: Documenting and analyzing hate speech for COVID-19 on Twitter. PROCEEDINGS OF THE ASSOCIATION FOR INFORMATION SCIENCE AND TECHNOLOGY. ASSOCIATION FOR INFORMATION SCIENCE AND TECHNOLOGY 2020; 57:e313. [PMID: 33173820 PMCID: PMC7645876 DOI: 10.1002/pra2.313] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
As the COVID-19 pandemic has unfolded, Hate Speech on social media about China and Chinese people has encouraged social stigmatization. For historical and humanistic purposes, this history-in-the-making needs to be archived and analyzed. Using the query "china+and+coronavirus" to scrape from the Twitter API, we obtained 3,457,402 key tweets about China relating to COVID-19. In this archive, in which about 40% of the tweets are from the U.S., we identify 25,467 Hate Speech occurrences and analyze them according to lexicon-based emotions and demographics using machine learning and network methods. The results indicate substantial associations between the amount of Hate Speech, the sentiments expressed, and state demographic factors. Sentiments of surprise and fear associated with poverty and unemployment rates are prominent. This digital archive and the related analyses are therefore not simply historical: they play vital roles in raising public awareness and mitigating future crises. Consequently, we regard our research as a pilot study in methods of analysis that might be used by other researchers in various fields.
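The lexicon-based emotion analysis mentioned in this abstract amounts to counting how many of a tweet's words appear in each emotion's word list. The two-emotion lexicon below is a toy stand-in; an actual study would load a published resource such as the NRC Emotion Lexicon:

```python
# Toy lexicon; a real study would load a published emotion lexicon instead
LEXICON = {
    "fear": {"afraid", "threat", "outbreak", "virus"},
    "surprise": {"suddenly", "unexpected", "shock"},
}

def emotion_counts(text):
    """Count lexicon hits per emotion in a whitespace-tokenized text."""
    words = text.lower().split()
    return {emo: sum(w in vocab for w in words) for emo, vocab in LEXICON.items()}

counts = emotion_counts("The virus outbreak was an unexpected threat")
# {'fear': 3, 'surprise': 1}
```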
|
40
|
Are Online Haters Psychopaths? Psychological Predictors of Online Hating Behavior. Front Psychol 2020; 11:553. [PMID: 32292374 PMCID: PMC7121332 DOI: 10.3389/fpsyg.2020.00553] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2019] [Accepted: 03/09/2020] [Indexed: 01/07/2023] Open
Abstract
Despite the growing prevalence of derogatory online behaviors, little is still known about the psychological factors underlying this negative phenomenon. In the present study, we aimed to compare the characteristics of persons who posted hating and non-hating comments on the Internet about Polish sports players during the 2018 Winter Olympic Games in Pyeongchang. Ninety-four Internet users (41% women) participated in the study, of whom 46 posted hating comments. After 1 month, participants were invited to take part in a psychological survey and completed the Dark Triad questionnaire, the Satisfaction with Life Scale, the Scale of Frustration, and the Scale of Envy. Results showed that high scores on the Psychopathy subscale were significant predictors of posting hating comments online; high scores on the Envy Scale were marginally significant. Our findings provide initial evidence that persons who engage in derogatory online behavior have a high level of Psychopathy but, contrary to previous studies, do not have elevated levels of other traits commonly associated with disruptive behavior. Our research is among the first to establish a psychological profile of online haters while drawing a clear line between online hating and other derogatory online behaviors (e.g., trolling, cyberbullying, or hate speech).
|
41
|
The Datafication of Hate: Expectations and Challenges in Automated Hate Speech Monitoring. Front Big Data 2020; 3:3. [PMID: 33693378 PMCID: PMC7931925 DOI: 10.3389/fdata.2020.00003] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Accepted: 01/14/2020] [Indexed: 11/22/2022] Open
Abstract
Hate speech has been identified as a pressing problem in society, and several automated approaches have been designed to detect and prevent it. This paper reports and reflects upon an action research setting, a multi-organizational collaboration conducted during the 2017 Finnish municipal elections, in which a technical infrastructure was designed to automatically monitor candidates' social media updates for hate speech. The setting allowed us to engage in a two-fold investigation. First, the collaboration offered a unique view for exploring how hate speech emerges as a technical problem. The project developed an adequately performing algorithmic solution using supervised machine learning: we tested various feature extraction and machine learning methods and ended up combining Bag-of-Words feature extraction with Support-Vector Machines. However, the automated approach required heavy simplification, such as rudimentary scales for classifying hate speech and a reliance on word-based approaches, while in reality hate speech is a linguistic and social phenomenon with various tones and forms. Second, the action-research-oriented setting allowed us to observe affective responses, such as the hopes, dreams, and fears related to machine learning technology. Based on participatory observations, project artifacts and documents, interviews with project participants, and online reactions to the detection project, we identified participants' aspirations for effective automation and for the neutrality and objectivity an algorithmic system was expected to introduce. However, the participants expressed more critical views toward the system after the monitoring process. Our findings highlight how powerful expectations related to technology can easily end up dominating a project dealing with a contested, topical social issue. We conclude by discussing the problematic aspects of datafying hate and suggesting some practical implications for hate speech recognition.
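The Bag-of-Words representation this project settled on can be illustrated without any machine learning library: each text becomes a vector of word counts over a shared vocabulary, which would then be fed to a Support-Vector Machine (for instance, scikit-learn's LinearSVC). The whitespace tokenization and tiny corpus here are deliberate simplifications:

```python
def bag_of_words(docs):
    """Minimal Bag-of-Words: returns the sorted vocabulary and one
    count vector per document, indexed in vocabulary order."""
    vocab = sorted({w for doc in docs for w in doc.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for doc in docs:
        vec = [0] * len(vocab)
        for w in doc.lower().split():
            vec[index[w]] += 1
        vectors.append(vec)
    return vocab, vectors

vocab, X = bag_of_words(["go back home", "welcome home"])
# vocab: ['back', 'go', 'home', 'welcome']; X: [[1, 1, 1, 0], [0, 0, 1, 1]]
```

As the abstract notes, such word-level features cannot represent tone or context, which is exactly the simplification the authors found problematic.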
|
42
|
Associations between Witnessing and Perpetrating Online Hate in Eight Countries: The Buffering Effects of Problem-Focused Coping. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2019; 16:ijerph16203992. [PMID: 31635408 PMCID: PMC6843940 DOI: 10.3390/ijerph16203992] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Revised: 10/14/2019] [Accepted: 10/17/2019] [Indexed: 11/17/2022]
Abstract
Online hate is a topic that has received considerable interest lately, as it represents a risk to self-determination and peaceful coexistence in societies around the globe. However, not much is known about why adolescents post or forward hateful online material, or how they cope with this newly emerging online risk. Thus, we sought to better understand the relationship between being a bystander to and being a perpetrator of online hate, and the moderating effects of problem-focused coping strategies (e.g., assertive and technical coping) within this relationship. Self-report questionnaires on witnessing and committing online hate and on assertive and technical coping were completed by 6829 adolescents between 12 and 18 years of age from eight countries. The results showed that witnessing online hate was positively related to perpetrating online hate. Assertive and technical coping strategies were negatively related to perpetrating online hate. Bystanders of online hate reported fewer instances of perpetrating online hate when they reported higher levels of assertive and technical coping strategies, and more frequent instances when they reported lower levels of these strategies. In conclusion, our findings suggest that, to be effective, prevention and intervention programs that target online hate should consider educating young people about problem-focused coping strategies, self-assertiveness, and media skills. Implications for future research are discussed.
|
43
|
The Moderation of Online Disinhibition and Sex on the Relationship Between Online Hate Victimization and Perpetration. CYBERPSYCHOLOGY BEHAVIOR AND SOCIAL NETWORKING 2019; 22:300-306. [PMID: 30945942 DOI: 10.1089/cyber.2018.0551] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Online hatred based on attributes such as origin, race, gender, religion, or sexual orientation has become a rising public concern across the world. Past research on aggressive behavior suggests strong associations between victimization and perpetration, and that toxic online disinhibition and sex might influence this relationship. However, no study has investigated both the relationship between online hate victimization and perpetration and the potential moderating effects of toxic online disinhibition on this relationship. To this end, the present study was conducted. The sample consists of 1,480 7th to 10th graders from Germany. Results revealed positive associations between online hate victimization and perpetration. Furthermore, the results support the idea that toxic online disinhibition and sex, by way of moderator effects, affect the relationship between online hate victimization and perpetration. Victims of online hate reported more online hate perpetration when they reported higher levels of toxic online disinhibition and less frequent online hate perpetration when they reported lower levels of toxic online disinhibition. Additionally, the relationship between online hate victimization and perpetration was significantly greater among boys than among girls. Taken together, our results extend previous findings to online hate involvement among adolescents and substantiate the importance of conducting more research on online hate. In addition, our findings highlight the need for prevention and intervention programs that help adolescents deal with the emerging issue of online hate.
|
44
|
Associations between Bystanders and Perpetrators of Online Hate: The Moderating Role of Toxic Online Disinhibition. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2018; 15:ijerph15092030. [PMID: 30227666 PMCID: PMC6163978 DOI: 10.3390/ijerph15092030] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/29/2018] [Revised: 09/12/2018] [Accepted: 09/15/2018] [Indexed: 11/16/2022]
Abstract
Hatred directed at members of groups due to their origin, race, gender, religion, or sexual orientation is not new, but it has taken on a new dimension in the online world. To date, very little is known about online hate among adolescents. It is also unknown how online disinhibition might influence the association between being bystanders and being perpetrators of online hate. Thus, the present study focused on examining the associations among being bystanders of online hate, being perpetrators of online hate, and the moderating role of toxic online disinhibition in the relationship between being bystanders and perpetrators of online hate. In total, 1480 students aged between 12 and 17 years old were included in this study. Results revealed positive associations between being online hate bystanders and perpetrators, regardless of whether adolescents had or had not been victims of online hate themselves. The results also showed an association between toxic online disinhibition and online hate perpetration. Further, toxic online disinhibition moderated the relationship between being bystanders of online hate and being perpetrators of online hate. Implications for prevention programs and future research are discussed.
|
45
|
Who Decides What Is Acceptable Speech on Campus? Why Restricting Free Speech Is Not the Answer. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2018; 13:299-323. [PMID: 29716456 DOI: 10.1177/1745691618767324] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Recent protests on dozens of campuses have led to the cancellation of controversial talks, and violence has accompanied several of these protests. Psychological science provides an important lens through which to view, understand, and potentially reduce these conflicts. In this article, we frame opposing sides' arguments within a long-standing corpus of psychological research on selective perception, confirmation bias, myside bias, illusion of understanding, blind-spot bias, groupthink/in-group bias, motivated skepticism, and naive realism. These concepts inform dueling claims: (a) the protestors' violence was justified by a higher moral responsibility to prevent marginalized groups from being victimized by hate speech, versus (b) the students' right to hear speakers was infringed upon. Psychological science cannot, however, be the sole arbiter of these campus debates; legal and philosophical considerations are also relevant. Thus, we augment psychological science with insights from these literatures to shed light on complexities associated with positions supporting free speech and those protesting hate speech. We conclude with a set of principles, most supported by empirical research, to inform university policies and help ensure vigorous freedom of expression within the context of an inclusive, diverse community.
|
46
|
Exposure to hate speech increases prejudice through desensitization. Aggress Behav 2018; 44:136-146. [PMID: 29094365 DOI: 10.1002/ab.21737] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2016] [Revised: 09/06/2017] [Accepted: 09/07/2017] [Indexed: 11/07/2022]
Abstract
In three studies (two representative nationwide surveys, N = 1,007, N = 682; and one experimental, N = 76) we explored the effects of exposure to hate speech on outgroup prejudice. Following the General Aggression Model, we suggest that frequent and repetitive exposure to hate speech leads to desensitization to this form of verbal violence and subsequently to lower evaluations of the victims and greater distancing, thus increasing outgroup prejudice. In the first survey study, we found that lower sensitivity to hate speech was a positive mediator of the relationship between frequent exposure to hate speech and outgroup prejudice. In the second study, we obtained a crucial confirmation of these effects. After desensitization training individuals were less sensitive to hate speech and more prejudiced toward hate speech victims than their counterparts in the control condition. In the final study, we replicated several previous effects and additionally found that the effects of exposure to hate speech on prejudice were mediated by a lower sensitivity to hate speech, and not by lower sensitivity to social norms. Altogether, our studies are the first to elucidate the effects of exposure to hate speech on outgroup prejudice.
|
47
|
Dehumanization: its operations and its origins. JOURNAL OF LAW AND THE BIOSCIENCES 2016; 3:178-184. [PMID: 27774239 PMCID: PMC5033426 DOI: 10.1093/jlb/lsv040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Murrow and Murrow offer a novel account of dehumanization, by synthesizing data which suggest that where subject S has a dehumanized view of group G, S's neural mechanisms of empathy show a dampened response to the suffering of members of G, and S's judgments about the humanity of members of G are largely non-conscious. Here I examine Murrow and Murrow's suggestions about how identity-based hate speech bears responsibility for dehumanization in the first place. I identify a distinction between (i) accounts of the nature of the harm effected by identity prejudice, and (ii) accounts of how hate speech contributes to the harms of identity prejudice. I then explain why Murrow and Murrow's proposal is more aptly construed as an account of type (i), and explain why accounts of this type, even if they're plausible and evidentially well-supported, have limited implications in relation to justifications for anti-hate speech law.
|
48
|
A valid question: Could hate speech condition bias in the brain? JOURNAL OF LAW AND THE BIOSCIENCES 2016; 3:196-201. [PMID: 27774242 PMCID: PMC5033431 DOI: 10.1093/jlb/lsw009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
|
49
|
Us and them: identifying cyber hate on Twitter across multiple protected characteristics. EPJ DATA SCIENCE 2016; 5:11. [PMID: 32355598 PMCID: PMC7175598 DOI: 10.1140/epjds/s13688-016-0072-6] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/22/2015] [Accepted: 03/15/2016] [Indexed: 06/11/2023]
Abstract
Hateful and antagonistic content published and propagated via the World Wide Web has the potential to cause harm and suffering on an individual basis, and to lead to social tension and disorder beyond cyber space. Despite new legislation aimed at prosecuting those who misuse new forms of communication to post threatening, harassing, or grossly offensive language - or cyber hate - and the fact that large social media companies have committed to protecting their users from harm, such content goes largely unpunished due to difficulties in policing online public spaces. To support the automatic detection of cyber hate online, specifically on Twitter, we build multiple individual models to classify cyber hate for a range of protected characteristics including race, disability and sexual orientation. We use text parsing to extract typed dependencies, which represent syntactic and grammatical relationships between words, and are shown to capture 'othering' language - consistently improving machine classification for different types of cyber hate beyond the use of a Bag of Words and known hateful terms. Furthermore, we build a data-driven blended model of cyber hate to improve classification where more than one protected characteristic may be attacked (e.g. race and sexual orientation), contributing to the nascent study of intersectionality in hate crime.
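A typed dependency pairs a grammatical relation with a head word and a dependent word, for example dobj(send, them), and such relations are commonly flattened into feature strings used alongside Bag-of-Words terms. The sketch below builds features from hand-written triples standing in for a dependency parser's output; the paper's exact parser and feature encoding are assumptions here:

```python
def dependency_features(triples):
    """Flatten (relation, head, dependent) triples into feature strings,
    e.g. ('dobj', 'send', 'them') -> 'dobj_send_them'."""
    return [f"{rel}_{head}_{dep}" for rel, head, dep in triples]

# Hand-written triples for the phrase "send them back", illustrating the
# 'othering' us-vs-them constructions that typed dependencies can capture
triples = [("dobj", "send", "them"), ("advmod", "send", "back")]
features = dependency_features(triples)
# ['dobj_send_them', 'advmod_send_back']
```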
|
50
|
A hypothetical neurological association between dehumanization and human rights abuses. JOURNAL OF LAW AND THE BIOSCIENCES 2015; 2:336-364. [PMID: 27774198 PMCID: PMC5034371 DOI: 10.1093/jlb/lsv015] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Dehumanization is anecdotally and historically associated with reduced empathy for the pain of dehumanized individuals and groups and with psychological and legal denial of their human rights and extreme violence against them. We hypothesize that 'empathy' for the pain and suffering of dehumanized social groups is automatically reduced because, as the research we review suggests, an individual's neural mechanisms of pain empathy best respond to (or produce empathy for) the pain of people whom the individual automatically or implicitly associates with her or his own species. This theory has implications for the philosophical conception of 'human' and of 'legal personhood' in human rights jurisprudence. It further has implications for First Amendment free speech jurisprudence, including the doctrine of 'corporate personhood' and consideration of the potential harm caused by dehumanizing hate speech. We suggest that the new, social neuroscience of empathy provides evidence that both the vagaries of the legal definition or legal fiction of 'personhood' and hate speech that explicitly and implicitly dehumanizes may (in their respective capacities to artificially humanize or dehumanize) manipulate the neural mechanisms of pain empathy in ways that could pose more of a true threat to human rights and rights-based democracy than previously appreciated.
|