Academics at Royal Holloway have published a new report showing that hate speech remains prevalent on unregulated but widely popular platforms.

While mainstream sites have taken steps to moderate harmful content, less regulated platforms, such as 4chan, continue to foster a murky community of online hate speech.
The 4chan platform is a Japanese-owned, image-based bulletin board used worldwide, where people can post comments and share images, with upwards of 22 million unique visitors a month.
It is infamous for its anonymous nature and minimal moderation, making it a unique case study for analysing hate speech.
Platforms such as these remain breeding grounds for extreme ideologies, with the latest study delving into the troubling prevalence of hate speech on 4chan’s politically incorrect discussion board, known as /pol/, using cutting-edge machine learning and deep learning techniques.
Posts on 4chan also include various forms of hate, including hate directed at women and girls.
The disturbing findings revealed that 11.2% of posts contained hate speech, targeting various communities based on race, religion, gender, and sexual orientation. Among the different categories:
- Racism was the most prevalent, making up 35.9% of hateful posts.
- Religious hate followed closely, accounting for 23.3%.
- Sexual orientation hate comprised 16.5%.
- Sexism was found in 12% of hateful discussions.
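Prevalence figures like these are, at heart, proportions over labelled posts. A minimal sketch of that aggregation, using hypothetical labels rather than the study’s actual data:

```python
from collections import Counter

# Hypothetical per-post labels: None means no hate speech detected.
labels = [
    None, "racism", None, "religion", None, None,
    "racism", "sexual_orientation", None, "sexism",
]

hateful = [label for label in labels if label is not None]
prevalence = len(hateful) / len(labels)  # share of all posts flagged as hateful
breakdown = Counter(hateful)             # counts per hate category

print(f"hate speech prevalence: {prevalence:.1%}")
for category, count in breakdown.most_common():
    print(f"{category}: {count / len(hateful):.1%} of hateful posts")
```

In the study itself, the labels came from pre-trained classifiers rather than manual annotation, but the reported percentages are proportions of exactly this kind.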
Using topic modelling techniques, the researchers identified recurring themes within these hateful discussions.
Racial hate speech often incorporated conspiracy theories and dehumanising rhetoric. Religious hate was largely directed at Jewish and Muslim communities, filled with stereotypes and aggressive language. Meanwhile, sexism on /pol/ displayed strong misogynistic tendencies, reducing women to derogatory labels and objectification.
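The paper’s own topic-modelling code is not reproduced here; the sketch below illustrates the general technique (Latent Dirichlet Allocation over a bag-of-words matrix, via scikit-learn) on a few innocuous stand-in documents:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Stand-in corpus; the study analysed roughly half a million /pol/ posts.
docs = [
    "election polls voting results government",
    "voting government election campaign polls",
    "football match goals league season",
    "league season football goals referee",
]

# Build a bag-of-words matrix, then fit a 2-topic LDA model.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# The top-weighted words per topic are the "recurring themes".
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-3:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
</```

The number of topics, the vectoriser, and the corpus here are all assumptions for illustration; the published paper should be consulted for the researchers’ actual configuration.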
The research, funded by the AGENCY Project, employed state-of-the-art Natural Language Processing (NLP) models, including RoBERTa and Detoxify, to measure the extent and types of hate speech found on the /pol/ board.
With a dataset of half a million posts collected over time, the study aimed to quantify the scale of harmful discourse and uncover its hidden patterns.
The key objectives included measuring the prevalence of different forms of hate speech on /pol/, assessing the toxicity of discussions and their impact on digital safety, and identifying recurring topics within hate speech to better understand its context.
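The study scored posts with pre-trained transformer models (RoBERTa, Detoxify). As a lightweight, runnable stand-in for that score-then-threshold workflow, the sketch below trains a toy TF-IDF and logistic-regression classifier on a handful of invented examples; it is not the authors’ pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data; the study used pre-trained models, not training from scratch.
texts = [
    "you are all wonderful people", "great discussion everyone",
    "I hate you and your kind", "those people are subhuman garbage",
]
labels = [0, 0, 1, 1]  # 1 = hateful/toxic

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Score unseen posts and flag those above a toxicity threshold.
posts = ["what a lovely day", "I hate those subhuman people"]
scores = clf.predict_proba(posts)[:, 1]
flagged = [post for post, score in zip(posts, scores) if score > 0.5]
print(flagged)
```

The 0.5 threshold is an assumption for illustration; in practice, thresholds for flagging toxic content are tuned against labelled evaluation data.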
Dr Adrian Bermudez-Villalva, lead author of the paper, from the Department of Information Security at Royal Holloway, said: “The findings from our study expose the dangers of unmoderated digital spaces and their role in spreading hate speech.
“While freedom of expression is a fundamental right, it must be balanced against the risks of online harm.
“We found discussions involving sexual orientation and racism were among the most toxic, with nearly 99% of flagged posts classified as highly offensive.”
Dr Maryam Mehrnezhad, co-author and Reader in the Department of Information Security at Royal Holloway, added: “As part of the AGENCY Project’s ongoing mission, we aim to develop actionable solutions that empower individuals to navigate online spaces safely while holding platforms accountable for fostering inclusive digital environments.”
The report highlights the complexity of online hate, where discussions are not limited to overt slurs but extend into coded language, political rhetoric, and ideological indoctrination.
The study has profound implications for policymakers, platform developers, and researchers. Policymakers need to understand the nature and prevalence of online hate to craft more effective regulations, such as the UK Online Safety Act and EU content moderation policies.
For tech developers, the findings stress the importance of advanced AI-driven moderation tools that can detect not only explicit hate speech but also nuanced and coded language. Society as a whole also needs digital literacy initiatives to educate people on identifying and countering online hate speech.
The published paper, ‘Measuring Online Hate on 4chan using Pre-trained Deep Learning Models’, appears in IEEE Transactions on Technology and Society, 2025.