Can Social Media Algorithms be Regulated to Prevent Online Radicalization in Developing Countries?

We are excited to launch our Job Market Paper Series blog for 2024-2025, beginning with our very first blog post by Aarushi Kalra.

 Aarushi Kalra is a Ph.D. candidate in Economics at Brown University.

As social media algorithms continue to shape user engagement, the question of regulating these algorithms to limit the spread of harmful content is more pressing than ever. This blog summarizes my study of a popular multilingual social media platform with 200 million users in India, shedding light on whether such interventions can truly curb the spread of harmful narratives, and whether platforms would willingly adopt them. The question is crucial because the spread of misinformation and hate speech has been linked to real-world violence against minorities, particularly in India, which is the focus of my job market paper.

Understanding the Experiment: Testing Algorithm-Free Feeds

To answer these questions, my paper, “Hate in the Time of Algorithms: Evidence from a Large Scale Experiment on Online Behavior,” reports on a novel large-scale experiment that replaced algorithm-driven feeds with random content for one million treated users. For eleven months, the feed-ranking algorithm was disabled for these users, who instead received content selected at random from the platform’s vast library. This intervention makes it possible to measure the causal effect of algorithmic curation on online engagement, with a focus on interactions with harmful content.
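For readers who want a concrete picture of the design, here is a minimal sketch in Python of what switching a treated user from a ranked feed to a random feed might look like. All names here (the `is_treated` flag, the `ranker` object and its `top_posts` method) are hypothetical stand-ins, not the platform’s actual serving code.

```python
import random

def assign_treatment(user_ids, n_treated=1_000_000, seed=42):
    """Hypothetical randomization step: pick roughly one million treated users."""
    rng = random.Random(seed)
    treated = set(rng.sample(list(user_ids), n_treated))
    return {uid: uid in treated for uid in user_ids}

def build_feed(user, post_library, ranker, feed_size=50):
    """Serve random posts to treated users, ranked posts to control users."""
    if user.is_treated:
        # Treatment arm: the feed-ranking algorithm is switched off and posts
        # are drawn uniformly at random from the platform's content library.
        return random.sample(post_library, feed_size)
    # Control arm: the usual engagement-optimizing ranker curates the feed.
    return ranker.top_posts(user, post_library, k=feed_size)
```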

The main outcomes I analyze include time spent on the platform, the total number of posts viewed and shared off the platform, and the number of “toxic” posts engaged with. I define toxicity using Google’s Perspective API, an algorithm tailored to measure the harm a post can potentially cause to vulnerable groups. Perspective API is used by organizations like the New York Times to filter out abusive comments; it defines a toxic comment as a “rude, disrespectful, or unreasonable comment that is likely to make someone leave a discussion.” In particular, I examine the intersection of toxic and political posts that verbally attack India’s Muslim minority.
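As an illustration, this is roughly how a post’s text can be scored with the Perspective API. The endpoint and request/response fields follow Google’s public documentation; the helper name and the 0.5 cutoff mentioned in the comment are purely illustrative and not the paper’s exact classification rule.

```python
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text, api_key, lang="en"):
    """Return the Perspective API TOXICITY summary score (0 to 1) for `text`."""
    payload = {
        "comment": {"text": text},
        "languages": [lang],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=payload)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A post could then be labeled "toxic" by thresholding the score, e.g.
# toxicity_score(post_text, API_KEY) > 0.5  (the threshold is illustrative).
```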

Key Findings: Reduced Exposure but Mixed Results on User Behavior

The results were significant but paint a nuanced picture of the regulatory framework that would be required:

  • Reduced Exposure to Toxic Content: The random feed led to a 27% drop in exposure to toxic content. Users, especially those who previously consumed high levels of such content, viewed fewer toxic posts.
  • Low Elasticity of Toxic Sharing: Strikingly, the impact of the policy is blunted because the corresponding decrease in the total number of toxic posts shared is less than proportionate. In fact, I find that users share a larger proportion of the toxic posts they view after the algorithm is shut off.
Figure 1: Evidence on Inelasticity in Toxic Sharing and Seeking Out Behavior, by User Type. Notes: The axis corresponding to the bottom plots shows the magnitude of the treatment effects (as coefficient plots), while the top panel is scaled according to the control mean of the outcomes for each quantile. All regressions are run at the user level with robust standard errors.
  • Engagement Decline: User engagement decreased by 35%, a substantial blow to the platform’s revenue model, which relies on high user activity. This suggests that profit-focused platforms may be reluctant to adopt such interventions independently.
Figure 2: Treatment Effects on Viewing Behavior, by User Type. Notes: This figure shows how the total number of posts viewed changes with treatment status and user type. In fact, the treatment effect on the total number of posts viewed is larger (in absolute terms) for more toxic users, due to lower exposure to toxic content and the disengagement effect. The axis corresponding to the bottom plots shows the magnitude of the treatment effects (as coefficient plots), while the top panel is scaled according to the control mean of the outcomes for each quantile. All regressions are run at the user level with robust standard errors.

Therefore, user behavior blunted the intended effect of the policy, namely reducing engagement with toxic content. While exposure to toxic content dropped, the decrease in sharing was less pronounced, with many users actively seeking out and sharing toxic content at a higher rate relative to what they saw. Users with a higher interest in toxic content at baseline drive this result, as they seek out content the algorithm no longer shows them. This indicates that users are not passive recipients but exercise agency, gravitating towards content aligned with their interests even when it is harder to find.
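The figure notes above refer to user-level regressions with robust standard errors. Below is a minimal sketch of that kind of comparison using `statsmodels` on a toy, made-up dataset (the numbers are illustrative only, not the study’s data): scaling the treatment coefficient for toxic posts shared by its control mean, and comparing it with the analogous scaled coefficient for toxic posts viewed, shows whether sharing falls less than proportionately than viewing.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical user-level dataset: one row per user with a treatment dummy and
# counts of toxic posts viewed and shared. Values are made up for illustration.
df = pd.DataFrame({
    "treated":      [1, 0, 1, 0, 1, 0, 1, 0],
    "toxic_viewed": [12, 30, 8, 25, 20, 40, 5, 18],
    "toxic_shared": [4, 6, 3, 5, 7, 9, 2, 4],
})

# Treatment effects estimated at the user level with heteroskedasticity-robust SEs.
views_fit = smf.ols("toxic_viewed ~ treated", data=df).fit(cov_type="HC1")
shares_fit = smf.ols("toxic_shared ~ treated", data=df).fit(cov_type="HC1")

# Compare each coefficient relative to its control-group mean: if the scaled
# drop in sharing is smaller than the scaled drop in viewing, sharing is
# inelastic with respect to exposure.
control_means = df[df.treated == 0][["toxic_viewed", "toxic_shared"]].mean()
print(views_fit.params["treated"] / control_means["toxic_viewed"])
print(shares_fit.params["treated"] / control_means["toxic_shared"])
```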

Implications for Policymakers

This study provides several critical takeaways. First, there is reluctance to self-regulate: platforms may resist adopting diversified feeds because of the large financial losses from running the intervention for even one month. Second, behavioral targeting is key: interventions aimed at users predisposed to sharing toxic content could be more effective at reducing harmful engagement, especially because such users reduced the time they spent on the platform when treated. Third, blanket and piecemeal regulations have limits: because users can circumvent algorithmic content delivery, regulations that focus solely on algorithms may not be enough. Survey evidence shows that the subset of users driving the main effects of the intervention were more likely to report spending more time on other, similar platforms, which makes cross-platform regulation necessary.

A Balanced Solution? Combining Diversified and Personalized Feeds

Together with my behavioral model that enables simulation of counterfactual policies, the findings suggest that a balanced approach might be possible: blending random content with some personalization. Such a hybrid feed could mitigate toxic exposure without drastically reducing engagement, offering a compromise between public welfare and platform profitability.
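A minimal sketch of such a hybrid feed is below, reusing the hypothetical `ranker` interface from the earlier sketch. The 30% random share is an arbitrary illustrative value, not the mix implied by the behavioral model.

```python
import random

def hybrid_feed(user, post_library, ranker, feed_size=50, share_random=0.3):
    """Blend personalized and randomly drawn content in a single feed.

    `share_random` is the fraction of feed slots filled with random posts; the
    remaining slots come from the usual ranker. All names and the 0.3 value
    are illustrative, not the paper's calibrated policy.
    """
    n_random = int(round(share_random * feed_size))
    random_part = random.sample(post_library, n_random)
    ranked_part = ranker.top_posts(user, post_library, k=feed_size - n_random)
    feed = random_part + ranked_part
    random.shuffle(feed)  # interleave so random posts are not bunched together
    return feed
```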

Toward a Multi-Pronged Strategy

While this experiment provides valuable insights, long-term solutions require a broader approach. Educational initiatives, media literacy programs, and efforts to promote critical thinking must accompany any algorithmic interventions to effectively counteract online radicalization.

Conclusion

This study highlights the challenges in regulating social media algorithms while emphasizing the need for policies that consider both user agency and platform interests. Future research should continue exploring cross-platform impacts and the role of diversified content feeds in combating online harm in the long run.


