On April 25, following several weeks of speculation, Twitter announced that it had reached an agreement to sell the company to Tesla CEO and multi-billionaire Elon Musk. In mid-April, Musk made public his desire to acquire Twitter, make it a private company and overhaul its moderation policies.

Citing ideals of free speech, Musk claimed that “Twitter has become kind of the de facto town square, so it is just really important that people have the, both the reality and the perception that they are able to speak freely within the bounds of the law.”

While making Twitter free for all “within the bounds of the law” may seem to ensure free speech in theory, in practice it would suppress the speech of Twitter’s most vulnerable users.

CBC’s 'The National' looks at Elon Musk’s attempt at a hostile takeover of Twitter.

My team’s research into online harassment shows that when platforms fail to moderate effectively, the most marginalised people may withdraw from posting to social media as a way to keep themselves safe.

Withdrawal responses

In various research projects since 2018, we have interviewed scholars who have experienced online harassment, surveyed academics about their experiences with harassment, conducted in-depth reviews of literature detailing how knowledge workers experience online harassment, and reached out to institutions that employ knowledge workers who experience online harassment.

Overwhelmingly, throughout our various projects, we have noticed some common themes:

  • Individuals are targeted for online harassment on platforms like Twitter simply because they are women or members of a minority group (racialised, gender non-conforming, disabled or otherwise marginalised). The topics people post about matter less than their identities in predicting the intensity of online harassment people are subjected to.
  • Men who experience online harassment often face a different type of harassment than women or marginalised people. Women, for example, tend to experience more sexualised harassment, such as rape threats.
  • When people experience harassment, they seek support from their organisations, social media platforms and law enforcement, but often find the support they receive is insufficient.
  • When people do not receive adequate support from their organisations, social media platforms and law enforcement, they adopt strategies to protect themselves, including withdrawing from social media.

This last point is important, because our data shows that there is a very real risk of losing ideas in the unmoderated Twitter space that Musk said he wants to build in the name of free speech.

In other words, what Musk is proposing would likely make speech on Twitter less free than it is now, because people who cannot rely on social media platforms to protect them from online harassment tend to leave the platform when its consequences become psychologically or socially destructive.

Arenas for debate

Political economist John Stuart Mill famously wrote about the marketplace of ideas, suggesting that in an environment where ideas can be debated, the best ones will rise to the top. This is often used to justify the view that social media platforms like Twitter should do away with moderation in order to encourage constructive debate.

This implies that bad ideas will be weeded out by a sort of invisible hand, with people sharing and engaging only with the best content on Twitter, and toxic content a small price to pay for a thriving online public sphere.

The assumption that good ideas will edge out the bad ones runs counter both to Mill’s original writing and to the actual lived experience of posting to social media for people in minority groups.

Mill advocated that minority ideas be given artificial preference in order to encourage constructive debate on a wide range of topics in the public interest. Importantly, this means that moderation of online harassment is key to a functioning marketplace of ideas.

Regulation of harassment

The idea that we need some sort of online regulation of harassing speech is borne out by our research. Our research participants repeatedly told us that the consequences of online harassment were extremely damaging. These consequences ranged from burnout or inability to complete their work, to emotional and psychological trauma, or even social isolation.

When targets of harassment experienced these outcomes, they often also experienced economic impacts, such as stalled career progression after being unable to complete their work. Many of our participants tried reporting the harassment to social media platforms. If the support they received from the platform was dismissive or unhelpful, they were less likely to engage in the future.

When people disengage from Twitter due to widespread harassment, we lose those voices from the very online public sphere that Musk says he wants to foster. In practice, this means that women and marginalised groups are most likely to be the people who are excluded from Musk’s free speech playground.

Given that our research participants have told us that they already feel Twitter’s approach to online harassment is limited at best, I would suggest that if we really want a marketplace of ideas on Twitter, we need more moderation, not less. For this reason, I am happy that the Twitter Board of Directors is attempting to resist Musk’s hostile takeover.

Jaigris Hodson is Associate Professor of Interdisciplinary Studies at Royal Roads University.

This article first appeared on The Conversation.