On 6 January, two weeks before the new US President was to be sworn in, a violent mob stormed the US Capitol while electoral college votes were being counted to confirm President-elect Joe Biden’s victory in the 2020 US presidential election. Five people died, and dozens were injured in the melee. That same day, several social media platforms – Facebook, Instagram, and Snapchat – temporarily suspended sitting President Trump’s accounts. On 8 January, the President posted two tweets calling his voters ‘patriots’ while announcing that he would not be attending Biden’s inauguration. Trump was then permanently suspended from his major social media platforms, including his favourite, Twitter. His loyal supporters flocked to Parler – a Twitter rival favoured by the alt-right – which was subsequently removed from the cloud by Amazon Web Services, as well as from the Google and Apple app stores.
On one hand, the move by the big tech titans seems appropriate. Trump’s comments incited the mob, which in turn resulted in rioting and the loss of life at the heart of the US Capitol. The likes of Twitter and Facebook were right to censor him and put a stop to the incitement of further chaos. On the other hand, the move has prompted discussion of the sheer power that big tech companies wield: they were able, in effect, to de-platform a sitting, elected US President. As the American Civil Liberties Union (ACLU) pointed out, Trump could turn to news channels that have historically favoured him, such as Fox News, to communicate with the public; that luxury isn’t available to everyone.
We must consider carefully whether big tech companies should be at the forefront of the defence against hate speech, or whether this move presents a perilous affront to free speech. We must also determine how best to prevent further incitement and avoid a repeat of what happened in Washington DC on 6 January.
Concerns have arisen that social media platforms can, in effect, police the internet by banning individuals from what has become the modern public sphere of debate. We must not immediately dismiss comparisons to state censorship in more authoritarian nation-states. We are all familiar with the lack of free speech in such countries, but we seldom stop to consider whether the West is veering towards a similar slippery slope. Nor can we ignore the fact that in 2020 internet firms spent $59 million lobbying Capitol Hill. Though a section of US Democrats is focused on bringing antitrust action against big tech firms, how should governments ensure that big tech lives up to its responsibilities and polices speech fairly?
A high-profile objection to President Trump’s de-platforming came from German Chancellor Angela Merkel, who criticised Twitter’s actions as a breach of the right to freedom of expression. She argued that the US government should not leave regulation to social media platforms; limits on online incitement should instead be set by legislation. In 2018, Germany introduced some of the Western world’s toughest restrictions on online content. The Network Enforcement Act requires sites to delete potential hate speech within 24 hours of being notified; if they do not, they face fines of up to €50 million. This contrasts with the US, where social media platforms are protected under Section 230 of the Communications Decency Act, which affords them immunity for content generated by users.
This raises the question of who determines what qualifies as hate speech. By leaving that decision to legislators, we merely pass oversight to the government, which in turn raises a further question: who regulates the regulators?
This may not even be a viable solution. The US Constitution’s First Amendment explicitly prevents Congress from passing laws that abridge the freedom of speech. The very reason tech companies can skirt this principle as things stand is that they are private companies. Indeed, there have been lawsuits in the US where users argued that they have a right to free speech on such platforms, but users have always lost these cases. Recent proposals by Democrats to repeal Section 230 follow the usual norm of legislative action by democratically elected representatives, thereby taking away the arbitrary power over speech that tech firms currently enjoy.
The US should mirror aspects of its approach to overseeing financial services by using transparency as a regulatory tool. Big tech firms should be required to report openly and clearly on how and why their codes of conduct lead to certain posts being removed, and those policies should be applied without favour. That way, neither big tech nor the government has the opportunity to overstep, and the public can see that there is a fair system for filtering out hate speech or any posts that instigate violence. Just as President Trump was given his chances, big tech should have the opportunity to remain in control, so long as it listens carefully to concerns.
We should rid 21st-century political discourse of hate speech. But how do we do that without risking tech companies or the government going too far? One thing is clear – we must strive for a solution that upholds free speech whilst protecting against the incitement of violence. European-style regulation frankly won’t cut it. Regulators need to bear in mind the US legal landscape and borrow from the rules governing financial services to curb hate speech, whilst not overreaching into the fundamental right to free speech.
Opinion by Harrison McQueen and Yashshri Soman