Regulating Freedom of Speech on Social Media
Freedom of speech in the UK and US is a constitutional and human right, enshrined in Article 10 of the European Convention on Human Rights (incorporated into UK law through the Human Rights Act 1998) and the First Amendment of the US Constitution. However, this is caveated (in UK law at least) as subject to ‘such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society’. In our daily lives, our freedom of speech is constrained by laws prohibiting hate speech and defamation, in order to protect democracy. However, what about our online counterparts? Do, and should, social media platforms address concerns surrounding the abuse of freedom of speech? And how will this field be navigated in the future?
Social media has proven difficult to regulate, as it is far more efficient than physical interaction. The power of one person’s voice is amplified on social media, where a simple Tweet can rack up millions of Retweets and Likes. Social media is still a relatively new entity, especially in the legal sphere, where regulation remains minimal (the General Data Protection Regulation (GDPR) was only implemented in the EU in 2018). However, platforms are already beginning to question their complicity in the abuse of freedom of speech, (perhaps questionably) treading the uncertain ground that is regulation.
The Freedom Forum Institute, an American organisation focusing on the First Amendment, has noted the wide-ranging approaches of popular social media platforms to regulation and censorship. Reddit has emerged as one of the most lenient platforms on regulating hate speech, permitting posts unless the content “encourages or incites violence, threatens, harasses, bullies, or encourages others to do so”, whilst YouTube has held the heaviest hand in censoring hate speech, arguing that it is a “delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line”. Between these extremes lie Twitter and Facebook, two platforms whose moves towards regulation have caused uproar in the past year.
In late 2019, Twitter announced that it would ban political adverts on its platform, based on a ‘belief that political message reach should be earned, not bought’. This move banned all adverts referencing, or placed by, a candidate, political party, government official, election, referendum or piece of legislation, with exemption criteria for news publishers running ads that might reference political content. This may have been a reaction to the Cambridge Analytica scandal, in which Facebook adverts were used to target voters during the 2016 US Presidential Election and the UK Brexit Referendum.
Following this, in 2020, Twitter applied several warning labels to the US President’s tweets, including one, during the Black Lives Matter protests, stating that the tweet in question ‘glorified violence’. This drew vehement backlash from Trump and his supporters for attacking free speech – specifically, the President’s free speech. However, Twitter CEO Jack Dorsey tweeted the platform’s reasoning, stating “We’ll continue to point out incorrect or disputed information about elections globally. And we will admit to and own any mistakes we make,” further adding, “This does not make us an ‘arbiter of truth’. Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves.”
This latter assertion responded to statements by Facebook co-founder and CEO Mark Zuckerberg on the platform’s refusal to fact-check political adverts. Zuckerberg stated that no private company should be “the arbiter of truth”, citing the right to freedom of speech. Historically, Facebook has held a strong stance against heavily regulating posts (especially political ones) on the site; recently, however, it seems to have taken a new direction. On 18th June 2020, Facebook announced that Trump re-election adverts had been taken down for violating the platform’s policy against organised hate; the non-profit group Media Matters clarified that this was most likely in response to the use of a Nazi symbol in the adverts. This drew intense criticism from the US President, who accused the platform of ‘censoring’ the 2020 election.
These steps reflect a change in social media companies’ understanding of their own role in upholding the right to freedom of speech (and its caveats). However, what is the future of social media regulation, and is it right to censor social media?
Governments globally appear to be pushing for stricter regulation of social media content; whether this is positive or negative, however, is debatable. On 27 May 2020, following Twitter’s fact-checks on the President’s tweets, the White House announced it would sign an executive order potentially rolling back the protections afforded to social media companies from liability over user-generated content. This targets Section 230 of the Communications Decency Act (US law), which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Trump seeks to strip platforms of this protection if user-generated content or access is removed without adequate notice or opportunity to respond, or when removal is inconsistent with their terms of service. Cybersecurity law professor Jeff Kosseff has advocated for this move, arguing that the two purposes of this legislation were to allow free speech and innovation, and to ensure tech companies maintained control over their content, stating that “they have utterly failed on that job and did not realise that it was a two-way contract,” citing social media’s weaponisation and calling for platforms to “put on their big boy pants and behave responsibly.”
Similar (yet less grating) calls for heavier regulation have been pioneered by New Zealand, where Prime Minister Jacinda Ardern has argued that tech companies should be considered “the publisher not just the postman”, alongside Australia’s Prime Minister, who stated that “it is unacceptable to treat the internet as an ungoverned space.”
Furthermore, in early 2020 the UK government announced that further powers would be given to the watchdog Ofcom to force social media companies to take fuller responsibility for content posted on their platforms.
These moves towards further regulation seem agreeable, with critics such as Jonathan Wareham arguing that “it is incorrect… to claim that [social media platforms] do not exercise editorial control over the content”. Platforms hold immense responsibility as ‘narrowcasters’, with the power to use algorithms and user-generated data to influence the masses. Wareham believes that the “excessive social polarisation” caused through hate speech and politically-centred adverts “erodes the democratic institutions that protect free speech and other basic rights”, stating that “Without some basic consensus on the common objectives of social welfare, democracies weaken and become dysfunctional or corrupt.”
While the regulation of freedom of speech is a slippery slope, several questions remain unanswered. The issue is less whether we should regulate – it is clear that we should – but how we regulate effectively, without censoring society too heavily.