Fair Tech and Democracy: Thoughts on Content, Misinformation and Elections

Posted on 11th May 2020


Tiernan Kenny
Policy Manager, UK & Europe

The stratospheric growth of the Internet over the last ten years has presented governments with a range of challenges. One of the most pressing for European liberal democracies is the ability of bad actors to use online platforms to manipulate public discourse, up to and including electoral meddling, coupled with the perceived unwillingness of Silicon Valley giants to act decisively to halt it.

Online, people can access huge amounts of information instantly, with minimal filters to screen out misinformation, illegal or harmful content, or state-sponsored propaganda. Additionally, large social media platforms and search engines personalise the content they provide to their users, meshing organic, promoted and advertising content together. This can make it difficult for users to pick out objective information, to understand why they see certain content, or why it is presented in a particular order.

Politicians have been quick to jump onto Facebook and Twitter, in some cases seeing the benefits of bypassing traditional media and tough questions from journalists to broadcast political messaging directly to their supporters. However, the freewheeling nature of social media sites and their historically light-touch moderation, compared with the strict rules governing traditional media (print and broadcast), have created a fertile breeding ground for disinformation and a ready-made network that allows it to spread in an opaque manner.

Political communications and electoral meddling are just two of a whole host of problems politicians have been urging online platforms to tackle, albeit armed only with pre-Internet legislation offering limited enforcement options. In the absence of appropriate legislative tools, online platforms have found themselves subject to fierce political criticism and to outdated legislation being interpreted by courts and applied to realities its creators could never have conceptualised. A good example is the so-called “right to be forgotten” case faced by Google in Spain.

In this case, after a long run of litigation, Europe’s top court eventually established a “right to be forgotten” enforceable against search engines. This requires them to remove links to private information on request, provided the information is no longer relevant, based on the privacy legislation applicable at the time – the 1995 Data Protection Directive. The ruling did not oblige any entity to remove information from the Internet, only forcing search engines to remove it from their results. While the court may have been constrained by legislation dating from 1995, this seems a blunt tool for enforcing privacy rights, as the information itself can still be found by anyone who comes across it by other routes. Meanwhile, Google was forced to hire thousands of staff to deal with a tsunami of deletion requests of varying merit.

As concern over the role of social media platforms in electoral campaigning rose to prominence in Europe through 2016 and 2017, a patchwork of solutions emerged. This ranged from Facebook’s commitment to creating a library of political advertisements to banning advertising content from groups based outside the country where an election is taking place, all the way to attempts to ban political advertising outright from some platforms.

These steps allowed social media companies to show that they understood what was happening on their platforms and that they wanted to find solutions. However, European politicians will have taken note of Facebook’s recent decision not to fact-check ads or posts from politicians in the United States, a country with a free-speech tradition that’s very different to Europe’s. This is a clear contrast to Facebook’s efforts to flag fake or suspicious news to its users and a concern for mainstream politicians facing increasing challenges from populists at both ends of the political spectrum.

The scale and success of social media companies is based on their ability to provide a seamless, consistent experience across the globe. Carving out different approaches to political content across different geographies obstructs this and can be extremely challenging, not least because malicious actors can use a range of technologies to spoof their location. Thus far, most companies have provided localised solutions only under threat of heavy fines or worse – for example Google’s choice to enforce the right to be forgotten only in Europe, and Twitter’s German service, which blocks hate speech content under the NetzDG law.

What is clear amidst the ever-shifting sands of technology is that social media is now a firm part of the media landscape and an increasingly important source of information for many people. There therefore needs to be a sustainable and appropriate balance: one that recognises the role of social media platforms in disseminating content while giving them a solid legal framework with a clear distinction between public and private creation and enforcement of regulation. There is also a broader debate, in the context of Section 230 of the US Communications Decency Act and the EU’s e-Commerce Directive, as to whether social media companies should be treated as publishers. However, the challenge of disinformation during electoral periods can be dealt with separately, as most countries have existing legislation on electoral spending and communications.

So, what does this framework look like?

While there may be short-term temptations for politicians to harangue social media companies and accuse them of not doing enough, such rhetoric will inhibit the emergence of a long-term, sustainable solution. Equally, social media companies will need to genuinely and proactively engage, instead of reacting to overt political pressure or obvious examples of interference.
