
Cyber risks to election integrity

22/01/2018
Annabelle Gold-Caution of Fieldfisher analyses the impact of targeted social and technical influence on recent significant political events across the globe, comparing state responses to fake news, social media propaganda and cyber espionage.

Reports of nation states influencing the results of foreign elections via the internet are fast becoming commonplace. For several years commentators have been reporting practices ranging from the defacing of campaign websites and accessing of opponents’ donor databases, to the hacking of vote counting machines and smartphones, the creation of clone webpages, the sending of mass emails and texts and the use of automated bots to control activity on social media.

This article first appeared in the December 2017 issue of Cyber Security Practitioner: http://bit.ly/2F4cv6q

Cyber ops
Cyber criminals have done their best to disrupt national elections, including by hacking officials’ emails, but none have gone as far as breaching a country’s online voting system to manipulate a result - at least, not yet.

Participants in the #votingvillage at DEF CON 2017 found and exploited vulnerabilities in 30 commonly used voting terminals, sometimes within minutes. Some believe this risk is low - in the US because terminals aren’t connected directly to main servers and in the UK because voting is paper based. This hasn’t stopped state-sponsored hackers trying, though. In September, the US Government revealed that 21 states’ electoral systems were targeted by Russian hackers before last year’s presidential election, leading to leaks of personal information and two voter registration systems being temporarily suspended.

Equally, infrastructure other than voting machinery can be vulnerable to threats including distributed denial of service (‘DDoS’) attacks, spear phishing, insider threats, fake emails and malware. This year the Kenyan Supreme Court was forced to annul Kenya’s general election result following allegations of vote rigging via the electoral commission’s IT systems. Political parties are particularly vulnerable to cyber attacks because they store huge volumes of data across multiple devices, including communication strategies, membership information, donor details and financial data.

A key difficulty in taking action against foreign cyber attacks is establishing the culprit. Proving whether hackers had state backing is extremely difficult, particularly for countries with inadequate resources, and while there are laws in the UK allowing for prosecution of cyber crime (under the Computer Misuse Act 1990) and cyber-enabled crimes (e.g. under the law of fraud), combating foreign interference by its nature necessitates international legal and political cooperation.

Few international agreements exist governing the use of nation state cyber attacks. This year Microsoft’s President appealed to governments to form an international agreement akin to the Geneva Convention to protect against state-sponsored cyber attacks, and to establish a neutral international organisation to investigate them.

In 2015 the US and China pledged to refrain from hacking companies in order to steal intellectual property. A similar deal was forged among the G20. But 13 years of UN negotiations came to an abrupt end in June due to a row over the right of self defence against cyber attacks, reportedly split along Cold War lines with Russia, China and Cuba pitted against the West.

Despite this, many cyber security authorities believe the risk of active and direct cyber attack on elections is low, primarily because there are easier ways of swaying results.

Fake news
Historically, the UK and US’s concept of cyber has been technical and computer network oriented. But both countries are now being forced to react to the ‘weaponisation of information’ via algorithms, automation and computational propaganda as an act of political subversion.

Responding to a 1999 UN Secretary General call for comments on technology and international security, Russia spoke of concern around “information weapons” being used to “undermine the political, economic and social system of other states, or to engage in the psychological manipulation of a population in order to destabilise society.”

The use of propaganda to shape public opinion and undermine dissent is an established phenomenon with a long-standing history. But this kind of action over the internet is much more insidious, difficult to defend against and difficult to attribute. On social media, bots (fake accounts automatically repeating the same content) have been a popular vehicle for manipulation. In the US, false accounts have bolstered the apparent grassroots support of candidates. Syrian and Turkish activists have been subject to bot spamming campaigns attempting to drown out oppositional political speech. Social media activity from bots has featured prominently in Mexican elections for several years.

The UK’s EU Referendum did not escape this phenomenon. Just two days after Theresa May delivered a speech accusing Russia of using cyber attacks and online propaganda to “undermine free societies” and “sow discord in the West,” it emerged that more than 150,000 Russian-language Twitter accounts posted tens of thousands of messages in English before the Referendum urging Britain to leave the EU.

More sophisticated efforts, such as China’s ‘50 Cent Army,’ have focused on the ability for human operators to manage multiple plausible personas on social media. Facebook disclosed this year that the Kremlin-linked Internet Research Agency had paid more than $100,000 for divisive adverts in the run-up to the 2016 Presidential Election.

Concerns of this kind have become so commonplace that the Spanish Government has cited Russian interference as a key factor in the push for Catalan independence.

Russian interference in Europe’s politics and its information space is not new, of course. Its roots lie in KGB disinformation methods, now actively combined with new technologies - a combination that is a familiar influence in the Caucasus. The Georgian NGO Detector Media published a ‘Kremlin Influence Index’ to counteract this effect in the region.

An industry has been built around the international sale and hire of botnets and it has become clear that commercial ‘political consultancy’ has in many cases extensively utilised these tools. This is occurring against an evolving social background - research now suggests that voters often trust what they see as spontaneous expressions of real people on social media over expert opinion.

There is some action being taken. For the first time since it was set up in 2015, EU anti-propaganda unit East StratCom will now receive funds directly from the EU budget rather than relying on Member States’ contributions. The new funding emerged after the European Council’s president publicly warned of “cyber attacks, fake news, hybrid war.”

Domestic progress has been slower. Although the UK Parliament has requested information from social media companies, it has been argued that the Government has little appetite for an inquiry that could muddy its mandate in the midst of Brexit negotiations. Social media companies have minimal incentives to volunteer information about the exploitation of their own platforms. And the predominantly right-wing and pro-Brexit British press have little enthusiasm for undermining the Referendum’s validity.

Where next?
Propaganda and espionage existed before computers, and will surely exist as long as humans are around. However, the right combination of top-down and bottom-up efforts will place us in a much stronger position to address these threats’ political impact. There is much to be said for equipping citizens with the skills to critically analyse what they see on the internet. The BBC, Britain’s national public service broadcaster, has launched a secondary school scheme for 2018 that will aim to teach children how to navigate the web with discernment, a move that follows a year-long study looking at how well children can spot false information. Perhaps arming the electorate with the knowledge they need to be informed voters should be our priority in this era of mobile, social and partisan news.

Germany largely avoided cyber meddling in its federal elections in September 2017. It also put huge effort into proving that lessons were learnt from the France and US elections. Its major political parties entered into a ‘gentleman’s agreement’ not to exploit any information leaked following a cyber attack.

The Federal Office for Information Security ran penetration tests looking for vulnerabilities in the federal election authority’s systems. The Bundestag and political parties obtained expert cyber security advice. And major news outlets established teams of fact checkers to counteract fake news.

Although the extent to which Germany’s actions spared it from the onslaught suffered during other countries’ elections is unclear, there is a strong argument to be made that this kind of preparation, alongside grassroots strategies like the BBC’s schools initiative, should now be standard practice.


