There has never been a better time to be a politician – unless, of course, you’re a machine learning engineer working for one.
Throughout modern history, political candidates have had limited tools for gauging the sentiments and opinions of the electorate. More often than not, they had to rely on instinct rather than insight when running for office.
The advent of big data and its application to political campaigns changed that. Most prominently, the 2008 US presidential election was the first to rely on large-scale analysis of social media data, which was used to improve fundraising and coordinate volunteers. Now the next phase of this digital transformation is under way: the integration of artificial intelligence (AI) systems into election campaigns, as into nearly every other aspect of political life.
Already today, machine learning systems can predict which US congressional bills are likely to pass. Algorithmic assessments are being implemented in the British criminal justice system. And most strikingly, machine intelligence solutions are now being carefully deployed in election campaigns to engage voters and help them be more informed about key political issues.
But as we approach an election climate in which everything from voter intelligence to voter targeting and conversational engagement can be automated, we need to ask ourselves: are we putting our democracy at risk by placing too much trust in AI systems? How far should we go in integrating machines into the human side of democracy?
These ethical questions are especially pertinent, given the recent press coverage investigating the dark side of campaign technologies in the Brexit referendum and the 2016 US presidential election. In particular, there is evidence to suggest that AI-powered technologies have been systematically misused to manipulate citizens. And some people claim that they were a decisive factor in the referendum and election results. This is a disquieting trend.
Attack of the bots
First, the use of AI to manipulate public opinion: massive swarms of political bots were used to spread propaganda and fake news on social media. Bots are autonomous accounts that, in the political arena, are programmed to spread one-sided political messages and create the illusion of public support.
Typically disguised as ordinary human accounts, bots have been responsible for spreading misinformation and contributing to an acrimonious political climate on sites like Twitter and Facebook. They are very effective at attacking voters from the opposing camp and even discouraging them from going to the voting booth.
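To make the mechanic concrete, here is a deliberately toy sketch of how such an amplification swarm operates. Everything in it – the post() stub, the account names, the talking points – is an invented illustration, not a real platform API or real campaign content:

```python
import random
import time

# Invented talking points for a fictional "Candidate X".
TALKING_POINTS = [
    "Candidate X is the only one who can fix the economy. #Election",
    "Everyone I know is voting Candidate X. Don't believe the polls!",
    "The other side is lying to you AGAIN. Share the truth. #Election",
]

def post(account: str, message: str) -> None:
    """Stand-in stub for a real platform API call (an assumption, not a real API)."""
    print(f"[{account}] {message}")

def run_bot_swarm(accounts: list[str], posts_per_account: int) -> None:
    """Each automated account posts copies of the same talking points,
    staggered slightly to mimic human behaviour and create the illusion
    of independent grassroots support."""
    for account in accounts:
        for _ in range(posts_per_account):
            post(account, random.choice(TALKING_POINTS))
            time.sleep(random.uniform(0.1, 0.5))  # stagger to look less robotic

run_bot_swarm([f"concerned_citizen_{i}" for i in range(5)], posts_per_account=2)
```

The point of the sketch is how little sophistication is required: a handful of scripted accounts repeating coordinated messages is enough to distort the apparent balance of opinion in a conversation.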
For example, bots regularly infiltrated the online spaces used by pro-Clinton campaigners to spread highly automated content, generating a quarter of Twitter traffic about the election. With a massive storm of messages, they were able to drown out dissent on social media and thereby support the Trump campaign.
Bots were also largely responsible for pushing #MacronLeaks across social media just days before the French presidential election. They swarmed Facebook and Twitter with leaked information mixed with falsified reports to build a narrative that Emmanuel Macron was a fraud and a hypocrite – a common tactic when bots are used to push trending topics and dominate social feeds.
The dark side of political AI
Second, the use of AI to manipulate individual voters: during the US presidential election, an extensive advertising campaign was rolled out that targeted persuadable voters based on their individual psychology. This highly sophisticated micro-targeting operation relied on big data and machine learning to influence people’s emotions.
The problem with this approach is not the technology itself, but rather the covert nature of the campaign and the blatant insincerity of its political message. Different voters received different messages based on predictions about their susceptibility to different arguments.
A presidential candidate with flexible campaign promises was, of course, particularly well suited to this tactic. Every voter could receive a tailored message that emphasised a different side of the argument. There was a different Trump for different voters. The key was just finding the right emotional triggers for each person to drive them to action.
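The underlying mechanics are those of any supervised learning pipeline: score each voter, predict which framing they are most susceptible to, and serve that variant. The sketch below shows the general shape of such a system; the features, labels and message names are hypothetical illustrations, not details of the actual campaign operation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical psychographic features per voter: [anxiety, openness, distrust].
# Labels come from imagined past A/B tests recording which framing each
# voter engaged with. All values and message names are invented.
X_train = np.array([
    [0.9, 0.2, 0.8],
    [0.8, 0.3, 0.9],
    [0.1, 0.9, 0.2],
    [0.2, 0.8, 0.1],
])
y_train = ["fear_framing", "fear_framing", "hope_framing", "hope_framing"]

# Train a classifier to predict which emotional framing a new voter is
# most susceptible to, then serve that message variant.
model = LogisticRegression().fit(X_train, y_train)

new_voters = np.array([[0.7, 0.4, 0.6], [0.3, 0.9, 0.2]])
for features, variant in zip(new_voters, model.predict(new_voters)):
    print(f"voter {features} -> serve '{variant}' ad")
```

Nothing in this pipeline is exotic – it is standard ad-tech machinery. What made its electoral use troubling was that voters never knew they were each seeing a different, emotionally tuned candidate.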
The damage to democracy
Information warfare is obviously not a new phenomenon. The pamphlet wars of early modern Europe, for instance, are among the earliest examples of large-scale propaganda campaigns. But the nature and scale of computational propaganda is simply unprecedented. In fact, the nefarious application of AI in elections raises much larger questions about the stability of the political system we live in.
A representative democracy depends on free and fair elections in which citizens can vote with their conscience, free of intimidation or manipulation. Yet if this technology continues to be used to manipulate voters and promote extremist narratives, we are, for the first time, in real danger of undermining fair elections.
Towards human-centred AI
It is easy to blame AI technology for the world’s wrongs (or for lost elections), but here’s the rub: the underlying technology is not harmful in itself. The same algorithmic tools used to mislead, misinform and confuse can be repurposed to support democracy and increase civic engagement. After all, human-centred AI in politics must work for the people, with solutions that serve the electorate.
There are many examples of how AI can enhance election campaigns in ethical ways. For example, we can program political bots to step in when people share articles that contain known misinformation. We can deploy micro-targeting campaigns that help to educate voters on a variety of political issues and enable them to make up their own minds. And most importantly, we can use AI to listen more carefully to what people have to say and make sure their voices are being clearly heard by their elected representatives.
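Taking the first of those ideas as an example, a minimal sketch of a counter-misinformation bot might look like the following. The domain list, the reply text and the post_reply() stub are all illustrative assumptions rather than a real platform API:

```python
from urllib.parse import urlparse

# A curated list of flagged domains; the entries and the fact-check URL
# below are placeholders, not real sources.
KNOWN_MISINFO_DOMAINS = {"fake-news-example.com", "hoax-daily-example.org"}

def post_reply(post_id: str, text: str) -> None:
    """Stand-in stub for a real social media API call."""
    print(f"reply to {post_id}: {text}")

def review_shared_link(post_id: str, url: str) -> None:
    """If a shared article comes from a flagged source, reply with a
    pointer to independent fact-checking rather than deleting the post."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in KNOWN_MISINFO_DOMAINS:
        post_reply(
            post_id,
            "Heads up: this article comes from a source repeatedly flagged "
            "for misinformation. Fact-checks: https://example.org/fact-check",
        )

review_shared_link("post-123", "https://www.fake-news-example.com/shock-story")
```

The design choice matters here: the bot adds context and links rather than suppressing speech, which keeps the final judgement with the voter.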
An alternative route to restricting computational propaganda is greater regulation. Stricter rules on data protection and algorithmic accountability could reduce the extent to which machine learning can be abused in political contexts.
But regulation always moves more slowly than technology. When regulators finally start discussing the legal frameworks for AI in politics, let’s hope we still have some democratically elected leaders left.
The Centre for Public Impact is investigating the way in which artificial intelligence (AI) can improve outcomes for citizens.
Are you working in government and interested in how AI applies to your practice? Or are you an AI practitioner who thinks your tools could have an application in government? If so, please get in touch.
FURTHER READING
- Algorithm and blues. Algorithm problems affecting Australia’s Centrelink agency should not prevent policymakers from seeking to harness the wider benefits of AI, says Joel Tito
- The six things you need to know about how AI might reshape governments. Adrian Brown and Danny Buerkli examine how AI is poised to impact policymaking around the world
- Government and the approaching AI horizon. Danny Buerkli considers how AI is likely to reshape the business of government
- From the machines of today, to the artificial intelligence of tomorrow. IP Australia is pioneering the use of artificial intelligence in the Australian government. Its general manager of Business Futures, Rob Bollard, tells us how services – and citizens – benefit from its deployment
- Mapping the future: how governments can manage the rise of AI. How can policymakers control and steer the future trajectory of artificial intelligence? Cyrus Hodes and Nicolas Miailhe, co-founders of Harvard’s AI Initiative, offer up some suggestions
- March of the machines: how governments can benefit from AI. The rapid advances of artificial intelligence are poised to reshape our world, says Philip Evans. He explains why governments should embrace, and not retreat from, this upcoming revolution
- Changing times: Why it’s time to plan, not panic for AI. Although artificial intelligence can lead to many positive results, policymakers should be vigilant about the implications and direction of this fast-moving technology, says Sebastian Farquhar. He explains why clear goals and objectives are paramount
- How AI could improve access to justice. With the law never having been less accessible, Joel Tito explains how the smart deployment of artificial intelligence can help
- Data detectives: how AI can solve problems large and small. Richard Sargeant is putting the knowledge gained as one of the founders of the UK’s Government Digital Service to full use by exploring how artificial intelligence can help governments address systemic challenges – he tells us how