On social media, truth is becoming increasingly hard to distinguish from fiction, and efforts to control the spread of fake news are too slow to prevent its harm.
In the run-up to the 2024 elections, there is the looming threat of AI-generated disinformation pervading our political and corporate spheres, and potentially influencing the results. Bad actors can now produce content at speed, allowing fake news, deep fakes, and manipulated visuals to spread virally across multiple platforms. Unscrupulous characters might also find convenient cover for dirty deeds with generative AI on the scene: ‘It’s a deep fake’ is all one needs to cry when someone produces evidence of your corruption.
In practice this means we are likely to see more political deepfakes like the recent audio of Sir Keir Starmer appearing to berate a staff member, released on the first day of Labour conference. The post on X secured over a million views in a short space of time, but the incident never took place and the audio was later shown to be fake.
It followed a similar event in the final hours of Slovakia’s election campaign, when an audio recording was released on Facebook that appeared to show the leader of the opposition Progressive Slovakia party talking about his plans to rig the election. Both occurrences underline the threat posed by deepfake technology and AI in politics.
Political and wider disinformation poses a risk not only to elections but also to public perception and trust. How can we tell what is real from what is fake? How quickly can disinformation be identified and taken down before it shapes people’s thoughts and behaviour? And who do we approach to clarify the rules or to enforce the removal of content?
This raises the obvious question: who is bringing rules to bear on this quickly evolving area? The UK Government, through its White Paper and global safety summit, is seeking to take a lead on AI, but the EU and US are already ahead in bringing both regulation and tech companies to heel in their respective territories.
For starters, our political parties must lead by example. We need them to play an active role in safeguarding the integrity of our democratic processes. Their first step should be to develop and publish policies on how they will use generative AI in campaigning, committing to transparency with voters about where and how it is used.
They also need to be more vocal in calling out and condemning the use and dissemination of damaging deep fakes, even where these attack their political opponents. In a world of AI, political parties need to demonstrate a new commitment to truth and integrity in our political discourse, rising above party politics to do so.
It’s clear that the public want reassurance on this from their political leaders. Recent polling* shows that a vast majority of both the public and MPs are concerned about AI’s impact on electoral integrity, with nearly two-thirds of MPs worried AI will increase the risk of misinformation and disinformation. 78% of the public support requiring political parties to disclose the use of AI in generating any campaign materials. With an election not far off, there needs to be some quick thinking on this.
As Rishi reflects on his discussions with both tech and political leaders, he will need to consider what practical measures will help voters keep faith in our system. Yes, more pressure needs to be applied to the tech firms and platforms where this content is shared. There is, for example, much more they could be doing to embrace and roll out technical mechanisms (such as watermarking or labelling) that let users know when content is AI-generated.
He would also do well to focus on what is in his control in terms of regulation – he’d have support from both the public and politicians on this. 73% of the public and nearly half of MPs (48%) believe there should be a new UK body to regulate AI.
AI is already here, disrupting how we live, work and perceive the world around us. As we stand on the cusp of this new era, the opportunities feel infinite, but so do the risks. So we turn to our political leaders to recognise the threat and ask them to take action to protect what we hold dear.
by James Bird, Managing Director – Innovation and Insight at Cavendish Consulting and Lucy Bush, Director of Research and Participation at Demos
* New research, commissioned by Cavendish Consulting and undertaken by YouGov, shows that a majority of MPs and the public are concerned about the impact of artificial intelligence (AI) on electoral integrity.
Polling Headlines:
● Seven in ten MPs (70%) and three quarters of the public (75%) believe AI will increase the spread of misinformation
● Less than a quarter of MPs (22%) believe the UK is moving fast enough on the regulation of AI
● 56% of the public believe AI technology will have a negative impact on the integrity of elections in the UK, with 57% of MPs believing AI could interfere with electoral integrity
● 78% of the public support political parties being required to disclose the use of AI in the generation of any campaigning materials
● 73% of the public and nearly half of MPs (48%) believe there should be a new UK body to regulate AI, with 74% of Labour MPs backing the move, compared to 32% of Conservative MPs