The Power of the Deepfake: Misinformation in the UK Election
Artificial intelligence (AI)-powered deepfake technology is not a new phenomenon. The term ‘deepfake’ has been used to describe synthetic media since as far back as 2017, and many examples of deepfakes have been widely publicised. Viral examples include an AI-generated video of Mark Zuckerberg talking about the nefarious power of Facebook, a TikTok account dedicated to sharing deepfake videos of actor Keanu Reeves, and even a video of US House Speaker Nancy Pelosi, doctored (in that case by simply slowing the footage rather than using AI) to make her appear drunk.
Deepfake technology continues to advance at pace, and it’s more complicated than ever to tell what is and isn’t real. With the UK general election underway, have deepfakes shaped the way constituents view political candidates – and even impacted how they vote?
Deepfakes in the UK General Election
Last month, a doctored video of Labour Party politician Wes Streeting was released on X (formerly Twitter), purportedly showing him calling fellow Labour Member of Parliament Diane Abbott a “silly woman”. The BBC reported that the video and many of the comments in response on X came from a network of accounts that create and share clips of politicians and then post misleading comments to reinforce the impression they are real.
Is the network aiming to undermine and discredit the targeted politicians for political ends, or simply for entertainment? The BBC report found that the same accounts posted deepfake clips of Labour’s Luke Akehurst as well as Reform UK’s Nigel Farage, spanning the UK’s political spectrum. In an unrelated development, satirical deepfake videos of UK Prime Minister Rishi Sunak also went viral recently after he announced National Service proposals.
The impact of AI isn’t limited to social media, either: there’s even an AI Independent candidate called AI Steve, an avatar of real-life businessman Steven Endacott.
A study by YouGov found that 43% of Britons get their general election news from social media, second only to television (58%); in total, 77% get their news online. So, while many applications of deepfake technology are harmless and entertaining, videos such as the Wes Streeting clip could be politically damaging. As Britons cast their votes, deepfakes will likely have influenced some voters’ choices.
GenAI Continues to Impact Businesses, Too
Threat actors also continue to target businesses and their customers. Last week, Australian news channel 7News found its YouTube channel hijacked by threat actors, who used it to livestream an AI-generated video of Elon Musk touting a cryptocurrency scam. The Sydney Morning Herald reported that the deepfake encouraged viewers to scan a Tesla-branded QR code and deposit money into the cryptocurrency scheme to “double their crypto assets”.
The hack shows that threat actors are getting increasingly savvy in their approach to compromising businesses to access critical data – and customers. The 7News YouTube channel has over a million subscribers, allowing the scammers behind the attack to reach 7News’ significant audience: CryptoNews reported that the live stream was watched by around 60,000 viewers, and another simultaneous stream was viewed by around 45,000 people.
Deepfakes also enable threat actors to mount business email compromise (BEC)-style attacks on organisations, using AI-generated audio, imagery and video to impersonate senior staff members. In April, LastPass shared that a threat actor had impersonated CEO Karim Toubba over WhatsApp, sending an employee calls, texts and voicemails featuring an audio deepfake. In that case, the employee reported the incident and LastPass mitigated the threat, but there have been many recent incidents in which this mode of attack succeeded, leading to financial loss.
Advances in AI Technology are Leading to Increased Cyber Threats
AI is becoming an increasingly accessible technology for threat actors, leading to an upsurge in attacks on individuals and businesses. Our State of Information Security Report surveyed over 1,500 information security professionals and found that almost a third (30%) of companies have experienced a deepfake attack in the last 12 months.
However, the majority of respondents (76%) also agreed that AI and machine learning (ML) technology is improving information security, offering new ways to automate, analyse and enhance security measures. 62% expect to increase their spending on AI and ML security applications in the coming 12 months.
With AI-powered technology offering businesses both challenges and incredible opportunities, how can we address and prevent deepfake incidents?
Reducing the Risk of Deepfake Incidents
For businesses, employee training and awareness will be vital to reducing the risk of deepfake incidents. Standards like ISO 27001, the international information security standard, provide a framework for building, maintaining and continuously improving an information security management system (ISMS). This includes implementing an information security policy that guides employees in reporting a suspected incident, such as a BEC attempt arriving via email or in the form of a deepfake.
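To make the reporting step concrete, here is a minimal Python sketch of the kind of structured record an incident reporting process might capture. The `SuspectedIncident` class and its field names are illustrative assumptions on our part, not something prescribed by ISO 27001:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SuspectedIncident:
    """One employee-filed report of a suspected deepfake/BEC incident.

    Field names are illustrative, not prescribed by ISO 27001.
    """
    reporter: str     # who spotted it
    channel: str      # e.g. "email", "WhatsApp", "video call"
    description: str  # what looked or sounded wrong
    suspected_type: str = "deepfake/BEC"
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: logging the kind of voicemail impersonation LastPass reported.
report = SuspectedIncident(
    reporter="j.smith",
    channel="WhatsApp",
    description="Voicemail claiming to be the CEO requesting an urgent "
                "payment; the audio sounded robotic and off-script.")
print(report)
```

Even this small amount of structure (who, via which channel, what seemed wrong) gives a security team enough to triage quickly and, as in the LastPass case, contain the threat before any money moves.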
During the UK general election, the public must be vigilant when considering whether videos and audio snippets posted to social media platforms are real or manipulated. Several visual and aural clues can help to identify a deepfake; a rough automated check for one of them is sketched after the lists below.
Visual clues include:
- Unnatural eye movement (or a lack of eye movement)
- Inconsistent facial expressions
- Disjointed or jerky body movements
- Teeth that don’t look real or an absence of outlines of individual teeth
- Blurring or misaligned visuals
Aural clues include:
- Robotic-sounding noises
- Audio glitches
- Inconsistent audio
- Strange pronunciations of words
- Lip sync errors, with discrepancies between the words being spoken and lip movements
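As a rough illustration of how the first visual clue might be checked automatically, here is a minimal Python sketch that estimates blink rate using OpenCV’s stock Haar cascades. The function `estimate_blink_rate` and its frame-to-frame heuristic are our own assumptions for illustration, not an established detection method:

```python
import cv2  # OpenCV: pip install opencv-python

# Stock Haar cascades that ship with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def estimate_blink_rate(video_path: str) -> float:
    """Estimate blinks per second for the first detected face.

    A blink is counted whenever previously visible eyes disappear
    between consecutive frames - a crude proxy, for illustration only.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS unknown
    frames, blinks, eyes_were_visible = 0, 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        eyes_visible = False
        for (x, y, w, h) in faces[:1]:  # first face only
            roi = gray[y:y + h // 2, x:x + w]  # eyes sit in the upper half
            eyes_visible = len(eye_cascade.detectMultiScale(roi, 1.1, 5)) >= 2
        if eyes_were_visible and not eyes_visible:
            blinks += 1
        eyes_were_visible = eyes_visible
    cap.release()
    return blinks / (frames / fps) if frames else 0.0
```

Healthy adults typically blink around 15 to 20 times per minute, so talking-head footage with almost no blinking (an early tell in many deepfakes) or an erratic rate may merit closer scrutiny. Real detection tools rely on trained models rather than hand-rolled heuristics like this.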
Combating the Influence of Synthetic Media
It will be hard to measure the impact of AI-generated deepfakes on the UK general election. With 43% of Britons getting their news from social media, many constituents will likely have seen some form of synthetic media related to a political party – whether they’re aware of it or not.
However, users on X regularly identify potential deepfakes and alert other users through the community notes feature. The X platform itself now has a manipulated media policy stating that ‘you may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm’. The account that originally posted the manipulated Wes Streeting video has since been suspended, though the clip’s full impact remains to be seen.
With deepfake incidents on the rise, businesses should also consider how their security posture can be improved. Developing a robust ISMS aligned with the ISO 27001:2022 standard helps organisations bolster their defences against cyber threats, reduce the risk of successful attacks, and ensure employees are aware of their information security roles and responsibilities.