Will the UK’s AI Growth Plans Also “Mainline” Cyber Threats?
Table Of Contents:
The UK has, for over a decade, been an economic underachiever. Since the financial crisis of 2008-9, productivity has barely risen, increasing the country’s exposure to economic shocks, fomenting popular discontent and leading to deteriorating public services. The new government thinks it has an answer: AI. A major new AI Opportunities Action Plan unveiled in January is designed to deliver a “decade of national renewal” by “mainlining AI into the veins of this enterprising nation”.
Yet, as experts have warned, it also introduces significant new opportunities for threat actors to steal, sabotage, extort, and disrupt. The key will be ensuring the organisations signed up to make the plan a reality design their AI infrastructure and systems with security in mind from the very start.
What’s In The Plan?
The plan itself, which the government claims it will adopt in full, is certainly not short on ambition. There are eight key elements:
Laying the foundations with AI infrastructure
This includes building a dedicated AI Research Resource (AIRR) of advanced AI-capable computers and establishing “AI Growth Zones” to fast-track private sector data centre construction.
Unlocking data assets
Developing a National Data Library (NDL), which will make public datasets available “securely and ethically” for AI researchers and innovators. These plans will also involve building public sector data collection infrastructure, financing the creation of new high-value data sets, and incentivising industry to “curate and unlock private data sets.”
Training and skills
Assessing the AI skills gap, improving diversity in the talent pool, supporting higher education to increase the number of AI graduates, and attracting skilled workers from abroad.
Regulation, safety and assurance
Developing the AI Safety Institute, making the “UK text and data mining regime” more competitive, “urgently” funding regulators to enhance their AI expertise, and ensuring all sponsor departments prioritise “safe AI innovation”. The plan also cites pro-innovation initiatives like regulatory sandboxes and building government-backed “high-quality assurance tools” to assess AI safety.
Adopt a “Scan > Pilot > Scale” approach
Anticipating future AI developments, consistent and rapid piloting and prototyping, and a focus on scaling to tens or hundreds of millions of citizen interactions across the UK. This stage also references the need for infrastructure interoperability, code reusability and open sourcing.
Public and private sector reinforcement
The government plans to use its heft and newly built digital infrastructure to create new opportunities for innovators, create an AI knowledge hub and drive private sector interest in the Scan > Pilot > Scale approach for quick wins.
Address user adoption barriers
Improve public and private sector adoption via sector-specific AI champions and a new industrial strategy.
Advance AI
Create a new government unit with the power to partner with the private sector to maximise the UK’s stake in frontier AI.
More Than Lip Service?
As is typical with major government announcements, many of the details remain to be worked out. So while “security” is mentioned 14 times in the plan, it is only in the vaguest terms, such as that the government “is committed to building cutting-edge, secure and sustainable AI infrastructure” or that it “will responsibly, securely and ethically unlock the value of public sector data assets”.
Yet, there is good reason to be concerned about the implications. According to the World Economic Forum (WEF) ‘s Global Risks Report 2025, “adverse outcomes from AI technologies” was ranked by business leaders and experts as the sixth most severe risk over the next decade. These outcomes could come from poorly designed models or malicious actions like data/model poisoning. In the latter scenario, threat actors gain access to AI systems to corrupt the training data or manipulate model parameters to either sabotage or cause specific, undesirable outputs.
They could use the same access to AI infrastructure to steal sensitive corporate and customer training data, or even a large language model (LLM) itself, if it has been finely tuned by an organisation for a specific purpose and therefore represents valuable IP in its own right.
Unfortunately, research reveals that the key components of next-gen AI approaches like retrieval augmented generation (RAG) and autonomous “agentic AI” are riddled with security flaws. One report claims to have found multiple vulnerabilities in LLM hosting tools and platforms like llama.cpp and Ollama, as well as vector databases such as ChromaDB. It also reveals scores of publicly exposed servers and instances associated with such tools, many of which require no authentication. That’s not to mention the risk posed by compromised credentials.
“A high percentage of cyber-attacks use stolen credentials – so attackers log in rather than hack in,” SoSafe CSO Andrew Rose tells ISMS.online. “Many firms create AI chatbots to assist their staff, but few have thought through the scenario of their chatbot becoming an accomplice in an attack by aiding the attacker to collect sensitive data, identify key individuals and useful corporate insight.”
Experts Raise Red Flags
Other security experts have also raised the alarm over the government’s plans. Michael Adjei, director of systems engineering at Illumio, warns of a “hidden layer” of proprietary and insufficiently scrutinised AI technology that threat actors could target via the adversarial data poisoning attacks explained above.
“The challenge is that the hidden layers of AI operate through ‘learned representations’, which is difficult for security teams to interpret and monitor for vulnerabilities. This makes it harder to detect tampered AI models, particularly in systems that function autonomously or in real-time,” he tells ISMS.online.
“The AI supply chain poses further risks. Compromised third-party data, training environments, software, or hardware can jeopardise entire AI systems. For example, attackers could inject malicious data into training datasets, creating biases or vulnerabilities in AI models.”
The government plan references open-source software, a particular supply chain concern given the industry’s well-documented security challenges.
Bridewell CTO Martin Riley has a background in data centre design and operation. He warns that the “data centre market is not well regulated, and the maturity of cybersecurity around these facilities is somewhat lacking.” He also raises a red flag about the NDL.
“The NDL will primarily aim to ensure the private sector can innovate to support the public sectors, so the rigour around the data, anonymisation and protection of individuals is going to create several cybersecurity challenges,” Riley says. “What are the cybersecurity requirements going to be for those that are looking to access the NDL and use its data?”
SoSafe’s Rose wants to see a greater focus on governance.
“I’d hope to see the government reiterating that AI must comply with existing regulations, such as data privacy standards. Insisting on quality and control for input data would be wise to ensure a quality output that lacks bias, but this becomes a challenge when the data sets become vast and wide-ranging, sourced from many places,” he explains.
“The key is insisting that firms who embrace AI set up a governance and oversight committee. It should require an inventory of where AI is used, and the scope of its responsibilities, supported by risk assessments of the potential harm caused by erroneous output or failure, and recovery paths for any failures or breaches.”
ISO 42001 To The Rescue
Regulators will play a significant role in ensuring the government’s ambitions are realised safely, securely and ethically. The AI Safety Institute will become a statutory body, sponsor departments will be funded to “scale up their AI capabilities”, and “safe AI innovation” will be emphasised in guidance to these regulators.
However, while it remains to be seen what new rules may be implemented as a result, there are already best practice standards that can help AI developers and users navigate whatever regulatory frameworks come their way. ISO 42001, for example, is designed to drive responsible use and management of AI management systems.
“It provides guidelines for secure AI use, helping developers implement mechanisms to detect unusual behaviour or unexpected outputs, reducing susceptibility to manipulation,” says Illumio’s Adjei.
SoSafe’s Rose agrees.
“ISO 42001 is an effective methodology for helping organisations to adopt AI mindfully. It drives a careful approach to assessing and controlling the implementation of AI, ensuring that there is sufficient insight and oversight of associated risks,” he concludes.
“Like ISO 27001, it doesn’t make you secure. However, it creates a path for continual assessment and improvement, which increases the likelihood of creating a resilient solution.”
We’ll know much more about the government’s plans in spring, although with the UK economy in poor shape, it remains to be seen whether the Treasury will block or dilute many of these initiatives. Whatever happens, let’s hope that built-in security and enhanced AI governance are non-negotiable.