
Is the UK Taking the Right Approach to AI Regulation?

In mid-March 2024, the European Parliament finally approved the Artificial Intelligence Act, which aims to ensure the safe and compliant use of AI while boosting innovation. Widely regarded as a world first in AI regulation, it stands in stark contrast to the UK’s more hands-off approach. But which approach is likely to benefit business and society more? And will the UK eventually change its tune?

The First of Its Kind

Endorsed by 523 members of the European Parliament – with only 46 voting against it – the AI Act is intended to:

  • Enhance the EU’s competitiveness in strategic sectors
  • Create a safe and trustworthy society to counter disinformation
  • Promote digital innovation
  • Ensure human oversight and trustworthy and responsible use of AI
  • Set safeguards and ensure transparency when using AI

Brando Benifei, MEP and co-rapporteur of the Internal Market Committee, described the legislation as “the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency.”

Although it is the first regulation aimed directly at AI, a number of actions had already been taken in this arena under the GDPR, including Italy’s temporary ban on ChatGPT and the postponed EU launch of Google’s Bard AI tool after intervention from the Irish data watchdog.

Jonathan Armstrong, lawyer and educator at L-EV8, calls the EU AI Act “pretty comprehensive”, saying it will ultimately set the global standard for other regulations. Although the UK won’t adopt it, as it was introduced well after Brexit, it could still have an impact on developers in the country, he argues.

“It is important to understand that there are UK laws which already apply to AI, including data protection law which has been used by regulators already,” he tells ISMS.online. “It’s also important to remember that since AI is mostly cross-border, British AI developers and users are also likely to have to comply with the EU regime too once that’s in force.”

What Is the UK Doing?

When it comes to the UK’s own approach to AI regulation, Armstrong believes the government is playing catch-up, although in some areas there is an opportunity for the UK to introduce a different regulatory regime.

“However, I am not sure the current government is fleet of foot enough to do that and clearly has already set its stall out with self-regulation in the short term,” he argues.

The UK government’s AI white paper, published in March 2023, aims “to guide the use of artificial intelligence in the UK, to drive responsible innovation and maintain public trust in this revolutionary technology.” It sets out principles for existing regulators “to interpret and apply within their remits in order to drive safe, responsible AI innovation.” Earlier this year, the government’s response to its white paper consultation claimed regulators were already implementing these principles independently.

“We remain committed to a context-based approach that avoids unnecessary blanket rules that apply to all AI technologies, regardless of how they are used”, it noted, adding “this is the best way to ensure an agile approach that stands the test of time.”

The idea is to provide a “pro-innovation approach to AI regulation” that embraces the opportunities AI presents while encouraging “safe, responsible innovation”, staying “pro-safety” and ensuring AI is a force for public good. However, Armstrong argues that the current government may not understand AI as well as it should.

More or Less Regulation?

So is the UK’s approach better for business? Or is more regulation a positive for organisations? Neil Thacker, CISO at Netskope, says he likes the way the UK has gone about this.

“I know there is a slight contrast between the UK and the EU at the moment,” he tells ISMS.online. “So you’ve got the EU rolling out a regulation with penalties to ensure that organisations apply a form of safety and security control to AI, whereas the UK is almost ‘we are going to take a stance on this with a framework, but it’s non-statutory’.”

He argues that there will likely be some kind of UK legislation on AI eventually, “but it’s trying to work out when is the right time”. In the meantime, Thacker argues that CISOs are “self-regulating” to make sure they have controls in place, while society tries to understand the implications of the technology.

“One of the biggest challenges is companions and assistants are bundled into every piece of software,” he says. What CISOs need, therefore, is visibility into where AI is present and being used. They could also benefit from the kind of rigorous approach endorsed by the EU to ensure systems are safe and effective.

“It’s about looking up the providers, the deployers, the importers, the distributors of these services, the manufacturers of AI systems,” he concludes. “It’s making sure that they are accountable for ensuring that a lot of the basic fundamentals of what the EU AI Act stands for are being applied to their systems.”

Wait and See

For now, the government’s approach seems to be “wait and see”. Although the National Cyber Security Centre (NCSC) has developed what it claims are “world-first” guidelines for secure AI development, there’s little sign of any regulatory power to enforce such best practices. The government appears content to grab headlines with little substance to back them up.

A much-publicised global event held by the government at Bletchley Park in November 2023, the AI Safety Summit, was tasked with considering the risks of AI, especially at the frontier of development, and discussing how they can be mitigated through internationally coordinated action. However, L-EV8’s Armstrong describes it as “something of a big tech love-in rather than a considered look at regulation.”

Until then, UK businesses will watch with interest to see what the next administration has in store.
