Businesses Urged to Track ‘Fast-Evolving’ AI Regulations
Artificial Intelligence (AI) is transforming the information technology sector at a dizzying rate, reshaping applications from data analysis and customer service to software development itself. Businesses that are slow to adopt AI risk a serious competitive disadvantage. But AI deployments do not come without risk.
For example, flaws in an AI system designed to tackle childcare benefit fraud plunged families in the Netherlands into financial distress. Amazon had to scrap an AI recruiting tool that showed bias against female candidates.
The use of AI for data collection and analysis also raises privacy issues and the potential for data breaches, particularly in sensitive sectors of the economy. For example, many banks have banned staff from using tools such as ChatGPT and other AI virtual assistants because of concerns that the nature of queries might leak information about business secrets such as planned acquisitions or mergers.
A recent report by the Centre for Long-Term Resilience (CLTR) called for the UK to set up a system to log AI misuse or malfunction incidents. The think tank argues that problems with AI technology need to be treated in the same way that the Air Accidents Investigation Branch investigates plane crashes.
An incident reporting system for AI problems offers the opportunity to develop best practices in areas such as risk management, learn lessons and shape regulations, according to the CLTR.
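To make the idea concrete, here is a minimal sketch in Python of the kind of structured record such a reporting system might collect. The field names and severity levels are illustrative assumptions for the example, not anything the CLTR has specified.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncident:
    """Minimal, illustrative record for an AI misuse/malfunction log."""
    system_name: str       # which AI system was involved
    description: str       # what went wrong, in plain language
    severity: str          # e.g. "low", "medium", "high", "critical"
    affected_parties: str  # who was harmed or put at risk
    root_cause: str = "under investigation"
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example entry, loosely echoing the benefits-fraud case above.
incident = AIIncident(
    system_name="benefits-fraud-scoring",
    description="Model flagged legitimate claimants as fraudulent",
    severity="critical",
    affected_parties="benefit claimants",
)
print(json.dumps(asdict(incident), indent=2))
```

Even a schema this simple would let a regulator aggregate incidents across organisations and spot recurring failure patterns, much as aviation investigators do with structured crash reports.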
Regulatory Progress
While regulation is still catching up with the use of AI within businesses, a wait-and-see approach to compliance is far from wise.
David Corlette, VP of product management at VIPRE Security Group, told ISMS.online that AI regulation is evolving (near) concurrently with the technology itself.
“While we are yet to see a comprehensive framework for AI, notable progress is being made,” according to Corlette. “There’s the NIST AI Risk Management Framework (AI RMF), and of course, ISO 42001, which offers a promising approach to AI governance. ISO 42001 will bring a level of consistency and reliability across borders.”
Laying Down a Foundation
Frameworks such as ISO 42001 can help establish robust foundations and lighten the load of achieving compliance as and when new regulations are introduced.
ISO 42001 outlines how businesses can establish and maintain an Artificial Intelligence Management System (AIMS) within their organisation. Risk management is one of the core components of the standard.
According to Glenn Chisholm, co-founder and CEO of Obsidian Security, “The ISO 42001 standard emphasises risk management and can be applied to include risks associated with AI, including ethical considerations, risk and impact assessments, data privacy, bias, and continuous improvement.”
Chisholm added, “While it doesn’t automatically ensure compliance with other standards, ISO 42001 shares many attributes with standards like the EU AI Act, NIST AI RMF, and others.”
Peter Wood, chief technical officer at Spectrum Search, added: “By adopting ISO 42001, organisations can simplify compliance with forthcoming regulations through a proactive rather than reactive approach. This allows organisations to effortlessly adapt to changing regulatory landscapes, minimising the risk and potential penalties of non-compliance.”
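To illustrate the risk-management core these comments describe, the sketch below shows a toy AI risk register in Python. ISO 42001 does not prescribe this structure; the fields and the simple likelihood-times-impact scoring are assumptions made for the example.

```python
# Minimal, illustrative AI risk register. ISO 42001 does not mandate
# this structure; the fields and 1-5 scoring scale are assumptions.
risks = [
    {"id": "R1", "risk": "Training data contains personal data",
     "likelihood": 4, "impact": 5, "mitigation": "Anonymise before training"},
    {"id": "R2", "risk": "Model outputs reflect demographic bias",
     "likelihood": 3, "impact": 4, "mitigation": "Regular bias audits"},
    {"id": "R3", "risk": "Staff paste secrets into public AI tools",
     "likelihood": 4, "impact": 4, "mitigation": "Usage policy and access controls"},
]

# Rank risks by a simple likelihood x impact score for periodic review.
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f'{r["id"]}: score {r["likelihood"] * r["impact"]} - {r["risk"]}')
```

The value of keeping even a basic register like this is that it forces the risk identification, assessment, and treatment cycle the standard calls for, and it leaves an audit trail when regulators come asking.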
Worldwide Initiatives
International standards are evolving rapidly, and countries including the United States, China, India, and Australia are all in the process of establishing their own regulations.
According to Chisholm, “Many of these standards will likely borrow from each other, so the advantage of aligning with one will likely flow into the other.”
Legal and compliance experts say adherence to the recently introduced EU AI Act ought to be a priority for UK businesses, given that they will often do business with the EU.
“Many UK businesses operate in or with the EU; therefore, alignment with the EU AI Act ensures continued access to this significant market,” Becky White, senior data protection and privacy solicitor at Harper James, told ISMS.online. “Non-compliance could result in barriers to entry or penalties, affecting business operations and competitiveness.”
The EU AI Act takes a risk-based approach, applying its strictest requirements to high-risk applications and the datasets behind them. Its core principles emphasise transparency and accountability.
“The EU AI Act is a good place to start as its core principles of transparency, safety, and data governance are likely to be fundamental for any AI regulation in any region,” according to VIPRE’s Corlette. “This legislation is the first of its kind and is practically laying down the framework for an emerging industry that has no prior enforceable standards.”
Corlette concluded: “History would suggest that regulations developed in other regions would closely resemble this budding EU legislation too. An example is the EU’s GDPR.”
While alignment with the EU AI Act ensures smooth business operations across borders and can avoid potential regulatory conflicts, UK businesses should keep a close eye on regulatory developments closer to home.
Spectrum Search’s Wood advised “hedging bets by closely observing UK-specific regulations and maintaining the flexibility to adapt to both UK and EU requirements.”
Experts advised that UK organisations should position themselves to adapt quickly if and when differences arise between regulatory regimes in different regions.
Harper James’ White commented: “Full compliance with multiple regulatory regimes at the same time can be resource-intensive; therefore, this type of ‘hedging’ strategy allows businesses to allocate resources effectively, balancing compliance with innovation and operational efficiency. By maintaining some flexibility, businesses can pivot and adapt to changes in both UK and EU regulatory environments as and when required.”
Privacy and Governance
Governance and privacy concerns arising from enterprise use of AI go beyond compliance and regulatory matters.
“The training of Gen AI models and databases will often involve processing vast datasets which contain significant amounts of personal data, which can lead to significant privacy and governance risks for UK businesses,” Harper James’ White explained. “This information can sometimes include sensitive or special category data about individuals, the processing of which can inadvertently perpetuate or even exacerbate biases, leading to unfair or discriminatory outcomes. The misuse of the generated content could violate privacy rights.”
Algorithmic bias, where AI systems may unintentionally echo existing biases present in the training data, can be mitigated by regular audits and diverse datasets.
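As a sketch of what a routine audit might check, the snippet below computes per-group selection rates and flags a large gap between them, a basic demographic parity check. The data and the 80% threshold are illustrative assumptions for the example.

```python
from collections import defaultdict

# Hypothetical audit data: (protected group, model decision) pairs,
# where 1 means the model selected/approved the individual.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

# Selection rate per group; a large gap flags potential disparate impact.
rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Illustrative "four-fifths"-style check: flag if the lowest rate falls
# below 80% of the highest (the threshold is an assumption here).
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Audit flag: selection-rate gap exceeds threshold; investigate.")
```

Run on a schedule against live model decisions, a check like this turns bias monitoring from a one-off exercise into the continuous improvement loop that standards such as ISO 42001 expect.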
According to White, businesses can help mitigate these risks by implementing robust data governance practices and focusing on how their employees use AI.
“This includes employing anonymisation techniques, establishing stringent access controls, conducting regular bias monitoring and ensuring that the data used to train the AI models is accurate, complete and representative,” White advised.
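As one illustration of the anonymisation step White describes, the sketch below pseudonymises direct identifiers with a salted hash before records reach a training pipeline. The field names and salt handling are simplified assumptions, and salted hashing is pseudonymisation rather than full anonymisation: free-text fields would still need separate redaction.

```python
import hashlib
import os

# In practice the salt must be kept secret and managed like a key.
SALT = os.environ.get("PSEUDO_SALT", "change-me")

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a truncated salted SHA-256 digest."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "query": "loan terms"}

# Hash the direct identifiers; keep non-identifying fields for training.
safe_record = {
    k: pseudonymise(v) if k in {"name", "email"} else v
    for k, v in record.items()
}
print(safe_record)
```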