
How States Are Taking a Lead on AI Regulation

The past year has been one of AI’s biggest since scientists first proposed the concept in the mid-1950s. AI vendors have continued to innovate, with more powerful models appearing regularly, and tech vendors are busily folding them into their core offerings. The White House has issued an Executive Order to try to put some guardrails around this powerful technology, but without congressional support its reach is limited, and lawmakers have done very little to help. There is still no overarching AI regulation at the federal level.

This has left state governments to fill the vacuum, and they have been stepping up to the challenge. According to the Business Software Alliance, state legislators introduced 440% more AI-related bills in 2023 than in the prior year. These bills have addressed a variety of AI-related issues, including the potential for bias, transparency around the data used to train AI systems, and specific threats such as deepfakes.

A flood of state measures

Regulatory measures taken by states vary widely. Some, such as Texas, have focused on the use of AI by the state itself. Its House Bill 2060, signed into law last year, established an AI advisory council to review the state’s use of AI systems and assess the need for a code of ethics.

Other AI regulations cover private-sector use of these systems, often nestled within consumer privacy laws. Oregon’s SB619, which comes into effect on July 1 this year, includes an opt-out provision for data used in automated profiling. Montana’s SB384 consumer privacy legislation, introduced in February 2023, carries a similar provision, as do Virginia’s Consumer Data Protection Act and New Hampshire’s SB255. All of these have already been enacted, as has Tennessee’s HB1181, which mandates data protection assessments for data profiling.

Some laws focus on generative AI. Utah’s SB149, signed into law in March this year, mandates that individuals be informed when they are interacting with generative AI in a regulated occupation (one that requires a license or a state certificate).

Other states have attempted to go beyond profiling opt-out provisions with further AI-related protections. Connecticut, which brought such a provision into force in its Connecticut Data Privacy Act last year, attempted another bill, SB2, which would have regulated the development of automated decision tools and AI systems themselves. It would have demanded comprehensive documentation of such systems and their behavior, along with the data used during development, including training data. It would also have required risk assessments and transparency around their deployment.

Connecticut’s state Senate passed SB2, but it failed to reach a House vote by its May deadline, due in part to Governor Ned Lamont’s promise to veto the bill.

However, Colorado had better luck pushing the boundaries of AI regulation. A week after SB2 fell at the final hurdle, the Colorado state legislature passed SB24-205, which specifically regulates AI. The law introduces an ethical framework for the development of high-risk AI systems, forces disclosure of their use, and gives consumers the chance to challenge their results and correct any personal data that the AI system used. High-risk AI systems, which the law defines as those leading to “consequential decisions,” will also be subject to a risk assessment and annual review.

Other states have zeroed in on specific uses of AI. Illinois’ Artificial Intelligence Video Interview Act (820 ILCS 42/1), which took effect in 2020, requires employers to get consent from job candidates before using AI to analyze their video interviews.

There are other aggressive bills in the hopper. In May, California’s Senate passed SB1047 by 32 votes to one. The bill, which must pass the state Assembly by August 31, echoes some of the measures in the White House’s Executive Order on AI. Notably, it imposes safety measures such as red-teaming and cybersecurity testing for large AI models. It would create a dedicated office to regulate AI, along with a public cloud for training AI models, and would require a “kill switch” to be built into AI models so they can be disabled should anything go wrong.

State laws pushing the boundaries of AI regulation will continue to emerge as long as federal lawmakers sit on their hands. There have been some promising moves, such as the introduction of Senator Schumer’s SAFE Innovation Framework to investigate the responsible use of AI, but that initiative is moving at a glacial pace. A proposed Federal Artificial Intelligence Risk Management Act would also force federal agencies to adopt the NIST AI Risk Management Framework and would create AI acquisition guidelines for those agencies. Right now, however, the states are where the action is.

How can you prepare?

How can companies begin to prepare for what is already becoming a patchwork of state-level regulation around AI? The ISO 42001 standard serves as a useful reference. It outlines the requirements for an Artificial Intelligence Management System (AIMS) that includes policies, procedures, and objectives as part of a governance structure for the use of AI systems. It also urges transparency and accountability in AI-based decision making, while helping organizations to identify and mitigate AI-related risks.

As state rules proliferate, the ISO standard offers a yardstick for demonstrating good practice and forethought during an uncertain time for AI regulation.
