
Why AI Is a Risky Business – and What to Do About It

It’s 2023, and AI seems to be everywhere. ChatGPT, the chatbot produced by OpenAI, the research lab Elon Musk co-founded, is writing student essays, churning out books, and even challenging parking fines. In one incident that proves we’re living in an episode of Black Mirror, a particularly soulless university employee even used it to console students after a mass shooting.

How ChatGPT works

Privacy and security experts might be wondering how AI could help them too. Could it, for example, write your privacy policy? The answer is yes – sort of.

To unpack the question more thoroughly, it’s worth understanding how ChatGPT’s newer brand of AI works. It’s based on something called a large language model (LLM). This is an AI model trained on a large corpus of text, typically scraped from the internet. The training process breaks this text down into tokens, which might be whole words or parts of words. Then, it uses vast amounts of computing power to find statistical patterns in how those tokens relate to each other.
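
To make “tokens” less abstract, here’s a minimal sketch in Python using OpenAI’s open-source tiktoken library; the sample sentence and the choice of the cl100k_base encoding (the one used by the GPT-3.5/GPT-4 family) are purely illustrative. The point is simply that the model never sees words or meanings, only numbered fragments.

```python
# A minimal sketch of tokenization using OpenAI's open-source tiktoken
# library (pip install tiktoken). The sample sentence is illustrative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4 models

text = "Could ChatGPT write your privacy policy?"
token_ids = enc.encode(text)                      # text -> list of integer token IDs
fragments = [enc.decode([t]) for t in token_ids]  # each ID maps back to a text fragment

print(token_ids)   # a list of integers; this is all the model ever "sees"
print(fragments)   # fragments: whole words, parts of words, punctuation
```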

When someone prompts the AI with a request (say, “explain cold fusion in the style of Sherlock Holmes”), the model uses the patterns it learned during training to predict, chunk by chunk, which words are likely to come next. This is how it can produce convincing language that appears to follow the concepts outlined in the prompt.
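
That mechanic is easier to see in miniature. The sketch below is a deliberately crude toy, not how GPT-class models actually work under the hood: it counts how often each word follows another in a tiny made-up training text, then “generates” text by sampling statistically likely continuations. Real LLMs use deep neural networks trained on billions of tokens, but the underlying move is the same: predict a plausible next token, with no notion of whether the result is true.

```python
# A toy next-word predictor: a bigram counter standing in for an LLM.
# Everything here (the training text, the function) is an illustrative assumption.
import random
from collections import defaultdict, Counter

training_text = (
    "the model predicts the next word and the next word and "
    "the parrot repeats the words it has seen before"
)
words = training_text.split()

# Count how often each word follows another in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def generate(start, length=10):
    """Build a sentence by repeatedly sampling a statistically likely next word."""
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # fluent-looking output, no understanding behind it
```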

Confident and dumb: A bad combination

The problem with this approach is that the AI doesn’t know what it’s talking about. Researchers have likened it to a ‘stochastic parrot’, which strings together plausible-sounding sequences of words without understanding them. The paper that coined the term, On the Dangers of Stochastic Parrots, points out that “languages are systems of signs, i.e. pairings of form and meaning.” When someone says “rabbit,” you understand the concept of a rabbit and how it relates to various other things (Easter, spring, pets, tasty stew, etc.). “But the training data for LMs is only form,” say the researchers. “They do not have access to meaning.”

That creates problems when you rely on an LLM for factual content. Because these models choose words based on statistical probability rather than knowledge, they can confidently assert things that aren’t true. In AI jargon, this is known as ‘hallucination’.

People have experimented with using ChatGPT to write a privacy policy, with questionable results. They found, predictably, that the outcomes were poorer when the LLM had to fill in more blanks.

Simple AI prompts returned policies that weren’t based on specific privacy laws and didn’t reflect specific business practices. The output improved as testers added more information, to the point where it produced an impressive result. By that point, however, as one tester put it, “you will need to spend hours, days, or even weeks first determining what privacy laws apply to you, analyzing the disclosure requirements of those laws and then providing this information to ChatGPT, along with your specific business practices.” Which raises the question: why not just do the whole thing yourself in the first place?

Others have described ChatGPT and similar LLMs as “calculators for words”. They’re only really useful for writing if you put in a lot of work yourself and carefully verify the results.

People’s use of AI is often also both confident and dumb

We can’t trust AI tools, but we can trust people to misuse them. We’ve already seen the fallout from poorly conceived and inappropriately applied AI, with consequences that affect real lives.

Authorities have used AI-powered systems shown to be biased when recommending judicial sentences. The Dutch tax authority’s misuse of AI software led the agency to unfairly accuse parents of fraud, while blind belief in flawed AI-powered facial recognition algorithms has led to the wrongful arrest of innocent people. Bias also showed up in Amazon’s AI-powered recruitment system, which was scrapped after it kept failing to recommend qualified women for technology jobs.

How can we approach this problem? Some experts have already called for a six-month moratorium on developing the most powerful AI systems, to give us a chance to get our heads around the whole sorry mess before things get even more out of hand.

No shortage of guidelines

We already have plenty of responsible AI guidelines to choose from, and they say similar things. Advocacy group AlgorithmWatch keeps an inventory of AI guidelines from various institutions, including the EU’s Ethics Guidelines for Trustworthy AI. One that wasn’t included at the time of writing was the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, released in January 2023. Like most other guidelines, the NIST framework sets out to provide some guardrails for the development and application of AI. It also echoes principles common to other ethical guidelines, such as transparency, accountability, privacy, security, and resilience.

Per the OECD, there are over 800 AI policy initiatives across almost 70 countries at the time of writing, but there’s a big difference between binding and non-binding policy. Compliance with most initiatives, including NIST’s, is voluntary.

A look at companies like Clearview AI shows the importance of legally enforcing responsible AI policies rather than just asking nicely. The startup, which sells facial recognition services to law enforcement, powers them with billions of photographs pilfered from social media networks without people’s consent.

Not enough regulation

Countries are divided over how strictly to regulate AI, and all face the same challenge: too little regulation, and they risk unpredictable and irresponsible uses of the technology; too much, and they might drive potentially lucrative AI innovators elsewhere.

The US has not yet regulated AI. Congress proposed the Algorithmic Accountability Act in 2022, but it died without a vote. The FTC has promised to use its existing powers to protect US residents from egregious uses of AI. It has also issued an advance notice of proposed rulemaking that could herald sweeping new trade rules covering automated decision-making and surveillance systems.

The EU has been more aggressive, proposing the AI Act, which advocates hope will pass this year. It would cover all AI products and services, classifying them according to their perceived risk level and applying rigorous assessments and data compliance rules to the higher-risk ones. Transgressors would face hefty fines.

The UK has taken what it euphemistically calls a “pro-innovation approach”. Instead of appointing a single new AI regulator, it will divide responsibility for regulating AI between existing regulators and won’t initially give them any new legal powers to do so.

As politicians walk the tightrope between ethics and economic benefit, the future of AI hangs in the balance. One thing is certain: leaving it up to technology companies alone to act responsibly would be…irresponsible.
