
Online Safety Bill – A Cleaner Internet at What Price?

Since the Online Safety Bill was first proposed, debate has continued in and out of Westminster over its contents. Dan Raywood looks at its current state.

It is around two years since the first draft of the Bill was introduced under then-UK Prime Minister Boris Johnson, with the aim of better regulating online content.

Primarily intended to control harmful and offensive content, the Online Safety Bill follows on from 2019’s Online Harms White Paper. The issue has been especially prominent in the UK since the high-profile case of Molly Russell, a 14-year-old who took her own life in 2017 after viewing suicide and self-harm content online.

The intention of the Bill is to create a duty of care for websites and online platforms, requiring them to take action against both illegal and legal-but-harmful content, with fines of up to £18 million or 10% of annual turnover, whichever is higher, for those who do not comply.

Why The Controversy?

All in all, this sounds reasonably acceptable. It intends to clean up the internet and better protect vulnerable people from potentially harmful and offensive content, so why has it caused such controversy?

Well, on one side, the Bill proposes tools giving adults greater control over the types of content they see and whom they engage with online, helping to reduce the likelihood that they will encounter certain types of content set out in the Bill. The Bill will already protect children from seeing this content.

On the other hand, the Bill has been described as an attempt to rein in content that is generally agreed to cause harm, even though it is not against the law.

While the UK already has laws prohibiting hate speech and threats, the Bill defers to Ofcom to determine whether large tech platforms suitably fulfil their duty of care by conducting risk assessments and explaining how they mitigate risk.

The human rights organisation Article 19 says in its blog on the Bill that this “outsourcing” of decisions on illegality will require companies to assess and decide whether their users’ speech is legal or not.

“This is deeply problematic as only independent judicial authorities should be given the power to make such a determination,” it says. “In addition to the legitimate concerns of outsourcing decisions on the legality of users’ speech to private actors, we note that in most cases, these assessments are extremely complex and context-dependent and should therefore be made by trained individuals.”

Ultimately, this raises the question of who determines what is and is not illegal, as online platforms deploy algorithmic moderation systems, such as automated hash-matching and predictive machine learning tools, to moderate content at scale. Because these tools lack the sophistication to distinguish legal from illegal content reliably, Article 19 argues, “they routinely identify content as illegal and remove vast amounts of legitimate content.”
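To see why hash-matching can both under- and over-block, consider a minimal sketch of the matching step. Everything here is illustrative: the blocklist contents and function names are assumptions, and real systems such as Microsoft’s PhotoDNA use perceptual hashes that tolerate re-encoding and resizing rather than the exact cryptographic hash shown.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known prohibited images.
# Exact hashing is used only to illustrate the lookup; production
# moderation relies on fuzzier perceptual hashing.
BLOCKLIST = {hashlib.sha256(b"known-abusive-image-bytes").hexdigest()}

def is_flagged(content: bytes) -> bool:
    """Return True if the content's digest appears in the blocklist."""
    return hashlib.sha256(content).hexdigest() in BLOCKLIST

print(is_flagged(b"known-abusive-image-bytes"))  # True
print(is_flagged(b"harmless holiday photo"))     # False
```

The trade-off is visible even in this toy: an exact match misses any near-duplicate (a single changed byte evades it), while loosening the match to catch near-duplicates is precisely what causes legitimate content to be swept up.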

Breaking End-to-End Encryption – The Data Privacy Conundrum

The other sticking point in the Online Safety Bill has been provisions that could break end-to-end (E2E) encryption, or at least grant access to the conversations that take place via these secure applications.

The debate about ‘backdoors’ into apps like WhatsApp and Signal is hardly new. Predecessors to Boris Johnson, such as David Cameron, were discussing this concept in the middle of the last decade. However, this could now be in a government Bill and, for some organisations, has proved to be the main talking point.

According to Article 19, section 103(2)(b) will grant Ofcom the power to order a provider of a user-to-user service to use ‘accredited technology’ to identify child sexual exploitation and abuse (CSEA) content – whether such content is communicated publicly or privately.

Article 19 claims: “The complete failure of the Bill to make any meaningful distinction between the requirements on public platforms as opposed to private messaging services means that there is a real risk that offering end-to-end encryption will constitute a violation of the Bill.”

Secure Communications Under Threat

The importance of secure communications has been widely cited: human rights groups, journalists, whistle-blowers, victims of domestic abuse and individuals from minority groups are all regular users of the likes of WhatsApp, Signal and Telegram. The platforms themselves have spoken out on the Bill, with some saying they would rather be blocked in the UK than undermine their technology, and that they could even stop providing services in the UK if the Bill required them to scan messages.

A policy brief from the Open Rights Group claims Ofcom could force chat platforms to use government-accredited content moderation technology. While the Bill is silent regarding the precise implementation, it is generally understood to mean a form of client-side scanning, where the software would reside on users’ smartphones.
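The client-side scanning idea described above can be sketched in a few lines. Everything here is illustrative – the function names, the on-device hash database and the toy `encrypt` stand-in are assumptions, not any vendor’s actual design – but it shows the crux of the objection: content is inspected on the device before the E2E encryption step ever runs.

```python
import hashlib

# Illustrative on-device database of digests of known prohibited media.
ON_DEVICE_HASH_DB = {hashlib.sha256(b"flagged-media").hexdigest()}

def encrypt(data: bytes) -> bytes:
    # Stand-in for a real E2E encryption step (XOR is NOT encryption).
    return bytes(b ^ 0x5A for b in data)

def send_attachment(attachment: bytes) -> str:
    # The scan happens BEFORE encryption: the transport stays E2E
    # encrypted, but the plaintext has already been inspected on-device,
    # which is the basis of the privacy objection.
    if hashlib.sha256(attachment).hexdigest() in ON_DEVICE_HASH_DB:
        return "blocked"
    encrypt(attachment)  # ciphertext would be handed to the transport
    return "sent"

print(send_attachment(b"holiday photo"))  # sent
print(send_attachment(b"flagged-media"))  # blocked
```

This is why critics argue that such a scheme undermines E2E encryption in effect even though the encryption algorithm itself is left untouched: the confidentiality guarantee no longer starts at the sender’s device.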

Dr Monica Horten, policy manager for freedom of expression at the Open Rights Group, said: “We are therefore looking at measures that will result in mass surveillance of communications services used by more than two-thirds of the UK population for private messaging, including video and voice calls. They will interfere with UK citizens’ privacy and freedom of expression.”

Horten claims, “Parliament is being asked to legislate for disproportionately intrusive measures, affecting our privacy and freedom of expression, without any specific information about the impact on either users or providers.” People and companies have a right to know what the measures are and how they can take action to avoid penalties before the Bill moves forward.

Achieving Data Privacy And Online Safety – Who Has The Answers?

Undoubtedly, the rollout of the Online Safety Bill has been tricky. Many argue that a better internet, free from offensive material, is a good thing; it is the way these goals are being pursued that is causing so much concern. On the one hand, the NSPCC is calling for “effective action” against abusive material and has called on Meta (the parent company of WhatsApp and Facebook) to pause plans to roll out E2E encryption across its Facebook and Instagram messenger services. Yet it has also stated that “the Online Safety Bill should be seen as an opportunity to encourage companies to invest in technological solutions to end-to-end encryption that protects adult privacy and keep children safe.” So is there a middle ground that doesn’t impact users’ privacy, doesn’t insist on breaking encryption technology, but still works for online safety?

Of course, the debate on privacy is the loudest, and human rights organisations have led the opposition to breaking E2E encryption. Still, advocates of the Bill will argue that encrypted platforms are the ones most used to exchange abusive material.

Determining how this will end, and who sits on the right side of the argument, is difficult. Since it was first introduced, the Bill has sparked new discussions and prolonged existing debates, and it will be interesting to see how it is resolved.
