The Online Safety Bill: what you need to know
It has been six years since Professor Lorna Woods and William Perrin penned their framework for what would become the Online Safety Bill on the back of a Pret à Manger sandwich wrapper.
Since then, and since its introduction as a white paper in 2019, the framework has undergone numerous rewrites and has gained notoriety as one of the more controversial pieces of recent legislation, prompting heated debate and multiple amendments. Despite these difficulties, it was announced on 19 September that the bill had passed through the House of Lords without further amendment and would receive Royal Assent.
Targeted content
At face value, and as the Government has been keen to remind the public, the bill’s overarching purpose is to protect UK citizens, and above all their children, from the more dangerous aspects of the internet.
While the “legal but harmful” content that originally fell under the scope of the bill was removed last year after fierce debate, the legislation still grants the Government extraordinarily wide powers to investigate and curb illegal content on the internet. This will include material that promotes child exploitation, illegal immigration and human trafficking, animal cruelty, and terrorism.
Content that promotes self-harm or suicide, especially content aimed at children and teenagers, also falls within the bill’s scope, following a long and well publicised campaign by parents and survivors demanding that more be done to protect the most vulnerable in our society.
Websites will need to create stronger age verification procedures, with social media companies liable for checking that users are not underage. Businesses will also have to increase scrutiny of their advertisers, with fines being meted out to businesses for allowing any fraudulent advertising on their websites.
Social media companies have been quick to highlight the considerable burden this places on businesses: in a climate in which websites often host hundreds of advertisers, sometimes contracted through third parties, any oversight, however understandable, can be met with a punitive fine or criminal prosecution.
The bill sets out a three-pronged protection plan: ensuring any illegal content is removed; placing a legal responsibility on companies to enforce the terms of the bill; and giving users the ability to filter content they do not wish to see online.
New provisions also strengthen existing “revenge porn” legislation, making it easier for victims to secure a conviction against those who share intimate material without consent and further criminalising the non-consensual creation and sharing of intimate “deepfakes”.
Censorship risk?
The legislation will still make a distinction between category one companies, which are seen as higher-risk entities and will thus receive the most scrutiny, and category two companies, which will be monitored and controlled but to a lesser extent. However, in both categories, the obligation to police and filter content will lie solely with the companies.
Ofcom has been confirmed as the regulator, with the ability to issue fines of up to £18 million or 10% of the offending company’s global annual revenue, whichever is greater. Companies and, in more serious cases, individual executives can be held criminally liable. In the most extreme instances, Ofcom will have the power to ban the offending platform from operating within the United Kingdom. This has led many to speculate that the true scope of the bill will only be realised in the courts through precedent, just one of the many concerns that have been raised.
The harsh penalties, coupled with the wide scope and the uncertainty of how such legislation will work in practice, should spur companies to begin preparing now.
While the Government has worked hard to ensure the powers granted by the bill are clearly defined, many have argued that, in an effort to “future proof” it, the bill is simply too wide-reaching.
Groups such as Wikimedia have voiced concerns that the bill could mean much of their educational material falling behind an age verification wall or being blocked altogether, on the basis that it could be classified as promoting explicit material to children.
While the Government has assured such groups that educational materials would be exempt, Lucy Crompton-Reid, chief executive of Wikimedia UK, has noted that citizens should be wary of relying only on future goodwill and interpretation of the legislation.
These concerns are exacerbated by recent trends of further Government involvement in traditionally independent regulatory bodies, and the explicit powers set out in the bill for the Secretary of State to direct Ofcom in the exercise of its functions.
This direct Government involvement has also raised concerns that the bill could lead to censorship disguised as protecting citizens from harmful material, limiting educational and informational access for UK citizens.
Warnings
While the bill has been championed by groups such as the NSPCC and the Center for Countering Digital Hate, many have warned of the dangers such wide-reaching legislation may pose, especially when it concerns technology whose effects even its developers are still coming to understand.
Criticisms from the likes of Apple and Meta may be easy to dismiss as self-interested parties resisting further oversight. However, more notice should perhaps be taken when the executive director of the Open Rights Group – an NGO committed to monitoring privacy, censorship and surveillance – has warned that the bill is "a huge threat to freedom of expression with tech companies expected to decide what is and isn't legal, and then censor content before it's even been published”.
Ciaran Martin, former head of the UK’s National Cyber Security Centre, has accused the Government of “magical thinking” and asserted that the bill, and the infrastructure the Government is demanding around it, will only build “a door that doesn’t currently exist, not into the app… but into devices, which could be used or misused by people who aren’t interested in protecting children”.
The fear is that not only will companies be forced to implement legislation written by those who may be removed from the realities of the industry, but that they will also have to develop new methods and technology to comply with it, or find themselves suffering very real consequences. That is not to mention the wider privacy concerns the bill has raised, as it may give the Government unfettered access to a host of media and the devices that host them.
The wide-ranging reactions to a bill that is ostensibly meant to protect the most vulnerable from the darker areas of the internet underline the nuanced and multi-faceted issues that face the data and technological legislative landscape.
Coupled with apprehensions over the practicalities of the bill, concerns have been raised regarding the Government’s power to force platforms that offer end-to-end encryption to aid investigators in monitoring encrypted messages. Doing so would, by definition, mean the messages are no longer end-to-end encrypted, a guarantee upon which platforms such as WhatsApp and Signal have built their reputations and consumer trust.
Work to be done
The Government has acknowledged these concerns, and much effort has gone into working with companies to find a way forward. A statement released by Lord Parkinson on 9 September restated the Government’s commitment to privacy and legal encryption, stating that Ofcom will work closely with technology platforms to find solutions that preserve privacy as far as possible.
Furthermore, assurances have been made that only technology that has been accredited and meets a minimum standard of accuracy will be used to scan encrypted messages, and only in an effort to uncover child exploitation and abuse.
Perhaps most importantly, Lord Parkinson added that if no technology exists that can preserve the privacy of all unrelated encrypted messages, then no such monitoring will occur until appropriate technology has been developed to a standard that satisfies the Government, technology companies and other stakeholders.
It has become a cliché to say that technology moves too quickly to fully grasp, let alone legislate for. The Online Safety Bill is no exception. As the UK attempts to position itself as a global technology hub, a balance must be struck between protecting citizens and fostering innovation. While the bill creates a much-needed framework to begin policing the worst excesses of the internet, it has, even by the Government’s own admission, left many crucial details to be determined. While some of these will undoubtedly be left for the courts to refine, businesses will have to work closely and meticulously with knowledgeable and experienced lawyers as they navigate the complexities of the Online Safety Bill.