The Online Safety Bill returns to Report Stage in the Commons on 16th January, and is widely expected to reach the Lords by the end of the month or the beginning of February. We anticipate it could complete its legislative passage by June.
At the end of last year, a widely publicised change to the Online Safety Bill took out the so-called "legal but harmful" clauses for adults. The government has claimed this is protecting free speech.
However, in their place, new clauses have been shunted in that create a regime of state-mandated enforcement of tech companies' terms and conditions. This raises new concerns about entrenched power for the tech companies and a worrying lack of transparency around the way that the regulator, Ofcom, will act as enforcer-in-chief.
Whatever they say goes
It is not a good look for free speech. The change does not alter the underlying framework of the Bill, which establishes rules by which private companies will police our content. It does, however, shift the emphasis away from merely "taking down" troublesome content and towards "acting against users".
For policy geeks, the change removed Clauses 12 and 13 of the Bill, concerning "content harmful to adults". The clauses regarding harmful content for children, Clauses 10 and 11, remain.
The two deleted clauses have been replaced by five new clauses addressing the terms of service of the tech companies. If their terms of service say they will "act against" content of "a particular kind", then they must follow through and do so. Ofcom will enforce this.
The new clauses emphatically refer to "restricting users' access" as well as taking down their content or banning users from the service. The language of "restricting access" is troubling because it implies a policy of limiting free speech, not protecting it. It marks an apparent shift in emphasis away from taking down troublesome content and towards preventing users from seeing it in the first place. It is an environment of sanctions rather than rights and freedoms.
There is no definition of "a particular kind" and it is up to the tech companies to identify the content they would restrict. Indeed, they could restrict
access to whatever they like, as long as they tell users in the terms of service.
The political pressure will be on them to restrict the content that the government dictates. It will not be done by the law, but by backroom
chats, nods and winks over emails between the companies, Ofcom and government Ministries.
Joining the dots: Ofcom has a legal duty to "produce guidance" for the tech companies with regard to compliance, and it takes direction from the two responsible Ministries, DCMS and the Home Office. A quick call expressing the Minister's concerns could be used to apply pressure, with the advantage that it would skirt publicly accountable procedures. "Yes, Minister" would morph into real life.
Restricting access to content
The new clauses do attempt to define "restricting users' access to content". It occurs when a tech company "takes a measure which has the effect that a user is unable to access content without taking a prior step" or "content is temporarily hidden from a user". It's a definition that gives tech companies plenty of room to be inventive about new types of restriction. It also appears to bring in the concept of age-gating: a restriction that requires people to take the step of establishing their identity or age-group before being allowed access.
The new provisions also state that tech companies "must not act against users except in accordance with their terms and conditions", but the repetition of restrictive language suggests the expectation is that they will restrict. There is no recognition of users' freedom of expression rights: users may only complain about breach of contract, not breach of rights.
These restrictive clauses should also be seen in light of another little twist of language by the Bill's drafters: "relevant content". This is any content posted by users onto online platforms, but also any content capable of being searched by search engines, which are in scope of the Bill. The mind boggles at how much over-reach this Bill could achieve. How many innocent websites could find themselves demoted or down-ranked on the basis of the government's whim of the day?
"Relevant content" is applicable when users seek to complain. But how can users
complain about their website being down-ranked in a search listing when they don't have any confirmation that it has happened? The Bill makes no provision for users to be informed about "restricted access".
The change also fails to take account of potential cross-border effects, which will be felt especially by search functions. The Bill limits its jurisdiction to what it calls "UK-linked" content or web services, but the definition is imprecise and includes any content that is accessible from the UK. Online platform terms and conditions are usually written for a global user base. It is not clear whether this provision could over-reach into other jurisdictions, potentially banning lawful content or users elsewhere.
Failure of policy-making
All of this reflects a failure of policy-making. These platforms are important vehicles for the global dissemination of information, knowledge and news. The restrictions that online platforms have in their armoury will limit the dissemination of users' content in ways that are both invisible and draconian. For example, they could use shadow bans, which work by limiting the ways content is shown in newsfeeds and timelines. The original version of the Bill as introduced to Parliament did acknowledge shadow bans, and even allowed users to complain about them. The current version does not.
Overall, this is a failure to recognise that the vast majority of users are speaking lawfully. The pre-Christmas change to the Bill puts them at risk not only of their content being taken down but also of their access being restricted. Freedom of expression is a right to speak and to be informed. This change affects both.