Last night the Online Safety Bill received its Second Reading in the House of Commons, which was the first opportunity for MPs to debate its general principles and determine whether it should be approved to proceed.
Our primary interest in the Bill is its potential to emerge as a potent weapon against online abuse. We have seen the damage done to rural communities, and to the professionals and enthusiasts who live and work there, by bullying and harassment from extreme activists in areas related to animal rights. We are currently hearing from victims through our Online Bullying Survey. But the principle applies to all abuse, regardless of its motivation.
Critically, this behaviour is already illegal. As Carla Lockhart MP (Upper Bann, DUP) pointed out,
"a logical cornerstone would be that what is illegal offline – on the street, in the workplace and in the schoolyard – is also illegal online. The level of abuse I have received at times on social media would certainly be a matter for the police if it happened in person. It is wrong that people can get away with it online."
While much attention has centred on the Bill's provisions on "legal but harmful" content, the Bill also includes some useful reforms to the offences surrounding harmful communications. We want to ensure that these offences are a key focus for social media companies, so that they will be expected to be rigorous in taking down this content and, where necessary, removing its perpetrators from their platforms.
Another issue the Bill must tackle is extremist activists posting false critical reviews of businesses, motivated by ideological opposition to aspects of their operations or their owners' personal choices. Care must be taken to avoid stifling legitimate criticism, but false messages of this kind, which are likely to fall within the scope of the new 'False communications' offence, similarly need to be designated as priority offences for all platforms to tackle.
Questions also remain as to how the Bill will combat harassment carried out under the cloak of anonymity. The Bill seeks to make identity verification available to all users of large platforms and to allow them to filter out content from unverified accounts. Part of the intended response, it seems, is for victims to restrict their interactions to other identity-verified users, but this does not address the risk of reputational damage when harmful communications are viewed by third parties.
Provided the law is rigorously enforced, this Bill should strengthen existing deterrence, but the best defence for those suffering harassment is for it not to happen in the first place, or at least for it to be stopped at source. We look forward to continuing our contribution to the scrutiny of this Bill to help ensure this can be accomplished.