A recent revision to the (draft) Online Safety Bill means that executives whose companies fail to comply with the regulator’s information requests could face penalties, including jail, just two months after the bill becomes law.
The Online Safety Bill
The UK government’s Online Safety Bill, published in May 2021 and now introduced to parliament, is (draft) legislation that’s designed to place a ‘duty of care’ on internet companies which host user-generated content to limit the spread of illegal content and “legal but harmful” content on these services.
The idea of the Online Safety Bill is essentially to prevent the spread of illegal content and activity (e.g. images of child abuse, terror material, and hate crimes), as well as to protect children from harmful material, and to protect adults from legal but harmful content.
The Bill applies to social media platforms, video-sharing platforms, search engines, and other tech services, and requires them to put in place systems and processes to remove illegal content as soon as they become aware of it. The Bill also requires these services to take additional proactive measures with regard to the most harmful ‘priority’ forms of illegal online content.
Ofcom’s Expanded Role
The Bill, which is due to become law later this year, will use Codes of Practice to regulate the behaviour of social media companies and will be enforced by the media and communications regulator, Ofcom. The regulator will have the power to fine rule-breakers as much as 10 per cent of their global annual turnover. Ofcom will also have the power to force companies failing to comply to improve their practices, and even to block non-compliant sites.
Dame Melanie Dawes, Ofcom Chief Executive, said of the introduction of the Bill to Parliament (March 17): “Today marks an important step towards creating a safer life online for the UK’s children and adults. Our research shows the need for rules that protect users from serious harm, but which also value the great things about being online, including freedom of expression. We’re looking forward to starting the job”.
Punishing Named Executives
One recent aspect of the debate around the Online Safety Bill, in line with the idea of bringing about a new era of accountability, has been the naming and punishing of specific individuals/executives within offending companies to make them more accountable. The draft Bill, for example, already included the ability to impose criminal sanctions on named tech executives.
Was Two Years – Could Be Two Months!
These sanctions (i.e. prison sentences) were originally due to be delayed for two years (a grace period) after the laws are passed, but some UK MPs have been asking the government to remove this long grace period before criminal sanctions can be imposed.
Digital Secretary Nadine Dorries, who has personal experience of being targeted by trolls, was recently reported to have favoured a six-month timeline (grace period) before the imposition of prison terms for those tech execs who fail to remove “harmful algorithms”.
The most recent revisions to the Bill, however, mean that when it becomes law, the time frame for being able to apply criminal liability powers against senior executives in social media and tech companies could be down to as little as two months.
In a recent press release (March 17), the government said: “Today the government is announcing that executives whose companies fail to cooperate with Ofcom’s information requests could now face prosecution or jail time within two months of the Bill becoming law, instead of two years as it was previously drafted.”
The punishment for not cooperating with Ofcom (including falsifying or destroying data) could see offenders facing up to two years in prison, or a substantial fine.
The kinds of priority offences listed in the draft bill are terrorism and child sexual abuse and exploitation. The Department for Digital, Culture, Media and Sport’s Secretary of State also has powers to add further priority offences (with Parliament’s approval) via secondary legislation once the bill becomes law.
As it stands, the Bill has been written to tackle online safety in areas such as protecting children from harmful online content, limiting users’ exposure to illegal content, and requiring online platforms where users can post their own content to ensure they ‘protect children, tackle illegal activity and uphold their terms and conditions’.
More recent additions to areas covered by the Bill include:
– Making social media platforms tackle ‘legal but harmful’ content (as defined by Parliament).
– Tackling paid-for scam adverts on social media and search engines.
– Ensuring that there are 18+ age verification checks on pornography-hosting sites.
Also written into the Bill are measures to address anonymous trolls online, and the criminalisation of cyber flashing.
What Does This Mean For Your Business?
With the Bill having been strengthened in recent months to bring about greater accountability among executives of social media companies, the hope is that this will make them take compliance more seriously and treat it as a priority. The threat of possible prison terms for executives has now been backed up with a dramatically reduced ‘grace period’ – two months instead of two years. The hope is that this will really drive home the message that the UK government now intends to get tougher about online safety and how social media platforms protect their users. The Bill is now being debated in Parliament, a signal that it could soon become law. Social media platforms, freedom and rights groups, child safety organisations, law firms, and tech and safety commentators will now be watching closely to see which aspects of the Bill make it into law and what changes tech businesses will need to make to comply.