The communications watchdog has been accused of backing big tech over the safety of under-18s after the children’s commissioner for England criticised new measures for tackling online harms.
Rachel de Souza said she warned Ofcom last year that its proposals for protecting children under the Online Safety Act were too weak. New codes of practice issued by the watchdog on Thursday have ignored her concerns, she said.
“I made it very clear last year that its proposals were not strong enough to protect children from the multitude of harms they are exposed to online every day,” de Souza said. “I am disappointed to see this code has not been significantly strengthened and seems to prioritise the business interests of technology companies over children’s safety.”
De Souza, whose government-created role promotes and protects the rights of children, said she had received the views of more than 1 million young people, who said the online world was one of their biggest concerns. The codes of practice would not allay those fears, she said. “If companies can’t make online spaces safe for children, then they shouldn’t be in them. Children should not be expected to police the online world themselves.”
Measures announced by Ofcom include:
- Requiring social media platforms to deploy “highly effective” age checks to identify under-18s.
- Ensuring algorithms filter out harmful material.
- Making sure all sites and apps have procedures for taking down dangerous content quickly.
- Ensuring children have a “straightforward” way to report content.
From 25 July, sites and apps covered by the codes must implement those changes – or use other “effective measures” – and risk fines for breaching the act. The measures apply to sites and apps used by children, ranging from social media to search and gaming.
Last year de Souza published a response to an Ofcom consultation on protecting children from online harm in which she made several recommendations including regular consultations with children.
The Molly Rose Foundation, a charity established by the family of Molly Russell, the British teenager who took her own life after viewing harmful online content, also criticised the measures, which it said were “overly cautious”. The foundation said flaws in the codes included a lack of annual harm reduction targets.
Beeban Kidron, a crossbench peer and online safety campaigner, said the measures would mean “significant changes” to what children see in their feeds in terms of content related to pornography, suicide, self-harm and eating disorders, but the approach of Ofcom was “timid and unambitious”.
Ofcom rejected de Souza’s criticism. “We don’t recognise this characterisation of our rules, which will be transformational in shaping a safer life online for children in the UK,” said a spokesperson.
Melanie Dawes, Ofcom’s chief executive, said the measures were a “reset” and companies failing to act would face enforcement. “They will mean safer social media feeds with less harmful and dangerous content, protections from being contacted by strangers and effective age checks on adult content,” she added.
The Duke and Duchess of Sussex called for stronger protections for children online after unveiling a memorial in New York to young people who lost their lives due to the harmful effects of social media. The duke told BBC Breakfast “life is better off social media” and said “enough is not being done”.
The technology secretary, Peter Kyle, revealed he was considering a social media curfew for children after TikTok’s introduction of a feature that encourages under-16s to switch off the app after 10pm. Kyle told the Telegraph he was “watching very carefully” the impact of the curfew feature. “These are things I am looking at,” he said.
Kyle said the Ofcom codes should be a “watershed moment” that turned the tide on “toxic experiences on these platforms”.
Under the children’s codes, online platforms will be required to suppress the spread of harmful content, such as violent, hateful or abusive material and online bullying. More seriously harmful content, including that relating to suicide, self-harm and eating disorders, will need to be kept off children’s feeds entirely, as will pornography.
The codes require platforms at risk of showing harmful content to children, such as social media networks, to carry out age checks on users. Those checks could take the form of facial age estimation or matching a face to an uploaded ID document. Once the platform has gauged whether a user is under 18, it can then deploy measures to ensure they have a safe online experience.
Legal experts said some of the measures indicated a tougher approach than under initial Ofcom proposals. Companies are now expected to exclude from children’s feeds content that “potentially” fits into the most damaging category of material. “This reframing will likely cause more content to be captured by this measure,” said Ria Moody, a managing associate at the law firm Linklaters.
If companies fail to comply with the requirement to protect children from harmful content, Ofcom can impose fines of up to £18m or 10% of global revenue, whichever is greater. In extreme cases, Ofcom can ask a court to block the site or app in the UK. Senior managers at tech companies will be criminally liable for repeated breaches of their duty of care to children and could face up to two years in jail if they ignore enforcement notices.