Britain’s media and privacy regulators have issued a stern warning to major social media companies, urging them to do more to keep children off their platforms. The regulators say the companies are failing to enforce their own minimum age restrictions, leaving children exposed to potentially harmful online experiences.
The UK government has been weighing stricter rules to limit children’s exposure to social media, including a possible ban on users under 16. Such a move would echo Australia, which recently legislated to bar under-16s from social media platforms.
Ofcom, the UK’s communications regulator, and the Information Commissioner’s Office (ICO), the country’s data protection watchdog, voiced particular alarm over algorithm-driven content feeds, which can push addictive or damaging material at young users and which raise serious questions about whether platforms design their products with child safety in mind. Melanie Dawes, Ofcom’s chief executive, said that although these platforms are household names, they have failed to put children’s welfare at the core of their operations, and warned that Ofcom is prepared to take enforcement action if they do not improve swiftly.
Under Britain’s Online Safety Act, Ofcom has given companies including Meta’s Facebook and Instagram, Roblox, Snapchat, ByteDance’s TikTok and Alphabet’s YouTube until April 30 to set out concrete plans for strengthening age verification. Those plans must also cover restricting contact between children and strangers, making content feeds safer, and ending the testing of new features on underage users. In an open letter to the same platforms, the ICO urged them to adopt “modern, viable” age-assurance technologies that effectively stop children under 13 from accessing services not intended for them.
Paul Arnold, the ICO’s chief executive, said that with today’s technology there is no legitimate excuse for failing to implement reliable age verification, and called on social media companies to adopt such tools immediately. A Meta spokesperson responded that the company already uses artificial intelligence to detect and estimate users’ ages, and that teenage users are placed in accounts with built-in safety protections. The spokesperson argued that age verification should instead be handled at the app store level, sparing families from repeatedly sharing personal information across different platforms.
A YouTube representative said the platform offers tailored, age-appropriate experiences, but expressed surprise at Ofcom’s shift away from a risk-based regulatory approach, urging the watchdog to focus its efforts on high-risk services that are clearly failing to comply with existing law. Roblox, Snapchat and TikTok had not commented at the time of reporting.
For non-compliance, Ofcom can impose fines of up to 10% of a company’s qualifying global revenue, while the ICO can levy penalties of up to 4% of worldwide annual turnover. Underscoring the seriousness of enforcement, the ICO recently fined Reddit nearly £14.5 million for inadequate age verification and unlawful processing of children’s data.
The crackdown reflects a broader global push to hold social media companies accountable for the safety and privacy of their youngest users. As digital platforms play an ever more central role in children’s lives, pressure is mounting on them to build effective safeguards against exploitation and exposure to harmful content.
