On October 26, 2023, the UK passed a law. On July 25, 2025, it started to bite.
Today the UK Online Safety Act’s most consequential provisions go live. Every platform accessible to UK users, including social media, search, video sharing, forums, dating, and adult sites, must now meet Ofcom’s Protection of Children Codes. This is regulation with teeth: fines of up to £18 million or 10% of global turnover, whichever is greater, potential service bans, and, for the first time, criminal liability for senior executives who fail to comply. The shift isn’t just legal. From today, digital systems must carry proof of age, not as a polite request but as a condition of operation. Age assurance has moved from back-end infrastructure to front-page governance. And what that really means is simple: the UK has just hardwired child protection into the internet’s commercial plumbing.
Ofcom has made it clear: anything less than “highly effective” age assurance will not meet the standard. That rules out every checkbox, self-declaration, or passive age estimate the big platforms have used for decades.
In their place, only verified methods are permitted:
- Live facial age estimation
- ID document matching
- Mobile operator data
- Verified credit/debit card credentials
Systems that “could be easily bypassed” are already ruled non-compliant. Providers must even raise “challenge ages”, setting verification thresholds above 18, to compensate for estimation error. This is the first legal sunset of checkbox culture. And it's long overdue.
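To make the “challenge age” idea concrete, here is a minimal sketch in Python of how a provider might apply an estimation buffer. The threshold, error margin, and escalation step are illustrative assumptions, not figures Ofcom prescribes.

```python
# Minimal sketch of a "challenge age" gate for facial age estimation.
# All numbers and names are illustrative assumptions, not Ofcom guidance.

LEGAL_AGE = 18          # age the service must restrict on
ESTIMATION_MARGIN = 5   # assumed worst-case error of the estimation model, in years
CHALLENGE_AGE = LEGAL_AGE + ESTIMATION_MARGIN  # estimates below this trigger a challenge

def assess_access(estimated_age: float) -> str:
    """Decide the outcome of a facial age estimate.

    Returns one of:
      "allow"     - estimate is comfortably above the challenge age
      "challenge" - estimate falls inside the uncertainty band; escalate to a
                    stronger check (ID document, mobile operator data, card)
      "deny"      - estimate is clearly below the legal age
    """
    if estimated_age >= CHALLENGE_AGE:
        return "allow"
    if estimated_age >= LEGAL_AGE - ESTIMATION_MARGIN:
        return "challenge"
    return "deny"

# Example: a user estimated at 21 is still challenged, because 21 < 18 + 5.
# The buffer absorbs the model's margin of error instead of ignoring it.
print(assess_access(21))   # -> "challenge"
print(assess_access(25))   # -> "allow"
print(assess_access(12))   # -> "deny"
```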
The Platform Response Is Split
Some of the world’s biggest platforms are rolling out AI facial estimation, ID verification via partners like Yoti and Persona, email-based age prediction, and behavioural modelling. Meta, Pornhub, Reddit, and OnlyFans are moving fast. Others are hedging, some quietly geo-blocking UK users, others stalling for legal interpretation or hoping Ofcom’s audit pipeline takes months. But enforcement isn’t theoretical anymore. Ofcom’s window opened weeks ago. Assessments are underway. And the September review phase is expected to trigger the first wave of enforcement action. Those hoping to wait this out? They’re already out of time.
The narrative has largely fixated on pornography. But that’s a narrow reading of what this Act enables and enforces. From July 25, platforms must keep all harmful content away from minors, not just explicit material. That includes suicide encouragement, eating disorder glorification, dangerous viral challenges, cyberbullying, and access to age-inappropriate forums. This is not just a ban; it’s an expectation to prevent access, with age assurance to prove it.
Which means:
- If you run a platform with any user-generated content, and under-18s can find it, you are now on the hook. Failing to act isn’t a moral lapse; it’s a regulatory failure, with real economic teeth.
- Schools, youth-facing platforms, ad-funded communities, and mental health service providers must now review not just what they publish, but how age exposure is filtered, segmented, and reported. Age-appropriateness is no longer a curriculum issue; it’s a compliance one.
- With the rise of facial estimation and behavioural profiling, a new debate is brewing: how do we safeguard privacy in a world where platforms must know your age to serve you content? These systems create new risks: expanded attack surfaces, mission creep, biometric misuse, opaque AI profiling. Every layer of protection must be mirrored with data minimisation and auditability.
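As a concrete illustration of what data minimisation could look like here, below is a hypothetical sketch: the completed check is reduced to a boolean and a hashed audit reference, and the face image or document scan is never persisted. Field names, the provider reference, and retention choices are assumptions, not requirements from the Act or Ofcom’s codes.

```python
# Hypothetical sketch of a minimised age-assurance record:
# keep only what auditability needs, discard the biometric input.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgeAssuranceRecord:
    over_18: bool            # the only substantive fact retained
    method: str              # e.g. "facial_estimation", "id_document"
    checked_at: str          # timestamp for the audit trail
    evidence_digest: str     # one-way hash of the provider's check reference,
                             # so the check can be audited without storing raw data

def minimise(over_18: bool, method: str, provider_reference: str) -> AgeAssuranceRecord:
    """Reduce a completed age check to an auditable, minimised record.

    The raw face image or document scan is assumed to be processed and
    discarded upstream; only a digest of the provider's reference survives.
    """
    digest = hashlib.sha256(provider_reference.encode("utf-8")).hexdigest()
    return AgeAssuranceRecord(
        over_18=over_18,
        method=method,
        checked_at=datetime.now(timezone.utc).isoformat(),
        evidence_digest=digest,
    )

record = minimise(True, "facial_estimation", "provider-session-1234")  # illustrative reference
print(record.over_18, record.method, record.evidence_digest[:12])
```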
What comes next isn’t just technical adaptation. It’s a reshaping of how digital platforms are allowed to treat minors. Three pressure points are already forming:
1. AI Age Estimation Will Be Scrutinised
Ofcom has allowed facial and behavioural inference but flagged them as high risk. Platforms leaning on behavioural scoring should expect hard compliance reviews, especially if they can’t show empirical validation across age groups, genders, and ethnicities.
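One way to read “empirical validation across age groups, genders, and ethnicities” is as a per-subgroup error audit. The sketch below, assuming a labelled evaluation set, computes mean absolute error for each demographic slice so gaps between groups become visible; the record layout and the toy data are illustrative assumptions.

```python
# Illustrative per-group error audit for an age estimation model.
# The evaluation records and any acceptable-error threshold are assumptions.
from collections import defaultdict
from statistics import mean

def per_group_mae(records, group_key):
    """Mean absolute error of age estimates, broken down by a demographic key.

    records: iterable of dicts with "true_age", "estimated_age", and group_key.
    """
    errors = defaultdict(list)
    for r in records:
        errors[r[group_key]].append(abs(r["estimated_age"] - r["true_age"]))
    return {group: mean(errs) for group, errs in errors.items()}

evaluation = [  # toy labelled evaluation set, purely illustrative
    {"true_age": 16, "estimated_age": 19, "gender": "female", "age_band": "13-17"},
    {"true_age": 17, "estimated_age": 16, "gender": "male",   "age_band": "13-17"},
    {"true_age": 22, "estimated_age": 21, "gender": "female", "age_band": "18-24"},
    {"true_age": 23, "estimated_age": 27, "gender": "male",   "age_band": "18-24"},
]

for key in ("gender", "age_band"):
    print(key, per_group_mae(evaluation, key))
# A persistent gap in error between groups is exactly the kind of disparity
# a compliance review would expect a platform to explain and mitigate.
```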
2. “Protection” Risks Becoming Infantilisation
Platforms that default 13–17 year-olds into shallow “safe” zones (sanitised feeds, siloed search, limited messaging) may inadvertently erode youth agency. Protecting isn’t the same as excluding. The next debate won’t be “is it safe?” but “is it still meaningful?”
3. Privacy as a Systemic Imperative
The infrastructure being installed this month (facial data, document checks, usage tracking) will outlive this Act. How it’s stored, reused, and governed will shape trust in digital systems for a generation. Privacy cannot be a footnote. It must be a first principle.
For two decades, I have watched from the front lines as digital regulation lagged behind digital risk. Today, the UK stops lagging. Platforms now face a clear binary: design for integrity, or design for fines. And they’re being watched, not just by Ofcom, but by every regulator, investor, and parent in the world. The checkbox era is over. The compliance era has begun. Now, platforms must prove, not just promise, that their systems protect children before harm reaches them.