Why Big Platforms, Big Charities, and Broken Systems Keep Failing Our Kids

Kirra Pendergast presenting on online safety
Kirra Pendergast
July 19, 2025
3 min

There’s a sentence buried in Roblox’s latest press briefing that sums up everything wrong with the current state of child safety online: “There’s no silver bullet.”

They say it to excuse the limits of their new “Trusted Connections” tool, which uses AI to scan video selfies of teens for age verification, allowing them to unlock unfiltered chat if they pass. But that line, “no silver bullet”, is repeated so often in these circles it might as well be a motto. It’s not a reflection of realism. It’s an escape hatch used by companies, regulators, and their trust-and-safety consultants to avoid one thing: accountability.

According to the WIRED article (which I was interviewed for), Roblox’s new AI tools are being marketed as a leap forward in youth safety, with video-based age estimation determining access to more mature chat features. But here’s the catch: it’s still opt-in. It still shifts the burden onto children and families. And for millions of kids there’s a deeper problem: they don’t have ID. So if the AI doesn’t work and your kid can’t verify themselves? They’re locked out. Or worse, still in the system but unverified, underprotected, and exposed.

But here’s what’s more disturbing than the flawed tech: the silence from the organisations that should be calling it out.

Big tech platforms fund nonprofits, research labs, and “child safety alliances” with the left hand, and with the right they invite them onto advisory boards. These boards are then used as PR shields. They’re named in statements, shown in reports, and rolled out during crises to say, “We consulted experts.” But what does that consultation really look like? It’s a trust economy. And trust, in this world, is for sale.

You don’t get invited onto a major trust and safety board by speaking too loudly. You get there by playing nice. By offering “constructive critique” that doesn’t damage the brand. By staying quiet when funding is at stake.

So when Roblox launches a safety tool that lets 13-year-olds verify their age with a selfie and grants unfiltered chat once they pass, where are the loud, public objections from the people sitting at the safety table? Where is the collective statement from leading nonprofits saying: this isn’t good enough?

You won’t hear it. Because it’s hard to bite the hand that signs your grant agreement.

The Bias We Don't Talk About

This is what bias looks like in the child safety world: not a lack of care, but a conflict of interest. Not shouting falsehoods, but staying silent on uncomfortable truths. You can’t call them out if they fund your program. You can’t critique Meta’s teen tools if you’re in a pilot with their team. You can’t condemn TikTok’s grooming blind spots if your board just got a cheque for research partnerships.

And yet these are the very companies whose platforms are being used today for grooming, exploitation, and harm. Roblox has already faced lawsuits and criminal investigations linked to child sexual abuse material and predator activity. And still, they claim to be “setting the standard” for youth safety.

They say things like: “We’re leading with AI.” Or: “We trust families to make the right choices.” Translation? We’re offloading the responsibility onto children and parents, and then calling it empowerment.

You don’t get to stay quiet on a press release and call yourself a child advocate. This isn’t an attack on the people doing good work inside those systems. This is a call to reckon with the structures that keep them quiet.

Because right now, a parent reading the WIRED article might assume that Roblox’s latest tools have been rubber-stamped by experts. They might believe that video-based age AI and QR code scans are real protection. They might not know that in-person verification is still being framed as “safe” even though real-world predators use trust-based grooming tactics every day.

They won’t hear that because the people best positioned to say it are conflicted out of doing so.

If we’re going to have a real safety ecosystem, then we need two things:

  1. Mandatory public disclosure of funding relationships between tech platforms and safety advisory groups.
  2. Independence, not just in name but in structure.

If a safety body or academic team can’t criticise its funder, it’s a brand extension.

Stop Calling This Leadership

What Roblox has built is optics wrapped in good intentions, supported by passive experts, and legitimised by silence. And if the people at the table won’t call that out, then the rest of us have to.

While they talk about building trust, kids are still being exposed, parents are still being blamed, and predators are still one selfie away from the next unlocked feature.

So yes, let’s be blunt with everyone who takes grants or payment to help create whatever is branded this week as “education” or “resources”:

How can you take money from the people hurting kids with one hand, and call yourself a safety expert with the other?

You can’t. And it’s time more people said so.
