Regulating AI & Social Media for Real-World Safety
A practical look at what it would actually take to regulate AI and social platforms for kids’ real-world safety—drawing on lessons from other industries and naming the tradeoffs.
Every piece in the series:
- Part 1 · Perspective
Why Self-Regulation Fails (And Always Has)
Companies don't change because they want to — they change when the cost of not changing outweighs the cost of changing. The case against hoping tech fixes itself.
- Part 2 · Perspective
What GDPR & Canadian Anti-Spam Laws Actually Teach Us
The "regulation won't work because the internet is global" argument sounds convincing until you look at what GDPR and CASL actually did.
- Part 3 · Perspective
The Only Thing That Works: Hitting the Bottom Line
If you want companies like Meta or OpenAI to change their behavior around kids, the conversation has to reach revenue. Everything else is negotiable.
- Part 4 · Parent Guide
Age-Gating That Actually Works (Not Just a Checkbox)
Typing in a fake birthday isn't age verification — it's just a question. Here's what real, working age-gating would need to look like, and what parents can do right now.
- Part 5 · Perspective · May 7, 2026
Should Free Access Exist for Kids?
Free access removes every barrier — including the ones that are supposed to keep younger users out. That's not accidental. It's part of how these platforms grow.
- Part 6 · Perspective · May 7, 2026
AI and Privacy vs Safety — Where Should the Line Be?
Privacy isn't absolute in any other context. The question isn't whether to protect it — it's where the threshold should be when behavior starts signaling real risk.
- Part 7 · Perspective · May 7, 2026
The Mental Impact No One Is Regulating Yet
The most measurable harms get regulated first. But the slow accumulation of distorted thinking, constant comparison, and shallow engagement is harder to name — and probably matters more.