Perspective · Part 01 of 07

Why Self-Regulation Fails (And Always Has)

Companies don't change because they want to — they change when the cost of not changing outweighs the cost of changing. The case against hoping tech fixes itself.


There's an argument I keep hearing whenever people talk about AI and social media, especially when kids are involved.

The idea is basically that companies should just make better choices. That if we give them time, or pressure them socially, or trust that the people building these tools care enough, things will sort themselves out.

And I get why people want that to be true. It’s a much easier solution than regulation. It feels less heavy, less invasive, more optimistic.

But when you actually look at how these companies operate, it just doesn’t hold up.

Companies like OpenAI or Meta aren't public services. They're not built to sit there and ask, "What's the healthiest outcome for society over the next 20 years?" They're built to grow.

That means more users, more engagement, more data, more revenue. Not because anyone sat in a room and decided to harm people, but because that’s literally what the system rewards.

So when we say they should self-regulate, what we’re really asking is for them to voluntarily slow their own growth. And that’s where the whole thing starts to break down for me.

There’s this version of the conversation where people point to ethics. Better design. More responsible AI. And again, I don’t think those things are fake. I’m sure there are people inside these companies who genuinely care about building something good.

But caring doesn’t change incentives.

If a feature keeps people on the platform longer, it’s going to get prioritized. If something increases engagement, even if it has some negative side effects, it’s very hard for that not to win internally. Not because anyone is evil, but because the numbers will always justify it.

We’ve already seen this play out over and over again. Social media feeds didn’t become addictive by accident. Recommendation systems didn’t drift into extreme content randomly. Email marketing didn’t magically become respectful of your inbox one day.

It changed when it had to.

A really simple example is email. Before Canada's Anti-Spam Legislation (CASL), companies sent whatever they wanted. No unsubscribe, no consent, nothing. It worked, so they kept doing it. It only stopped when there were actual consequences tied to it. Same with the General Data Protection Regulation (GDPR). Companies didn't suddenly decide to respect user data out of the goodness of their hearts. They changed because they would lose access to entire markets if they didn't.

That’s the pattern. Not intention, but pressure.

And I think this becomes way more serious when we’re talking about kids, because the “just make better choices” argument doesn’t apply the same way.

Kids don’t understand what’s happening under the hood. They don’t understand how their data is being used, or how algorithms are shaping what they see, or why certain content keeps showing up. They’re just interacting with something that feels fun or interesting or validating in the moment.

At the same time, the barrier to entry on most of these platforms is basically nothing. You can sign up for free, say you’re older than you are, and you’re in. And that free model is actually a huge part of the problem, because it’s not really free. You’re paying with your attention, your behavior, your data. That’s the whole engine.

So now you’ve got a system where it’s incredibly easy for kids to get in, incredibly profitable for companies to keep them there, and almost no real enforcement around whether they should be there in the first place.

And we’re still expecting self-regulation to solve that.

I don’t think the issue is that companies don’t care at all. I think it’s that the system they’re operating in doesn’t reward them for caring in the ways that actually matter here.

If growth is the goal, growth will win. If engagement is the metric, engagement will win. If collecting more data improves the business, that’s what the product will move toward. Every time.

Until something outside the company forces a different outcome.

That’s the part I keep coming back to. The only time we’ve really seen meaningful shifts is when there’s something at stake that the company can’t ignore.

Not a suggestion. Not public pressure that fades after a news cycle. Something that directly affects whether they can operate, or how much money they can make.

That could look like losing access to a market entirely. It could be restrictions on advertising. It could be fines that actually scale with revenue instead of being treated like a cost of doing business. It could be ongoing compliance requirements that make it more expensive not to follow the rules than to just build the system properly in the first place.

Once you hit that level, the conversation changes internally. It’s not “should we do this,” it’s “we have to.”

And that’s really the core of it. Self-regulation is optional. Accountability isn’t.

From a parent's perspective, this matters because we're not just handing our kids devices or apps. We're putting them into systems that are very, very good at holding attention and shaping behavior, and those systems are not neutral.

I’m not saying the solution is simple or that regulation fixes everything. It doesn’t. There are tradeoffs, and some of them are uncomfortable.

But hoping that companies will just choose to limit themselves in ways that hurt their own growth, especially when it comes to kids, doesn’t feel like a real strategy.

It feels like something we say because the alternative is harder to deal with.
