AI is the most effective tool for gaslighting ever devised

You've heard of gaslighting: when someone makes you feel like you're the crazy one by insisting that something clearly wrong is actually normal.

The term comes from the film Gaslight, and the behavior it describes has become a well-known psychological phenomenon.

It’s the same concept behind “The Emperor’s New Clothes”: if everyone around you acts like the emperor is wearing clothes, even though you can see he’s naked, you start to question yourself.

When we post content online, we hope people will like it, share it, maybe even subscribe. We don’t love being criticized—especially for how we look or who we are.

But historically, even if a rando said something cruel, at least you knew it was a real person—a 12-year-old troll in his mother’s basement perhaps, but still human. It was part of the “town square” dynamic that platforms like Twitter (now X) were built on.

Now, that’s changing.

Soon, the majority of online discourse will come from AI bots.

And I don’t know if you’ve noticed, but on X—leaning more and more to one side of the political spectrum (thanks to its almighty CEO)—you’ll see something disturbing:

Anytime someone from “the other side” posts something, it’s met with a flood of nasty, hateful, vitriolic replies. This creates the illusion that the entire world hates that viewpoint.

I used to scroll through YouTube or Reddit comments to gauge public sentiment on hot topics. But increasingly, many of those replies—especially on X—aren’t written by real people.

And that has a real effect. We start to question ourselves. We wonder if we’re wrong, or if everyone else sees something we don’t.

If everyone else seems to think one thing and we think another, our brains start to say: “Maybe I’m wrong,” or “Maybe I’m just dumb.”

In a good-faith system, questioning our values is a wonderful thing that can lead to tremendous personal growth. But when our values are questioned and manipulated en masse by an algorithm, that’s gaslighting by the system, at scale.

It chips away at our confidence, our logic, and our reasoning.

And that’s what makes AI so dangerous. It makes intelligent, thoughtful people question their own sanity.

So here’s my advice: don’t take any of it at face value.
Don’t doubt your mental capacity—trust it. Assume that if someone disagrees with you online, they might not be real. In fact, that’s increasingly likely.

Turn off comments if you need to. Turn off likes.

But don’t turn off your own brain.

Don’t let an algorithm convince you that you have nothing of value to say.