No TV Month

Every year, for one full month, we turn off every screen in our house.

No Netflix. No YouTube. No Disney+. No more binging the Kardashians.

My dad started “No TV Month” when I was a kid. I thought he was cruel. Now I do it with my own daughter, and I completely get it.

Here’s what happens: In the first days, I can watch her go through the symptoms of withdrawal. Her feeling of boredom fills the house like a stench that can’t be escaped. Slowly but surely, she begins to fill her days with other activities. She even read a 215-page book in a single day; she couldn’t put it down.

Witnessing screen withdrawal is scary. But what fills the void is better.

Matt Stone — co-creator of South Park, one of the most successful TV shows in history (28 seasons and counting) — said: “I don’t watch any television. I got kids, I got work. I’m not a TV person. I never have been.”

The billionaire TV mogul who makes TV doesn’t watch TV. Let that sink in.

Steve Jobs didn’t let his kids use the iPad he invented. “We limit how much technology our kids use at home,” he told a stunned New York Times reporter. His biographer Walter Isaacson described dinners at the Jobs house: discussing books and history around the kitchen table. No one ever pulled out an iPad. The kids didn’t seem addicted to devices at all.

The guy who built the most addictive screen on earth kept his own kids away from it.

There’s a pattern here that most people miss:

The people who create the things we consume understand something fundamental: consumption is the default. Creation is the choice.

And the ratio matters, especially now that the news cycle seems desperate to hook every second of our finite attention and keep us in a perpetual state of terror.

I code, I write, I teach. And I shout random jabberings into a void on LinkedIn like a maniac in Central Park. And I can tell you from experience: the weeks I consume the most content are the weeks I create the least.

No TV Month isn’t about being anti-technology. It’s about remembering that screens are tools for making things, not just watching things.

When my daughter picks up a paint brush instead of a remote, she’s not “missing out.” She’s doing what the creators of the stuff she’d be watching are actually doing with their time.

Pick your month. Turn it off. See what happens.

You might be surprised what you build when you stop consuming.

We need to stop "resulting"

Buy Annie Duke’s book, Thinking in Bets.

The biggest problem in company AI roll-outs right now isn't hallucinations.

It's resulting.

Annie Duke (champion poker player turned author) has a name for the mistake most leaders are making with AI.

Resulting: judging a decision by its outcome instead of the process that made it.

So far I've watched resulting in AI play out in two ways:

One: An AI chatbot disappoints a customer. “See, I knew we were wrong to embrace AI!”

Two: A promising demo app becomes a new religion. “OMG, stop the presses: I’m replacing every employee with AI right now!”

Neither reflects the right way to think about the situation.

Duke's point: Life isn’t chess, it’s poker. In chess, there is a right answer. In poker and business, hidden information and luck mean a brilliant decision can blow up, or a terrible one can pay off.

In Never Split the Difference, former FBI hostage negotiator Chris Voss calls hidden information Black Swans: pieces of information that, once uncovered, completely reframe everything. Every AI deployment is full of them: edge cases the demo never hit, user behaviors no one modeled, and exciting possibilities that don’t reveal themselves for months.

Your AI strategy is one Black Swan away from being either a crisis or a breakthrough. Certainty, in today’s environment, isn't strength. It's legerdemain: sleight of hand that fools you as much as your audience.

Duke's suggestion is simple: separate the quality of the decision from the quality of the outcome. Before you commit to anything (including doing nothing), write the bet. What do you believe will happen? How confident are you, in an actual percentage? What would change your mind? Then run the pre-mortem: assume it fails. Why?
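For the coders in the room, those questions are concrete enough to sketch as a tiny decision journal. This is a minimal sketch, assuming nothing beyond the steps above; the field names and the example bet are mine, not Duke's:

```python
from dataclasses import dataclass, field

@dataclass
class Bet:
    """One decision-journal entry, written BEFORE acting."""
    decision: str               # what we're doing (or deliberately not doing)
    prediction: str             # what we believe will happen
    confidence: float           # an actual percentage, as 0.0-1.0
    would_change_mind: list    # evidence that would update the belief
    premortem: list = field(default_factory=list)  # "assume it failed; why?"

bet = Bet(
    decision="Roll out an AI support chatbot on 10% of tickets",
    prediction="Resolution time drops without hurting satisfaction",
    confidence=0.6,
    would_change_mind=["satisfaction falls sharply", "escalations double"],
    premortem=["edge cases the demo never hit",
               "user behaviors no one modeled"],
)

# Later, judge the PROCESS, not just the outcome: given what was
# knowable at the time, was 0.6 a well-calibrated number?
```

The point isn’t the code; it’s that writing the bet down before the outcome arrives is the only thing that makes calibration possible afterward.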

Charlie Munger would call this inversion. Voss would call it hunting for Black Swans. Duke would call it calibration.

Most companies seem to be skipping this process entirely. They're asking "did it work?" when they should be asking "were we right to believe it would?"

Nobody knows which AI bets will pay off. Anyone who says otherwise is selling something.

Are you aware of the bets in AI you’re making right now? And are you aware that not embracing this technology is, itself, a bet?

177,000 lines of code

Depending on who you ask, that's years of work.

That's how big my agency management software platform is now. I was able to combine 6-7 paid tools into one that, unlike the others, is perfectly suited to our exact workflow (with features no commercial software has).

That a single person with dedication can build an app this full-featured in such a short period of time is mind-boggling.

It's taken me three months of non-stop work to build. But without AI?

3 years to never!

What nagging problems have you accepted over the years that you could actually solve now?

Better bulls*** detectors

Great founders are getting harder to spot. And it's not entirely their fault. A few years ago, a 200-page business plan meant something. Not because length equals quality—but because creating something that comprehensive required either real mental capability or stealing someone else's work.

Now? ChatGPT can generate it in an afternoon.

This isn't a complaint about AI. It's an observation about signal degradation.

We've spent all of human history developing intuition for gauging talent through artifacts. Through pieces that we create. A sharp deck. An intuitive website. A polished pitch. But now, those artifacts actually tell us less about the person who submitted them.

And this is why oral exams are making a resurgence in academia at a time when ChatGPT can cheat anyone’s way through any test. It’s why in-person interviews still matter. It’s why investors still insist on meeting founders face-to-face. You can learn more about someone's actual thinking in a 10-minute conversation than in a 50-page document you can't verify they wrote.

After over 200 founder interviews on my podcast, I've watched this shift happen in real time. The digital materials keep getting better. The variance in actual human mental capability stays the same.

If we're not careful, we’re entering an era where we build houses of cards on top of houses of cards—investing in people based on artifacts that may not reflect their real capacity.

The fix isn't banning AI. It's recalibrating how we evaluate human work and investment potential.

Digital output is now table stakes. Conversation and humanity are the new signals.

And as digital creators, we must go out of our way to show our work in a way that can’t possibly be faked.

We shouldn't be called Homo sapiens

Subtitle: Why we won’t stop the rise of AI, even if it means we go extinct.

“Homo sapiens” means “wise man” (sapiens is Latin for “wise”). And let's be honest: have ya met us!?

We smoke. We doom-scroll at 2am. We build nuclear weapons and then just leave ‘em lying around. We build the world’s most powerful dopamine machines and then just set them loose on our children because they’re shiny and fun! We know exactly what we should do, and then we go and chase a technological squirrel.

Wisdom? Please.

But curiosity? That we cannot help. It's the one thing we do without fail, without exception, without being asked.

When a human is curious, absolutely nothing will stand in our way. It’s the one universal law in human behavior. And it’s why no matter how tired we are, we’ll always scroll to that NEXT social video, just in case that one is the one. You know what I’m talking about, admit it!

P.T. Barnum knew it all too well, and I would argue that the same forces that drove people to go see the world’s tallest bearded lady in his day are exactly the same forces behind more “serious” investments today than we’d care to admit. Simple curiosity. What will happen if we do this? What’ll happen if we put a bunch of money behind this? I don’t know! Let’s find out.

It’s the reason so many founders are happy to build world-changing tech, not particularly knowing how exactly their tech might actually affect that world, or as Sarah Wynn-Williams’ exposé Careless People points out in the case of Facebook, simply not caring.

Every invention. Every theorem. Every app. Every late-night rabbit hole all started with "I wonder if..."

Curiosity can lead to wisdom, but wisdom is only attained when we collectively agree that we’ve had enough. And I don’t see that happening any time soon, do you?

And here's what's interesting about AI: it can optimize. It can predict. It can generate. But it cannot wonder. It cannot stay up at night asking "but what if?" for no reason other than needing to know. OMG did you hear what Claude wore to the meeting yesterday?? Dish! Dish!

We barrel ahead with technology not because we're wise, but because we genuinely can't help ourselves. Human curiosity is why AI is inevitable, no other reason. We are all just dying to know what ChatGPT 6.0 can do! And curiosity isn't a feature of being human—it's the entire operating system.

Some days that terrifies me. Other days it fills me with hope.

Most days, a little bit of both.

It’s obvious that evolution is accelerating all around us, and that after hundreds of thousands of years of “not much” happening, we all get to witness a pivotal moment in the evolution of an entire species.

I don’t know what will become of humans in 20 or 200 years—no one does.

But if I were a gambling man? Some time down the road when our fossils are studied in museums of the future? I predict we won’t be called Homo sapiens, but Homo curiosus.

Hey Siri, Remind me in 200 years to look up what humans are called now.

Siri: Here are 10 recipes for baked potatoes.