We shouldn't be called Homo sapiens

Subtitle: Why we won’t stop the rise of AI, even if it means we go extinct.

“Homo sapiens” means “wise man.” And let's be honest—have ya met us!?

We smoke. We doom-scroll at 2am. We build nuclear weapons and then just leave ‘em lying around. We build the world’s most powerful dopamine machines and then just set them loose on our children because they’re shiny and fun! We know exactly what we should do, and then we go and chase a technological squirrel.

Wisdom? Please.

But curiosity? That we cannot help. It's the one thing we do without fail, without exception, without being asked.

When a human is curious, absolutely nothing will stand in our way. It’s the one universal law in human behavior. And it’s why no matter how tired we are, we’ll always scroll to that NEXT social video, just in case that one is the one. You know what I’m talking about, admit it!

P.T. Barnum knew it all too well, and I would argue that the same forces that drove people to go see the world’s tallest bearded lady in his day are exactly the same forces behind more “serious” investments today than we’d care to admit. Simple curiosity. What will happen if we do this? What’ll happen if we put a bunch of money behind this? I don’t know! Let’s find out.

It’s the reason so many founders are happy to build world-changing tech, not particularly knowing how exactly their tech might actually affect that world, or as Sarah Wynn-Williams’ exposé Careless People points out in the case of Facebook, simply not caring.

Every invention. Every theorem. Every app. Every late-night rabbit hole. They all started with "I wonder if..."

Curiosity can lead to wisdom, but wisdom is only attained when we collectively agree that we’ve had enough. And I don’t see that happening any time soon, do you?

And here's what's interesting about AI: it can optimize. It can predict. It can generate. But it cannot wonder. It cannot stay up at night asking "but what if?" for no reason other than needing to know. OMG did you hear what Claude wore to the meeting yesterday?? Dish! Dish!

We barrel ahead with technology not because we're wise, but because we genuinely can't help ourselves. Human curiosity is why AI is inevitable, no other reason. We are all just dying to know what ChatGPT 6.0 can do! And curiosity isn't a feature of being human—it's the entire operating system.

Some days that terrifies me. Other days it fills me with hope.

Most days, a little bit of both.

It’s obvious that evolution is accelerating all around us, and that after hundreds of thousands of years of “not much” happening, we all get to witness a pivotal moment in the evolution of an entire species.

I don’t know what will become of humans in 20 or 200 years—no one does.

But if I were a gambling man? Some time down the road, when our fossils are studied in museums of the future? I predict we won’t be called Homo sapiens, but Homo curiosus.

Hey Siri, remind me in 200 years to look up what humans are called now.

Siri: Here are 10 recipes for baked potatoes.

Why AI Sucks at Branding

AI can make 10 great PDFs. They might all look like they came from 10 different companies, though.

AI is phenomenal at generating individual assets. A stunning social graphic. A polished presentation. A slick one-pager. Each one impressive in isolation.

But brands aren't built in isolation. They're built across websites, pitch decks, trade show booths, video content, packaging, investor materials—and they all must work together to say the same thing in the same voice with the same visual language.

That cohesion is the difference between "assets" and a "brand."

After 9 years running a branding agency, I can tell you what consistency actually requires: a world-class designer, an all-star video person, someone who thinks in 3D, engineers who understand systems, and one doofus in a jacket who stays up in a panic every night wondering if everyone will get paid that month.

AI is an incredible tool for each of the all-star people on my team to use. It's not a replacement for any of them or the orchestration between them.

The companies that look "grown up" aren't the ones with the best individual assets. They're the ones where every touchpoint feels like it came from the same brain.

Today, AI can help you move faster. It just can't yet help you move together.

We're living in the Rube Goldberg era of AI.

[Watch the video of this post here.]

If you don't know Rube Goldberg: he was a cartoonist in the early 1900s who drew absurdly complicated machines that accomplished simple tasks. A bowling ball rolls down a chute, which tips a bucket, which pulls a string, which lights a match, which... eventually butters your toast.

The machines weren't meant to be efficient. That was the point.

Goldberg was satirizing his era's obsession with mechanization. Everyone was so drunk on "we CAN automate this" that nobody stopped to ask "should this take 47 steps?"

Sound familiar?

Right now I can:

  • Use one AI to research a topic

  • Pipe that into another AI to write a draft

  • Send it to a third AI to critique the draft

  • Have a fourth AI revise based on the critique

  • Run it through a fifth AI to check for tone

  • Then a sixth to format it for LinkedIn

I’ll admit this is a little contrived, but we are spending more and more time creating “agents” to do things like this.
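For the curious, that six-step chain can be sketched in a few lines of Python. Everything here is made up for illustration: `ask` is a stand-in for whatever model API you'd actually call, and the "model" names are pure fiction.

```python
def ask(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a chat-model API call.

    In a real workflow this would hit an LLM endpoint; here it just
    tags the prompt so we can see which step touched it.
    """
    return f"[{model}] {prompt}"


def rube_goldberg_post(topic: str) -> str:
    """Six AIs, one social media post."""
    research = ask("researcher", f"Research: {topic}")
    draft = ask("writer", f"Draft a post from: {research}")
    critique = ask("critic", f"Critique: {draft}")
    revised = ask("writer-2", f"Revise {draft} using {critique}")
    toned = ask("tone-checker", f"Check tone: {revised}")
    return ask("formatter", f"Format for LinkedIn: {toned}")


def just_write_it(topic: str) -> str:
    """The 30-second alternative."""
    return f"My take on {topic}."
```

Six function calls to produce what the last one-liner produces on its own. That's the machine.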

Often at the end of these magnificent contraptions... we have a social media post.

Which we could have just written in 30 seconds.

Don't get me wrong—I love this stuff. Building AI apps is my favorite puzzle to solve right now. I've got truly befuddling automations that would make Mr. Goldberg proud.

And watching a piece of software come to life is the very act of creation itself. It’s life! It is art!

But I've noticed something:

Many of the AI workflows I’ve seen accomplish what a clear-thinking person could do in a fraction of the time. We're so excited that we CAN chain these tools together that we forget to ask if we should.

Goldberg's machines were commentary disguised as invention.

I wonder if some of ours are too.

The next era won't be about building longer chains. It'll be about knowing when the chain is the point—and when you should just butter the damn toast yourself.

When *shouldn't* you use AI?

[Watch the video of this essay here.]

AI has made a generation of founders believe they should do everything themselves.

Design the pitch deck. Write the copy. Build the landing page. Edit the video. Why pay someone when Claude can help you figure it out?

Here's what that logic misses: some moments don't offer a learning curve.

You don't get to "iterate" on your Series A pitch while Sequoia waits patiently. You can't A/B test your product launch keynote in front of 2,000 people. Your first sales call with the dream client isn't a sandbox environment.

There's a difference between tasks where "good enough" works and moments where the gap between amateur and professional is everything.

I've watched founders spend weeks using AI to build a pitch deck, then walk into the room and forget that they still have to deliver it. Whoops! Or, while they were dinking around with AI, they forgot they still had a product to build. The slides were fine. The story was scattered. The nerves were obvious. The check didn't come.

I've spent 20 years in those rooms. I’ve helped a team win TechCrunch Disrupt. I’ve coached Harvard hackathon finalists hours before they took the stage to win. I’ve worked with founders who quickly raised tens of millions.

Not because I'm smarter than AI. But because I've already made the mistakes they're about to make - just not in front of the people who write checks.

And side note, yes, I am smarter than AI, obviously, as this picture will show.

Now, AI is an incredible tool for the 90% of work where repetition and refinement are possible, as I’ll show you shortly.

But for the 10% where you have to get it right the first time? You need someone who's felt that specific pressure. Who knows what actually matters when the lights come on.

The question isn't "can AI help me do this?"

It's "can I afford to learn this lesson in front of a live audience?"

Take it from one overconfident amateur to another:

Sometimes, one chance is all you get.

Leading in the age of AI

[Watch the video of this post here].

After almost a decade of running Aloa® Agency, let me tell you the only management advice that actually works, and how that applies to being a leader in the age of AI.

In my opinion, every management book really overcomplicates this, but it’s not that hard.

Dale Carnegie said it best in 1936: "The only way to influence people is to talk about what they want."

Adam Smith said it in the 1700s: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest."

It’s the foundation of modern economics.

Translation: As a leader, nobody cares about your goals. They care about theirs.

One of the best management stories I've ever heard? FedEx couldn't get packages sorted fast enough. Their workers were paid hourly. So they worked... as hourly workers do. *cough*

The fix wasn't motivation or threats. It wasn't culture. It wasn't KPIs or pizza parties. It was much simpler:

The big insight was to pay people by the shift instead of by the hour. Finish early? You go home early. Same pay.

Packages started flying, and their problem was solved.

The workers' goal was never "sort packages efficiently." It was "get home to my family." FedEx just aligned their system with what people actually wanted.

In practice, this means asking two questions:

→ What does this person actually need? Hint: It’s often more time and money.

→ How do I make their goal and my goal the same thing?

Great leaders know it's not enough to only have negative consequences for missed targets. That's just a stick with no carrot.

Exceed your target by X? You get a percentage of X. You make more money. You improve. Now we're rowing in the same direction.

That's incentive alignment.

This is exactly why most companies have a hard time communicating AI to their teams.

They announce AI initiatives by talking about "efficiency gains" and "competitive advantage," but those are company goals.

Employees hear: "We're replacing you."

This leads to fear, resistance, and, worse, quiet sabotage.

But what if you said: "This tool will eliminate the 3 hours of data entry you hate. You'll finally have time to do the work you actually wanted to do when you took this job."

Same initiative. Completely different message and response.

The fear isn't about AI. It's about people not knowing what's in it for them.

We need to stop managing people by talking in terms of what we want.

We must take the time to learn what they want, and start building a bridge to that.