We're living in the Rube Goldberg era of AI.
[Watch the video of this post here.]
If you don't know Rube Goldberg: he was a cartoonist in the early 1900s who drew absurdly complicated machines that accomplished simple tasks. A bowling ball rolls down a chute, which tips a bucket, which pulls a string, which lights a match, which... eventually butters your toast.
The machines weren't meant to be efficient. That was the point.
Goldberg was satirizing his era's obsession with mechanization. Everyone was so drunk on "we CAN automate this" that nobody stopped to ask "should this take 47 steps?"
Sound familiar?
Right now I can:
Use one AI to research a topic
Pipe that into another AI to write a draft
Send it to a third AI to critique the draft
Have a fourth AI revise based on the critique
Run it through a fifth AI to check for tone
Then a sixth to format it for LinkedIn
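Just to make the absurdity concrete, here's a minimal sketch of that six-step chain. The stage functions are placeholders (not a real AI API), each just passing a string along to the next link in the machine:

```python
# Each function stands in for a separate "AI" in the chain.
# These are hypothetical stubs for illustration, not real model calls.

def research(topic):
    return f"notes on {topic}"

def draft(notes):
    return f"draft based on {notes}"

def critique(text):
    return f"critique of {text}"

def revise(text, feedback):
    return f"{text} (revised per {feedback})"

def check_tone(text):
    return text  # pretend the tone check passed

def format_for_linkedin(text):
    return text + "\n\n#AI #Productivity"

def rube_goldberg_post(topic):
    # The bowling ball rolls down the chute...
    notes = research(topic)
    d = draft(notes)
    feedback = critique(d)
    d = revise(d, feedback)
    d = check_tone(d)
    return format_for_linkedin(d)
```

Six moving parts, one social media post at the end of the chute.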
I’ll admit this is a little contrived, but we are spending more and more time creating “agents” to do things like this.
Often at the end of these magnificent contraptions... we have a social media post.
Which we could have just written in 30 seconds.
Don't get me wrong—I love this stuff. Building AI apps is my favorite puzzle to solve right now. I've got truly befuddling automations that would make Mr. Goldberg proud.
And watching a piece of software come to life is the very act of creation itself. It’s life! It is art!
But I've noticed something:
Many of the AI workflows I've seen accomplish what a clear-thinking person could do in a fraction of the time. We're so excited that we CAN chain these tools together that we forget to ask if we should.
Goldberg's machines were commentary disguised as invention.
I wonder if some of ours are too.
The next era won't be about building longer chains. It'll be about knowing when the chain is the point—and when you should just butter the damn toast yourself.
When *shouldn't* you use AI?
[Watch the video of this essay here.]
AI has made a generation of founders believe they should do everything themselves.
Design the pitch deck. Write the copy. Build the landing page. Edit the video. Why pay someone when Claude can help you figure it out?
Here's what that logic misses: some moments don't offer a learning curve.
You don't get to "iterate" on your Series A pitch while Sequoia waits patiently. You can't A/B test your product launch keynote in front of 2,000 people. Your first sales call with the dream client isn't a sandbox environment.
There's a difference between tasks where "good enough" works and moments where the gap between amateur and professional is everything.
I've watched founders spend weeks using AI to build a pitch deck, then walk into the room and forget that they still have to deliver it. Whoops! Or, while they were dinking around with AI, they forgot they still had a product to build. The slides were fine. The story was scattered. The nerves were obvious. The check didn't come.
I've spent 20 years in those rooms. I’ve helped a team win TechCrunch Disrupt. I’ve coached Harvard hackathon finalists hours before they took the stage to win. I’ve worked with founders who quickly raised tens of millions.
Not because I'm smarter than AI. But because I've already made the mistakes they're about to make, just not in front of the people who write checks.
And side note, yes, I am smarter than AI, obviously, as this picture will show.
Now, AI is an incredible tool for the 90% of work where repetition and refinement are possible, and I'll be showing you more of that shortly.
But for the 10% where you have to get it right the first time? You need someone who's felt that specific pressure. Who knows what actually matters when the lights come on.
The question isn't "can AI help me do this?"
It's "can I afford to learn this lesson in front of a live audience?"
Take it from one overconfident amateur to another:
Sometimes, one chance is all you get.
Leading in the age of AI
[Watch the video of this post here.]
After almost a decade of running Aloa® Agency, let me tell you the only management advice that actually works, and how it applies to leading in the age of AI.
In my opinion, every management book overcomplicates this. It's not that hard.
Dale Carnegie said it best in 1936: "The only way to influence people is to talk about what they want."
Adam Smith said it in the 1700s: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest."
It’s the foundation of modern economics.
Translation: As a leader, nobody cares about your goals. They care about theirs.
One of the best management stories I've ever heard? FedEx couldn't get packages sorted fast enough. Their workers were paid hourly. So they worked... as hourly workers do. *cough*
The fix wasn't motivation or threats. It wasn't culture. It wasn't KPIs or pizza parties. It was much simpler:
The big insight was to pay people by the shift instead of by the hour. Finish early? You go home early. Same pay.
Packages started flying, and their problem was solved.
The workers' goal was never "sort packages efficiently." It was "get home to my family." FedEx just aligned their system with what people actually wanted.
This principle applies to everything. Before any initiative, ask:
→ What does this person actually need? Hint: It’s often more time and money.
→ How do I make their goal and my goal the same thing?
Great leaders know it's not enough to only have negative consequences for missed targets. That's just a stick with no carrot.
Exceed your target by X? You get a percentage of X. You make more money, and the company does better. Now we're rowing in the same direction.
That's incentive alignment.
This is exactly why most companies have a hard time communicating AI initiatives to their teams.
They announce AI initiatives by talking about "efficiency gains" and "competitive advantage," but those are company goals.
Employees hear: "We're replacing you."
This leads to fear, resistance, and, worse, quiet sabotage.
But what if you said: "This tool will eliminate the 3 hours of data entry you hate. You'll finally have time to do the work you actually wanted to do when you took this job."
Same initiative. Completely different message and response.
The fear isn't about AI. It's about people not knowing what's in it for them.
We need to stop managing people by talking in terms of what we want.
We must take the time to learn what they want, and start building a bridge to that.
Are you *intelligently* lazy?
In 2025, I built over 10 functional AI applications… because I’m lazy.
I’d like to think I’m intelligently lazy.
There's a difference. (I hope.)
The unintelligently lazy person avoids work.
The intelligently lazy person builds systems so the tedious work handles itself—freeing up brain cells for the stuff that actually matters.
Those 10+ applications are highly specific to tedious pain points I've experienced throughout my career. Some automate content workflows. Some process data I'd never have time to touch. Some do things I genuinely didn't think were possible 12 months ago.
And here's the exciting part: I'm not a "real" programmer. I wrote calculator games in Z80 assembly language when I was 13, then took a 20-year detour through theater, electronic music, and marketing. Now AI lets me come back to building software.
So in 2026, I'm going to share my progression with you—what I built, why, what worked, what was a nightmare, and how you might apply similar thinking to your own work.
Because the gap between "superhuman productivity" and "drowning in busywork" isn't talent. It's communication, curiosity, and clarity of thought.
Interested in learning how AI can help you become intelligently lazy? Let's talk. Shoot me a message.
Happy New Year!
Communication is the #1 AI Skill
Communication and clarity of thought are the top two skills for the AI age.
Why?
Look at the history of business:
In theory, any business that makes money has access to the same talent pool.
All companies can hire the same developers, for the same salaries.
All leaders can find the same marketing firm, or the same designers.
So why is it that all things being equal, some leaders get vastly more out of the same resources?
Why do some leaders make exceptional products, and others make crap?
A high-quality team can execute any plan. But it’s the leaders who set that plan into motion. And if leaders don’t communicate their plan well (or if they simply don’t have one), their organization will be riddled with chaos, doubt, politics, and confusion.
We are in a world in which AI can make anything you can dream up.
But the catch is: As a leader, you still have to dream it up.
And even though AI can do anything, you still have to explain your vision to AI in a way that it can understand.
No matter how advanced these tools get, those with superior communication skills and clarity of thought will ALWAYS get more out of them.