I had a chance to play around with Sora 2 last week and the results are pretty crazy. Like, eerie, I shouldn’t have this kind of power, crazy. With just a small prompt I was able to see video highlights of the Cleveland Browns at the 2025 Super Bowl (the most improbable event imaginable), an epic Grand Canyon helicopter ride, and a corn dog blasting off into space. With the power to make videos of literally anything, creativity is unlimited, right?
If that’s true, that also means we’re headed into an era where you can’t believe everything you see. I studied Broadcasting in undergrad and wrote for the school paper, and it was drilled into me that above all else, you should always verify sources and validate leads before publishing a story. If something sounds too good to be true, it probably is.
It’s easy to take a video at face value and immediately react. There’s a whole side of YouTube dedicated to people reacting to videos. As more AI-generated content is created and anything can be made a reality with a few keystrokes, it’s going to be extremely easy to bend the truth. Even small things, like public figures having public freakouts, or your co-workers saying something outrageous, could be manufactured with a quick prompt. You are going to have to do the legwork to figure out if it’s reality or not. Snopes.com won’t be able to keep up (is Snopes still a thing?).
Same goes for management (and don’t get me started on B2B sales! /s). You will hear things about your team, their work, what’s happening when you’ve given them autonomy, and you have to determine which version of reality is accurate. Half of management is detective work: following leads and determining the version of reality you and your team will follow. It’s the same thing for data. One graph could mean any number of things, so use your critical thinking skills to determine why it’s right or wrong. It’s one of the reasons I love the 1954 classic “How to Lie With Statistics.”
This is also not intended to be doom-and-gloom, but more of a warning to think before you react. I made a mistake early in my career where I assumed I knew the full story, but definitely did not. Learn from my mistakes. And by all means, don’t let me rain on your AI-generated video parade.
Take a moment to gather the facts of the situation, validate the sources, determine the credibility of those sources, and make decisions from there. Retractions and apologies are tough to come back from, so spend a few extra thinking cycles on the believability of what you’ve seen and heard.
Anyway, here’s a video I prompted for Sora 2 of a totally real guy eating totally real onion ring glasses on a park bench. flavors-you-see.io coming soon.
🧭 Engineering Leadership & Management
Nothing Prepares You for Your First Director Role [Honeycomb]
Moving into a director role is like switching from sous chef to running the kitchen. This piece nails the messy reality of the shift from tactical execution to political influence. Bookmark it for your next promotion talk.
Leading with Emotional Intelligence in the Age of AI [CIO]
As automation grows, EQ becomes the scarcest leadership resource. This article makes a solid case for empathy as a competitive edge, not a soft skill.
Reverse Impostor Syndrome [Wes Kao Newsletter]
Instead of feeling unqualified, high performers often underestimate how capable they actually are. A sharp mindset shift that helps leaders stop playing small.
On Good Software Engineers [Can Dost Blog]
A grounded reflection on what makes great engineers stand out. Hint: it’s not raw speed, it’s ownership, clarity, and generosity.
Async Programming for Humans [Braintrust]
Not a coding tutorial, a management philosophy. Async thinking is a survival tool for distributed teams. This piece breaks down how to adopt it without slowing collaboration.
🤖 AI in Engineering Workflows
Is AI Good Enough to Lay Off Your Engineers [Section AI]
The question everyone’s whispering. Section AI runs the math on where LLMs actually deliver ROI and where human judgment still dominates. The answer is more nuanced than the headlines.
The AI Bubble Is About to Burst, But the Next Bubble Is Already Growing [Medium]
A sharp take on how hype cycles move faster than innovation. The next gold rush might not be AI itself, but the tooling around it.
AI Podcast Startup Plan Shows the Chaos in Audio Automation [Hollywood Reporter]
A wild glimpse at AI-generated podcasting. Think cloned voices, messy IP, and an industry still figuring out what “authentic” even means.
How to Think About AI Progress [Marginal Revolution]
Tyler Cowen at his best. A clear, skeptical lens on what counts as genuine AI progress versus what’s just packaging.
📊 Prioritization, Strategy & Metrics
The Wrong Way to Use DORA Metrics [The New Stack]
DORA metrics aren’t magic. Used poorly, they become performance theater. This post explains how to make them actionable and culturally safe.
The Purpose of Prototypes [SVPG]
Marty Cagan reminds product teams that prototypes are for learning, not validation. A tight, essential read for anyone shipping too slowly.
Determination [Paul Graham]
Graham revisits grit. Success often depends less on brilliance and more on stubborn progress. An evergreen piece that feels even more relevant in the AI era.
🔮 Broader Industry & Thought Leadership
The Era of the Business Idiot [Where’s Your Ed At]
A spicy critique of leaders who confuse buzzwords for strategy. Equal parts rant and insight. You’ll nod, laugh, and maybe wince a little.
Is Waymo Safe [The Atlantic]
A balanced investigation into the safety record and perception of autonomous vehicles. The debate around “safe enough” is just beginning.
How to Think About AI Progress (Archived Conversation with Yann LeCun) [Archive]
A preserved deep-dive on the philosophical and technical sides of AI development. Less hype, more history.
The Future of Agentic AI and Automation (Google Share Link)
A peek behind closed doors at how large enterprises are experimenting with internal AI systems. A little vague, but telling.