I’m back with another short film script plus a fun, ridiculous video. A couple of days ago I shared the results of ChatGPT generating a script depicting the Empire holding a Death Star postmortem. Since then, AI helped me produce an alternate-world postmortem where leadership was still evil, but had read my best practices article and tried to adopt its ideas. The script that came out is neat - if a little clunky. You can read it at the end of the article.
I enjoyed continuing to play with the video storyboarding tools and laughing at how the characters changed in style, age, and appearance after every cut transition. By the end of the process, though, it was annoying. Akin to trying to direct an incredibly gifted toddler who has eaten all the sugar and meth they can find and is also distracted by Sponge Bob and a live squirrel. Or so I’d imagine… That’s my long-winded way of saying there are more hallucination-type errors in this one than in the first.
I tried less hard this time to get things “correct”. I’d bought a block of generation credits, and when they ran out, so did my patience. I believe I selected a “Pixar” style, which somewhat explains why the look is akin to what I’d expect if Pixar had produced a video of the script below while also doing an insane amount of acid throughout the production.
Title: Death Star Post-Mortem: Lessons Finally Learned
Setting:
A sleeker, slightly warmer conference room aboard a newer Empire Star Destroyer. The Empire has clearly rebranded slightly—fewer skull-like logos, more muted greys. A banner at the back reads: “We learn so we don’t burn (again)”. There's a whiteboard with the words: "Psychological Safety + Metrics > The Force".
Characters:
Jira, Lead Engineer (still skeptical, but empowered and respected now)
Caden, Product Manager (sincerely curious, working on being more data-driven)
Vera, UX Designer (now valued and vocal)
Director Voss, New Project Sponsor (cold, calculating, still evil—but process focused)
Milo, Junior Engineer (newly confident, occasionally asks brilliant questions)
[SCENE OPENS]
Director Voss: (cool, measured tone)
Let’s begin. We lost a trillion-credit battle station, again. But unlike last time—we learn. This is a blameless postmortem. No throat-choking. No summary executions. Just data and learning. Jira, walk us through it.
Jira: (nods)
Thank you, Director. The root cause of failure was a missile launched into an exposed exhaust port—again. Despite warnings from Vera and others, we accepted the vulnerability due to timeline pressure and a misplaced belief that “no one would ever find it.”
Vera: (even tone)
I want to acknowledge—it’s refreshing to be asked back to this meeting. Last time, I was told my concerns were “soft design fluff.” This time, I documented my assumptions and shared possible failure scenarios—several of which now map almost exactly to what happened.
Director Voss:
Acknowledged. That’s why you’re on the review board now. Milo, I saw your alert flow prototype. It caught the anomaly, but it was ignored?
Milo: (a bit nervous)
Yes, sir. I simulated heat signatures suggesting unusual thermal mapping weeks before the attack. But I didn’t escalate because… well, I wasn’t sure if it was my place.
Caden: (softly)
And that’s on the culture we had. We’ve since instituted “Red Alert Fridays,” where any staffer can raise a concern directly. We’re tracking how often concerns are raised—and resolved—with a “Feeling Dumb Now > Being Dumb Later” metric.
Jira:
We’re also implementing pre-mortems. Each build sprint starts with the team imagining it’s six months post-failure. Then we ask, “What broke, and what could we have done to prevent it?”
Director Voss: (nods)
Excellent. Painful clarity is our new ally. Continue.
Vera:
We’ve also added graceful degradation protocols. If the port had been hit, internal blast shutters should have activated to contain damage. Those were disabled for maintenance and never re-enabled.
Caden:
And our change management process missed that entirely. So we’re deploying rollback automation—every config change now logs rollback instructions.
Director Voss:
What about monitoring?
Milo:
We’ve pivoted from hundreds of hyper-granular metrics to a few high-level mission indicators. For example, "Battle Station Hull Integrity %," and a new “Galaxy-Level Weapon Risk Index.” If either dips unexpectedly, a red alert auto-triggers.
Jira:
And we’ve adopted anomaly detection via AI. If it sees any event that previously correlated with catastrophic events—like rebel fleet movement patterns or Ewok activity—it alerts command directly.
Director Voss:
(coldly) Good. This is the path forward. Not mystical destiny. Not fear. But operational discipline. Any final thoughts?
Vera:
Just this—when we say it’s okay to speak up, that has to be demonstrated repeatedly, even when it’s uncomfortable.
Caden: (nodding)
And rewarding disagreement has already surfaced better ideas. I used to be afraid of raising hard questions. Now we do it before launch, not after explosions.
Director Voss:
Excellent. Jira—summarize the action items.
Jira:
Hardened all single points of failure with defense-in-depth.
Standardized pre- and post-mortem processes for all weapons platforms.
Rolled out automated rollback and recovery tooling.
Redefined KPIs to align with actual system resilience, not just aesthetics or timeliness.
Culture change mechanisms: Red Alert Fridays, “Feeling Dumb Now > Being Dumb Later” posters, and exec-led vulnerability sessions.
Director Voss:
Perfect. We will build again—but smarter. Fear made us rigid. Data will make us unstoppable.
Milo: (smiling faintly)
Should we maybe not call the next one “Death Star?” Feels like we’re asking for trouble.
Director Voss:
Noted. Marketing can workshop it.
Caden: (cheerfully)
What about “Live Sphere”? Or “Resilience Moon”?
Jira & Vera: (in unison)
No.
[Scene fades as the team begins preparing for the next project with actual documentation, cross-team trust, and a fresh round of psychological safety training.]
[END SCENE]