Your project just failed. Here's what happened.

Natalie Lambert
Founder, GenEdge
March 17, 2026
5 min read

You've planned everything. The timeline looks solid. The stakeholders are aligned. You're ready to launch.

Six months later, the project has tanked. Not a small miss — a visible, embarrassing failure. The kind people reference in post-mortems for years.

Here's the uncomfortable truth: most of the warning signs were already there on Day 1. You just couldn't see them because you were too close, too optimistic, or too focused on making the plan work.

Today, we're flipping the script. Instead of asking "what could go wrong?" (a question that generates vague, generic answers), we're going to have AI live in the failure and write about it like it already happened.

Why this matters

Traditional risk assessments are sterile. You brainstorm a list of bullets — "timeline slippage," "stakeholder misalignment," "budget constraints" — and then move on. The items are too abstract to trigger real action.

Pre-mortems work better because they force you to imagine specific failure. But most people don't have the emotional distance to truly imagine their own project collapsing. We protect our ideas. We rationalize. We skip the uncomfortable parts.

AI doesn't have that problem. It will cheerfully describe exactly how your initiative went sideways — the meeting where the executive lost confidence, the metric that cratered, the decision point where everything unraveled. And because you're reading someone else's description of the failure (even though it's about your work), it hits differently than your own list of "risks."

Use case spotlight: Inversion as insight

There's a technique called "inversion" that Charlie Munger made famous: instead of asking how to succeed, ask what would guarantee failure — and then avoid doing those things.

The same logic applies here. When you ask AI to write a narrative of your project's failure, it has to invent causes — plausible, specific reasons why things broke down. Those causes surface assumptions you didn't realize you were making, dependencies you hadn't tracked, and stakeholder dynamics you hadn't thought through.

The narrative format matters too. A story about "the meeting where Sarah realized the pilot data was meaningless" is stickier than a bullet point saying "Ensure data validity."

Your AI experiment: Try this prompt

Time to tinker: Think of a project, campaign, initiative, or major decision you're about to launch. Gather the basics: what's the goal, who's involved, what's the timeline, and what does success look like.

Then copy and paste the following into your AI tool of choice.

The prompt:

"I'm about to launch [describe your project, campaign, initiative, or decision — include the goal, timeline, key stakeholders, and what success looks like].

Do the following:

  1. Write a detailed, narrative post-mortem dated 6 months from now in which this project has failed — not a minor stumble, but a genuine, visible, embarrassing failure. Write it as if you're a frustrated team member explaining to leadership exactly what went wrong. Be specific: name the moments where things broke down, the decisions that backfired, the warning signs that got ignored, the stakeholder who lost confidence, the metric that cratered. Make it plausible and painful — not generic risks.
  2. Step out of the narrative. Based on that post-mortem, identify the 3 failure modes that are most likely to actually happen AND most preventable right now, before launch. For each one, explain in one sentence why it's fragile.
  3. For each of those 3 failure modes, propose one specific, concrete action I could take in the next 2 weeks that would meaningfully reduce the risk. Not vague advice like "communicate more" or "align stakeholders" — give me something I could put on a calendar or delegate today."

Pro tip

Run the same prompt on the same project with different AI tools (Claude, ChatGPT, Gemini). Each model has slightly different blind spots and will surface different failure modes. The overlap is where you should pay closest attention — those are the risks multiple "brains" flagged independently.
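
If you'd rather script that comparison than paste the prompt into three chat windows, here's a minimal sketch using the vendors' Python SDKs. The model names, the PROJECT_BRIEF placeholder, and the helper function names are assumptions — swap in whichever models and project details you're actually using, and set the usual API keys as environment variables.

```python
# Minimal sketch: send the same pre-mortem prompt to three models and print each answer.
# Assumes the openai, anthropic, and google-generativeai packages are installed and that
# OPENAI_API_KEY, ANTHROPIC_API_KEY, and GOOGLE_API_KEY are set in the environment.
import os

from openai import OpenAI
import anthropic
import google.generativeai as genai

PROJECT_BRIEF = "…"  # fill in your goal, timeline, key stakeholders, and success criteria
PROMPT = f"I'm about to launch {PROJECT_BRIEF}. Do the following: ..."  # paste the full prompt above


def ask_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


def ask_gemini(prompt: str) -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model name
    return model.generate_content(prompt).text


if __name__ == "__main__":
    for name, ask in [("OpenAI", ask_openai), ("Claude", ask_claude), ("Gemini", ask_gemini)]:
        print(f"\n===== {name} =====\n{ask(PROMPT)}")
```

Reading the three post-mortems side by side makes the overlap easy to spot: the failure modes that show up in more than one output are the ones to act on first.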

What did you discover?

Did the AI describe a failure that felt uncomfortably plausible? Did it flag a stakeholder dynamic or a fragile assumption you hadn't thought about? The goal is to find those "oh no, that could actually happen" moments — before they actually happen.