Manthan Intelligence’s knowledge graph contains 220 startup postmortems — deaths and wind-downs from 2022 through early 2026. AI-first startups represent the largest single cohort of recent failures. Of those deaths, competition killed roughly a third. The remaining two-thirds died of problems specific to AI-first business models: model commoditisation, wrapper traps, demo-to-product gaps, single-agent failure modes, and cost spirals that no revenue could outrun. (The pattern shares below sum to more than 100% because a single death can exhibit more than one pattern.)
These patterns repeat. They’re preventable.
The 10 Patterns
1. Competition Crush (~34% of AI deaths): A well-funded competitor ships and captures 60%+ market share within 18 months. The original founder’s edge — say, custom LLM fine-tuning — becomes table stakes for funded incumbents. By month 12, the edge is gone. Three AI email productivity tools from 2024 are now walking wounded. Gmail’s AI compose feature costs $0. That’s the endgame for feature-layer AI companies.
2. Model Commoditisation (~15%): Founder built on GPT-3.5, raised $2M, planned a two-year runway. By month 6, GPT-4 shipped and half the differentiation vanished. By month 12, Claude and Grok were in the wild. By month 18, foundation model providers moved upstack and built the exact product the founder was building. Pattern: any product whose core output quality depends on model capability is a contractor business, not a venture outcome, unless the founder owns a data moat above the model layer.
3. Wrapper Trap (~12%): The product is a thin wrapper over an OpenAI API. $50K/month revenue, 80% gross margin. Looks profitable. The real dynamic: OpenAI is using the founder as a distribution channel. When OpenAI ships that feature natively (estimated timeline: 6–18 months for any widely-used AI feature), revenue goes to zero. Multiple AI code review tools and AI customer support tools died this way when OpenAI and Google shipped native alternatives.
4. Demo-to-Product Gap (~18%): The demo (45 seconds, controlled inputs, cherry-picked output) works flawlessly. The product (real user inputs, edge cases, latency, inference cost) fails. Founders spent 18 months on the demo and 3 months on the product. AI products fail silently — no error message, just a wrong answer. That failure mode demands roughly 10x more testing than traditional software, and founders without ML backgrounds underestimate the testing burden by 5x.
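A silent failure returns a well-formed but wrong answer, so exception handling catches nothing; the only defense is output-level checks against a curated golden set. A minimal sketch of that idea — every name and the toy model here are illustrative, not a real evaluation framework:

```python
def eval_on_golden_set(model_fn, golden_set, check):
    """Run the model over curated (input, expected) pairs and return
    the silent-failure rate: calls that return cleanly but fail `check`."""
    silent_failures = 0
    for prompt, expected in golden_set:
        output = model_fn(prompt)          # no exception raised...
        if not check(output, expected):    # ...but the answer is wrong
            silent_failures += 1
    return silent_failures / len(golden_set)

# Toy stand-in model that is confidently wrong on one input
fake_model = {"2+2": "4", "capital of France": "Lyon"}.get
golden = [("2+2", "4"), ("capital of France", "Paris")]
rate = eval_on_golden_set(fake_model, golden, lambda out, exp: out == exp)
print(rate)  # 0.5 — half the golden set fails with no error raised
```

The point of the sketch: a traditional test suite asserts "no crash"; an AI product's test suite has to assert "the answer is right," which is why the testing burden multiplies.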
5. Single-Agent Failure (~8%): Product depends on one well-calibrated agent. Initial product-market fit looks real. But agents degrade: context rot, prompt drift, distribution shift from new users. Month 3: the agent succeeds 90% of the time. Month 9: 70%. Month 18: 40%. No obvious reason why — churn accelerates without an identifiable cause. One medical triage chatbot worked well on its initial cohort but failed on symptom distributions outside the training data. One agent, one task, one failure point.
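The degradation curve above (90% to 70% to 40%) is detectable long before churn accelerates, with nothing fancier than a rolling success-rate monitor compared against a launch baseline. A minimal sketch, with illustrative thresholds:

```python
from collections import deque

class AgentDriftMonitor:
    """Tracks a rolling success rate and flags degradation against a
    fixed launch baseline. All thresholds here are illustrative."""

    def __init__(self, baseline=0.90, window=500, alert_drop=0.10):
        self.baseline = baseline            # success rate at launch
        self.window = deque(maxlen=window)  # recent task outcomes
        self.alert_drop = alert_drop        # tolerated absolute drop

    def record(self, success: bool) -> None:
        self.window.append(1 if success else 0)

    @property
    def rate(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 1.0

    def degraded(self) -> bool:
        # Require a reasonably full window before alerting, to avoid noise
        return len(self.window) >= 100 and (self.baseline - self.rate) > self.alert_drop

monitor = AgentDriftMonitor(baseline=0.90)
for outcome in [True] * 70 + [False] * 30:  # a stretch at 70% success
    monitor.record(outcome)
print(monitor.degraded())  # True: 0.90 - 0.70 exceeds the 0.10 tolerance
```

The hard part in practice is defining "success" per task (which is the golden-set problem from the demo-to-product pattern), but the monitoring loop itself is this simple — and most of the dead single-agent companies never built one.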
6. Data Moat Illusion (~8%): Founder claims: “we own unique data.” What they mean: “we accumulate user interaction logs.” That’s not a moat — that’s telemetry. A structural data moat requires proprietary data that competitors can’t replicate: exclusive partnerships, regulatory license, a decade of human-expert annotations. Logs alone aren’t defensible. When a funded competitor ships, they solve the same problem with commodity data. The illusion collapses.
7. GPU Cost Spiral (~12%): Product ships. Adoption is strong. But inference cost per request ($0.50–$2.00) scales faster than revenue. Founders assumed GPU prices would drop fast enough. They didn’t. Two multimodal AI agents from 2024 saw GPT-4V pricing remain flat while they scaled. The math: $2 revenue per request, $1.50 inference cost = 25% gross margin. Add support, infrastructure, and sales, and the net margin lands around -75%. Dead in 18 months.
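The arithmetic in that pattern can be made explicit. A quick sketch using the stated per-request figures; the $2.00 non-inference cost is an assumption chosen to reproduce the -75% endpoint, not a sourced number:

```python
def margins(revenue, inference_cost, other_costs):
    """Per-request gross and net margin as fractions of revenue."""
    gross = (revenue - inference_cost) / revenue
    net = (revenue - inference_cost - other_costs) / revenue
    return gross, net

# $2 revenue, $1.50 inference; assume $2 more in support/infra/sales
gross, net = margins(revenue=2.00, inference_cost=1.50, other_costs=2.00)
print(f"gross margin: {gross:.0%}")  # gross margin: 25%
print(f"net margin:   {net:.0%}")    # net margin:   -75%
```

The lesson in the numbers: when variable inference cost eats 75% of revenue, no amount of scale fixes the business unless the per-request cost itself falls.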
8. Talent Poaching (~5%): Solo engineer plus AI product person. Product gets traction. Google recruits both with $2M+ options. Startup equity becomes worthless. AI startups can’t outbid Google, OpenAI, or Anthropic for scarce ML talent. Small teams are uniquely vulnerable because value is concentrated in 1–2 people.
9. Regulatory Surprise (~3%): Product operates in healthcare, finance, or an EU jurisdiction. Regulations ship faster than expected. Compliance cost: $500K–$2M. The runway wasn’t budgeted for legal. Wind-down. Pattern: any AI product with a regulatory surface needs separate legal runway. Most founders don’t build it in.
10. Customer Concentration (~8%): First customer represents 60% of revenue. That customer gets acquired by a larger company with internal AI capability. Contract cancels. Revenue drops 60% overnight. AI products targeting enterprise are particularly vulnerable to this — the same companies buying AI tools are also building internal AI. Diversify to 5–7 anchor customers before assuming revenue stability.
What To Do With This
For founders: these 10 patterns are preventable if caught early. Before shipping, ask five questions. (1) Am I building on a data moat or a model moat? (2) What happens when my foundation model provider ships my feature natively? (3) Do I have calibration loops that detect when my agent degrades? (4) Is my core team irreplaceable, or poachable? (5) What’s my gross margin at 10x current scale? If #5 is below 50%, you’re in a cost spiral. If #1 is “model,” you’re in a wrapper trap.
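Question #5 can be answered on the back of an envelope. A sketch of the projection, assuming per-request inference cost declines by some fraction for every 10x of volume — both the decay model and the parameter values are illustrative assumptions, not market data:

```python
import math

def gross_margin_at_scale(revenue_per_req, inference_cost_per_req,
                          scale_factor, cost_decline_per_10x=0.0):
    """Project gross margin at `scale_factor` times current volume,
    assuming per-request inference cost falls by `cost_decline_per_10x`
    (as a fraction) for each 10x of volume. Illustrative model only."""
    tens = math.log10(scale_factor)
    cost = inference_cost_per_req * (1 - cost_decline_per_10x) ** tens
    return (revenue_per_req - cost) / revenue_per_req

# With flat inference pricing (the GPU-cost-spiral scenario), scale doesn't help:
print(round(gross_margin_at_scale(2.00, 1.50, 10), 2))        # 0.25
# Even a 20% cost decline per 10x of volume only reaches 40%:
print(round(gross_margin_at_scale(2.00, 1.50, 10, 0.20), 2))  # 0.4
```

Both projections land well under the 50% threshold above, which is the point: a cost spiral is a property of the unit economics, and it shows up in a two-line calculation long before it shows up in the burn rate.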
For investors: nine of these AI deaths were visible six months before the company shut down. The patterns were there. The question is whether you have a framework to see them before the postmortem writes itself.
The Charaka View
Manthan Intelligence runs these 10 patterns against every AI-first company that enters our analysis pipeline. When we see a model-dependent product without calibration loops, it gets flagged as high attrition risk. When we see a wrapper product with sub-60% gross margin at scale, we downgrade. When we see a founder claiming a data moat that’s actually transaction logs, we dig. The postmortem patterns are the checklist. The checklist exists to prevent the postmortem.
This analysis draws on Manthan Intelligence’s knowledge graph of 220+ startup assessments. It is informational only and does not constitute investment advice or a solicitation to invest in any security.
Charaka Notes by Manthan Intelligence — intelligence dispatches from the knowledge graph.