1000x Worker | 7 min read

The 1000x Investor: What Happens When You Build an Investment Fund That Never Stops Thinking

We built Manthan Intelligence as the AI-native analytical engine for Tavaga Fund. 87,000+ entities, 25 daily analyses, self-calibrating accuracy loops. Here are the real numbers.

I started building Manthan Intelligence in February 2026 with a question that sounds simple but isn’t:

What would a venture fund look like if the entire analytical infrastructure was AI-native from day one?

Not “a fund that uses AI tools.” Not “a fund with a ChatGPT subscription.” A fund where the knowledge base, the research engine, the pattern recognition, the calibration system, and the content production are all designed around AI agents from the ground up.

We built this system — Manthan Intelligence — as the analytical engine for the upcoming Tavaga Venture Capital Fund. Tavaga invests in seed to Series A startups across AI, Fintech, and Consumer industries.

The intelligence challenge in venture capital is fundamentally different from public markets: sparse data, no quarterly filings, no Bloomberg terminal for Series A companies. You either rely on very expensive data providers for basic information or build the information infrastructure from scratch, which is what Manthan Intelligence does.

Three months in, the system analyses 25 private companies every single day, scores its own predictions against reality, and rewires its own analytical lenses based on what it gets wrong.

Here are the real numbers.


The Knowledge Graph: 87,000+ Entities — and It Never Sleeps

The backbone of the entire operation is a knowledge graph holding over 87,000 entities across 14 categories. That’s roughly:

13,800 companies · 4,200 investors · 64,800 relationships between them · 300+ startup postmortems · 1,900+ funding rounds · 1,100+ completed analyses

Every day, autonomous agents scrape funding announcements, research new companies, enrich existing profiles, and extract the relationships between entities. The graph grows by hundreds of entities daily. It never sleeps. It never forgets a connection.

This isn’t a database you query. It’s a living model of the startup ecosystem that continuously reorganises itself as new information arrives.
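At its core, an entity-and-relationship store like the one described above can be sketched in a few lines. This is an illustrative toy, not the actual Manthan implementation; the entity IDs, names, and relation labels are hypothetical:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal in-memory entity/relationship store (illustrative sketch only)."""

    def __init__(self):
        self.entities = {}                 # id -> {"type": ..., "name": ...}
        self.relations = defaultdict(set)  # id -> set of (relation, other_id)

    def add_entity(self, eid, etype, name):
        # Enrichment is an upsert: re-adding an entity merges new fields,
        # it never duplicates the node.
        self.entities.setdefault(eid, {}).update({"type": etype, "name": name})

    def relate(self, src, relation, dst):
        self.relations[src].add((relation, dst))

    def neighbours(self, eid, relation=None):
        return [dst for rel, dst in self.relations[eid]
                if relation is None or rel == relation]

g = KnowledgeGraph()
g.add_entity("c1", "company", "Acme AI")       # hypothetical company
g.add_entity("i1", "investor", "Seed Capital")  # hypothetical investor
g.relate("i1", "invested_in", "c1")
```

The upsert-on-add design is what lets daily enrichment agents run repeatedly over the same announcements without bloating the graph.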


The Starting Point: Multiple Analytical Lenses, One Verdict

When we evaluate a company, it doesn’t go through a single analyst’s lens.

The Analytical Council runs multiple independent perspectives simultaneously — each calibrated to a different dimension of investment risk and opportunity. Think of it as a panel of specialists who never get tired, never have a bad day, and never anchor on the first data point they see:

→ One lens evaluates technology defensibility
→ Another stress-tests unit economics
→ Another maps competitive positioning
→ Another assesses through a macro and regulatory lens

The synthesis isn’t consensus — it’s structured disagreement. Where the lenses agree, that’s a strong signal. Where they disagree, that’s where the real due diligence begins.
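The "structured disagreement" step can be sketched as follows. This is a simplified illustration of the idea, not Tavaga's actual synthesis logic; the lens names and verdict labels are hypothetical:

```python
from collections import Counter

def synthesise(lens_verdicts):
    """lens_verdicts: dict of lens name -> 'invest' | 'pass' | 'conditional'.
    Returns the leading verdict plus an explicit record of who dissented,
    rather than collapsing everything into a single consensus."""
    counts = Counter(lens_verdicts.values())
    top_verdict, top_count = counts.most_common(1)[0]
    return {
        "verdict": top_verdict,
        "agreement": top_count / len(lens_verdicts),  # 1.0 = strong signal
        # Dissenting lenses mark where the real due diligence begins.
        "diligence_targets": [lens for lens, v in lens_verdicts.items()
                              if v != top_verdict],
    }

result = synthesise({
    "technology": "invest",
    "unit_economics": "invest",
    "competition": "conditional",
    "macro": "invest",
})
```

Here three of four lenses agree, so the verdict is "invest" with 0.75 agreement, and the dissenting competition lens is surfaced as the focus for human diligence.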

The system runs these assessments in structured batches, prioritised by deal relevance, sector activity, and portfolio exposure. It’s not brute force. It’s targeted intelligence.


The Part Nobody Talks About: A System That Learns From Its Own Mistakes

This is what makes the whole architecture fundamentally different from “using AI for analysis.”

Every day, the system runs 25 fresh company analyses through the full analytical pipeline — blind. The verdict is locked before the actual outcome is looked up. Then it scores itself.

Nearly 500 graded scorecards so far. Each one a training signal.
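The blind-scoring discipline is the important part: the verdict has to be frozen before the outcome is consulted. A minimal sketch of that lock-then-grade flow, with hypothetical field names and a simplified invest/pass grading rule (conditional verdicts omitted):

```python
import hashlib
import json
from datetime import datetime, timezone

def lock_verdict(company_id, verdict, confidence):
    """Freeze a prediction BEFORE the outcome is looked up.
    The digest makes later tampering with a locked verdict detectable."""
    record = {
        "company": company_id,
        "verdict": verdict,        # 'invest' | 'pass'
        "confidence": confidence,  # 0..1
        "locked_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

def grade(record, actual_outcome):
    """Score the locked verdict against what actually happened.
    Simplified rule: 'invest' is correct iff the company later raised up."""
    correct = (record["verdict"] == "invest") == (actual_outcome == "raised_up_round")
    return {**record, "outcome": actual_outcome, "correct": correct}

card = grade(lock_verdict("acme-ai", "invest", 0.8), "raised_up_round")
```

Each graded scorecard like `card` is one training signal for the loops below.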

The results feed into three interlocking calibration loops:

Loop 1 — Execution. Every analysis follows a structured workflow: initial screening, multi-dimensional assessment, synthesis of agreement and disagreement, final verdict. Each analytical lens runs a mandatory self-reflection check before publishing its output — questioning its own weakest claim, running an inversion scenario, checking for drift.

Loop 2 — The Improvement Loop. One metric rules everything: weighted accuracy — the system’s overall hit rate across all verdict types (invest, pass, conditional), weighted by confidence level. This is different from individual call reliability (e.g., INVEST reliability at 97.9% measures precision on invest calls specifically). Weighted accuracy captures the whole picture: how often is the system right, across every type of call it makes?

Currently at ~68% — up from the low-to-mid 60s when the dataset expanded to include harder, ambiguous cases in late March. The system has been running daily backtests for seven weeks now. Every prompt change, every calibration adjustment is tested against this single number. If accuracy improves, the change stays. If it doesn’t, it’s discarded. Binary. Ruthless.
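The metric itself is simple. A sketch of confidence-weighted accuracy as described above, with hypothetical scorecard tuples; the actual weighting scheme is not public, so treat this as one plausible reading:

```python
def weighted_accuracy(scorecards):
    """Confidence-weighted hit rate across all verdict types.
    Each scorecard: (verdict, confidence in [0, 1], correct: bool).
    High-confidence calls move the number more than tentative ones."""
    total = sum(conf for _, conf, _ in scorecards)
    hits = sum(conf for _, conf, correct in scorecards if correct)
    return hits / total if total else 0.0

cards = [
    ("invest", 0.9, True),
    ("pass", 0.6, True),
    ("conditional", 0.5, False),
]
# The binary rule: a prompt or calibration change survives only if
# weighted_accuracy on the backtest set improves against this baseline.
baseline = weighted_accuracy(cards)
```

With these three cards the baseline is 1.5 / 2.0 = 0.75; any change that drops the backtest below that is discarded.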

Loop 3 — Continuous Evaluation. Every analysis produces a score between 0 and 1. The system runs regression checks against a frozen benchmark set of deals. Weekly calibration sweeps compute accuracy by sector, by stage, by confidence band. Systematic biases get identified and corrected. Calibration notes are automatically injected into the analytical lenses at runtime — so measured weaknesses from last week directly shape this week’s analysis.
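A weekly sweep plus note injection of the kind described can be sketched like this. The bucket keys, threshold, and note wording are hypothetical, a minimal illustration of the mechanism rather than the production pipeline:

```python
from collections import defaultdict

def calibration_sweep(scorecards, key):
    """Group graded scorecards by one dimension (sector, stage, or
    confidence band) and compute plain accuracy per bucket."""
    buckets = defaultdict(list)
    for card in scorecards:
        buckets[card[key]].append(card["correct"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

def calibration_notes(accuracy_by_bucket, threshold=0.6):
    """Notes for buckets below threshold, to be injected into the
    analytical lenses at runtime so last week's measured weaknesses
    shape this week's analysis."""
    return [f"Accuracy in {b} is {acc:.0%}; apply extra scrutiny."
            for b, acc in sorted(accuracy_by_bucket.items()) if acc < threshold]

cards = [
    {"sector": "fintech", "correct": True},
    {"sector": "fintech", "correct": True},
    {"sector": "consumer", "correct": False},
    {"sector": "consumer", "correct": True},
    {"sector": "consumer", "correct": False},
]
notes = calibration_notes(calibration_sweep(cards, "sector"))
```

Here fintech scores 100% and generates no note, while the weak consumer bucket produces a scrutiny note for the next run.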

The numbers that matter most:

→ INVEST call reliability: 97.9% — when the system says invest, 97.9% of those companies went on to raise further funding at a higher valuation (the closest proxy for validation in venture, short of actual exits)
→ Decline detection: 100% — when it says a company is heading for trouble, it has been right every single time in the scored dataset
→ False positives dropped from 15 to 3 in a single week of calibration
→ 72+ learning entries cataloguing every type of mistake: analysis failures, backtest misses, cross-agent lessons, prompt regressions

This is not a static model. This is a system that gets measurably better every week.


What 303 Startup Postmortems Reveal

The knowledge graph includes 303 documented startup deaths. Not anecdotes — structured data with cause of death, timeline, sector, funding history, and competitive dynamics.

The patterns are brutally consistent:

Competition crushes roughly a third of failed startups. Not a vague "the market was crowded", but something specific: a competitor with better capital, faster execution, or an incumbent advantage. Median time from launch to death: just over four years.

Regulation kills more slowly but more decisively. About one in seven deaths. New compliance thresholds, licensing revocations, category bans. These deaths take five years on average. Founders keep pivoting inside a shrinking regulatory window until the capital runs out.

The surprising finding: unit economics kills fewer startups than you’d expect. Most founders sense bad unit economics early. The ones who don’t are already dead from competition or timing before unit economics becomes the proximate cause.


The Content Engine: 20+ Articles Published Autonomously

Charaka Notes — our public research publication on getmanthan.com — now has over 20 data-driven pieces covering startup survival patterns, sector intelligence, and AI-native operations.

An autonomous content engine identifies patterns from the knowledge graph, drafts analysis, runs it through fact-checking, and publishes. The editorial process: the system produces, a human approves the output.

Same principle as what Dan Shapiro calls a “dark factory” for software — except applied to research and content production. Specifications drive the work. Agents execute. Humans approve outcomes, not process.


Manthan Intelligence × Tavaga Fund: The Architecture in Practice

A quick note on how the pieces fit together, because people often ask.

Manthan Intelligence is the AI-native analytical platform — the knowledge graph, the analytical council, the calibration loops, the content engine. Everything described above. I built it from scratch starting February 2026.

Tavaga Fund is the venture fund that Manthan powers. Tavaga makes the investment decisions. Manthan provides the analytical infrastructure that gives a two-person fund the research depth of a team twenty times its size.

The reason I’m sharing this publicly: the architecture isn’t specific to Tavaga. Any investment operation — an emerging VC fund, an angel syndicate, a family office doing venture allocations — could deploy a similar system calibrated to their investment thesis, their sector focus, their risk parameters.

The calibration loops work because venture outcomes are ultimately verifiable: you make a prediction (invest/pass), time passes, and the outcome is observable (the company raised, died, or stagnated). That's a feedback loop. Which means you can calibrate. Which means accuracy improves systematically.


Why This Matters Far Beyond Venture Investing

If agentic engineering — Karpathy’s term, which I prefer to “vibe coding” — works for venture investing (one of the most complex, judgment-intensive, relationship-dependent forms of knowledge work), it works for consulting, legal, accounting, research, and every other professional service built on human analytical capacity.

The same principle applies to every knowledge work domain with measurable outcomes. If you can score predictions against reality, you can build an agentic system that learns.

The future of knowledge work is being redesigned globally. Small teams with AI-native architecture delivering output that previously required organisations ten or twenty times their size. We’re one proof point. There will be many others.

I’ll be sharing the building journey here — what works, what breaks, what surprised us, and the frameworks emerging from the data.

If you’re building or running a venture fund, angel syndicate, or investment operation and want to explore what an AI-native analytical infrastructure could look like for your thesis — I’d welcome the conversation. Connect with me here or reach out directly.

Mayank Mathur | Founder, Manthan Intelligence | GP, Tavaga Fund

Read the full analysis on Charaka Notes.
