Consider a scenario that has played out repeatedly across investment firms deploying AI for deal screening: a system reads pitch decks, extracts key metrics, and produces summary memos. After six months, the partners stop using it.
Not because it’s inaccurate. Because it’s positioned wrong. The AI produces “draft investment memos” that partners are supposed to review and edit. The partners — who have spent 20 years developing their own analytical frameworks — don’t want to edit someone else’s memo, even if that someone is a machine. The tool asks them to be editors when they want to be decision-makers.
This is the most common failure mode in AI deployment for knowledge work: putting the human in the wrong layer.
The Two-Layer Model
Every decision has two layers:
The analysis layer is where you gather data, identify patterns, test hypotheses, and surface risks. It’s iterative, detail-heavy, and time-consuming. A single deal analysis might require reading a 40-page deck, researching 15 competitors, checking 3 years of financial data, assessing the team’s track record, and mapping the regulatory landscape. This is where AI agents are not just useful but genuinely superior to humans — they’re faster, more thorough, and they don’t get tired at page 35.
The decision layer is where you weigh incommensurable factors — market timing against team quality against personal conviction against portfolio fit against relationship dynamics — and commit capital. This is where judgment lives. It’s not computational. It’s not even fully articulable. A partner who says “something feels off about this deal” and passes on it isn’t being irrational — they’re drawing on pattern recognition built across thousands of deals and decades of experience. That pattern recognition is exactly what AI agents cannot replicate, because it includes context that was never written down.
The failure mode: putting humans in the analysis layer (where they’re slow and inconsistent) or putting AI in the decision layer (where it lacks judgment and accountability).
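One way to make that boundary concrete is in the type system. Below is a minimal Python sketch, assuming a hypothetical separation where analysis artifacts can be produced by agents but the decision type refuses to exist without a named human signatory. The names (AnalysisBrief, Decision) and fields are illustrative, not any particular firm's system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AnalysisBrief:
    """Output of the analysis layer: produced by agents, consumed by humans."""
    deal_id: str
    metrics: dict           # extracted figures: revenue, growth, burn, etc.
    risks: list[str]        # surfaced red flags, not conclusions
    comparables: list[str]  # historical pattern matches
    produced_by: str = "agent-pipeline"

@dataclass(frozen=True)
class Decision:
    """Output of the decision layer: cannot be constructed without a human."""
    deal_id: str
    verdict: str    # "invest" or "pass"
    signed_by: str  # a named partner, never an agent
    rationale: str  # the judgment, in the partner's own words
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # Accountability requires a person: reject agent identities outright.
        if self.signed_by.startswith("agent"):
            raise ValueError("The decision layer requires a human signatory.")
```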
What Belongs Where
AI agents should handle:
- Initial screening: reading every deck, extracting metrics, and surfacing red flags. A human reading 500 decks a year misses things. A pipeline reading 500 a week doesn’t.
- Cross-validation: running the same company through multiple analytical lenses independently. No human team can produce 9 independent assessments of the same deal without anchoring on each other. Agents can (a sketch of this pattern follows the list).
- Pattern matching: “This company’s metrics look like Company X at the same stage, which went on to [outcome].” The knowledge graph makes every historical analysis available to every future one. Human memory is selective and biased.
- Monitoring: tracking portfolio companies daily for signals — hiring changes, competitor moves, regulatory shifts, customer sentiment. No human can monitor 20 portfolio companies across 50 signal sources continuously. Agents do this trivially.
- Synthesis: compressing 9 independent assessments into a single coherent brief that highlights the tensions, not just the consensus. This is where agents save the most partner time — turning a week of analyst work into 10 minutes of reading.
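As a concrete illustration of the cross-validation and synthesis items above, here is a minimal Python sketch. The nine lens names and the run_lens() helper are hypothetical stand-ins for real agent calls; the architectural point is that each lens runs in isolation, so independence holds by construction.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical analytical lenses. Each agent sees only the deck, never
# another agent's output, so the assessments cannot anchor on each other.
LENSES = [
    "market-sizing", "team-track-record", "unit-economics",
    "competitive-moat", "regulatory-exposure", "customer-evidence",
    "technical-diligence", "exit-paths", "downside-scenarios",
]

def run_lens(lens: str, deck_text: str) -> dict:
    """Placeholder for a real agent call (e.g. an LLM with a
    lens-specific prompt). Returns one independent assessment."""
    return {"lens": lens, "assessment": f"[{lens} analysis of deck]"}

def assess_deal(deck_text: str) -> list[dict]:
    # Run all lenses in parallel and in isolation.
    with ThreadPoolExecutor(max_workers=len(LENSES)) as pool:
        return list(pool.map(lambda l: run_lens(l, deck_text), LENSES))

def synthesize(assessments: list[dict]) -> str:
    # A real synthesis step would highlight where the lenses disagree
    # rather than average them away; this join is just a stand-in.
    return "\n".join(a["assessment"] for a in assessments)
```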
Humans should handle:
- Novel situations: when the analysis surfaces something the system hasn’t seen before — a new regulatory regime, a market structure that doesn’t fit historical patterns, a founder with an unconventional background that defies the usual heuristics. The human recognises novelty. The system recognises patterns.
- Relationship-dependent judgments: “Will this founder listen when things get hard?” is not a question that data answers. It’s a question that lunch answers.
- Ethical boundaries: the decision to pass on a profitable investment because it conflicts with the fund’s values is a human decision. Encoding ethics as rules is possible but insufficient — the interesting ethical questions are the ones the rules don’t cover.
- Final capital allocation: the signature on the wire transfer. The commitment of the fund’s capital and reputation. This is where accountability lives, and accountability requires a person.
- Portfolio construction: how this deal fits with existing investments, LP expectations, sector concentration limits, and the fund’s evolving thesis. This is strategic judgment that draws on context no agent has access to.
The Intervention Spectrum
It’s not binary. Between “the AI does everything” and “the human does everything,” there’s a spectrum of intervention points (a routing-policy sketch in code follows the list):
Fully automated (no human): Deal screening, research monitoring, data pipeline validation, knowledge-graph maintenance. These are high-volume, low-stakes decisions where human involvement is waste.
Human-in-the-loop (review before action): Outbound communications, published analysis, client deliverables. A human reviews the output before it reaches an external audience. The AI produces; the human approves.
Human-on-the-loop (monitor and intervene): Portfolio operations, ongoing analysis. Agents run continuously; humans review summaries and intervene when something looks wrong. Most of the time, the human’s job is to confirm the system is working correctly.
Human-led (agents assist): Investment committee deliberation, founder negotiations, LP conversations. The human leads; agents provide real-time data, comparables, and risk flags. The human is the decision-maker; the agents are the research team in the room.
Human-only (no agents): Relationship building, crisis management, fund strategy. These are the domains where AI assistance actively gets in the way because the value comes from human presence, not information processing.
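The spectrum translates directly into a routing policy: every task category gets an intervention level, and the level determines whether output ships unreviewed, waits for approval, or never leaves a human’s hands. A minimal sketch, with hypothetical task names:

```python
from enum import Enum, auto

class Intervention(Enum):
    FULLY_AUTOMATED   = auto()  # agents act; no human touch
    HUMAN_IN_THE_LOOP = auto()  # human approves before action
    HUMAN_ON_THE_LOOP = auto()  # human monitors, intervenes on anomaly
    HUMAN_LED         = auto()  # human decides; agents assist
    HUMAN_ONLY        = auto()  # agents stay out entirely

# Illustrative mapping, following the spectrum above.
POLICY = {
    "deal_screening":       Intervention.FULLY_AUTOMATED,
    "outbound_email":       Intervention.HUMAN_IN_THE_LOOP,
    "portfolio_monitoring": Intervention.HUMAN_ON_THE_LOOP,
    "ic_deliberation":      Intervention.HUMAN_LED,
    "founder_relationship": Intervention.HUMAN_ONLY,
}

def may_ship_without_review(task: str) -> bool:
    """Only fully automated output reaches the outside world unreviewed."""
    return POLICY[task] is Intervention.FULLY_AUTOMATED
```

The value of writing the policy down is auditability: when an output causes trouble, the first question is whether the task was assigned to the right level.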
The Test
If you’re deploying AI in a knowledge-work context, here’s the diagnostic: trace one decision from data to action, and ask at each step whether a human or an agent should be doing the work.
If you find humans doing analysis that agents could handle faster and more thoroughly — you’re wasting human judgment on mechanical work.
If you find agents making decisions that require judgment, accountability, or relationship context — you’re creating fragility that will surface at the worst possible time.
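The diagnostic can even be run mechanically over a traced decision. In the sketch below, each step carries two hand-collected labels (who does it today, and whether the work is mechanical or judgment); the step names and labels are illustrative, not prescriptive:

```python
# One traced decision: (step, who does it today, kind of work).
# Mechanical work belongs to agents; judgment work belongs to humans.
TRACE = [
    ("read 40-page deck",      "human", "mechanical"),
    ("extract metrics",        "human", "mechanical"),
    ("research competitors",   "agent", "mechanical"),
    ("assess founder fit",     "agent", "judgment"),
    ("sign the wire transfer", "human", "judgment"),
]

for step, actor, kind in TRACE:
    if actor == "human" and kind == "mechanical":
        print(f"WASTE: '{step}' burns human time on work agents do better.")
    elif actor == "agent" and kind == "judgment":
        print(f"FRAGILE: '{step}' hands judgment and accountability to an agent.")
```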
The firms that get this right will look like they have fewer people. They’ll actually have more judgment per decision, because the humans are spending 100% of their time on the work that only humans can do.
This analysis is informational and does not constitute investment advice, a research report, or a recommendation to buy, sell, or hold any security.