Runlog Atlas
Make decisions from large volumes of documents, with every insight traced to its source

Atlas turns document-heavy workflows into persistent, structured knowledge. Investment teams, legal teams, and underwriters can query across hundreds of files, surface what matters, and back every decision with a traceable chain to the source — not a summary that disappears after the call.

The Problem

You're sitting on hundreds of documents with the answers already inside them. You can't query them, compare them, or trust them at scale.

Decisions lack grounding

AI tools summarize — but summaries can't be audited. When a decision is challenged in a deal, a dispute, or a regulatory review, there's no traceable chain back to the source.

Insights don't persist

An analyst reads 50 pitch decks. That pattern recognition lives in their head — not the system. The next analyst starts from zero. Institutional knowledge never compounds.

Volume breaks teams

Every new data room, filing, or discovery set adds linear effort. There's no mechanism for prior work to reduce future work. The math never improves.

What Atlas Does

Atlas turns a corpus of documents into a queryable, persistent knowledge base — where every insight is anchored to its source and every decision can be explained.

Persistent Structured Insights

Every document is broken into atomic observations — financials, claims, risks, entities, terms. These observations persist across your entire corpus, not just the current session.

Query across 500 data rooms the same way you query one. Patterns surface. Outliers surface. Nothing is lost when the tab closes.

Full Provenance on Every Decision

Every insight links back to the exact source: document, page, section, and the confidence score that drove the extraction. No black-box summaries.

When a decision is questioned — by a partner, a counterparty, or a regulator — Atlas shows exactly what was seen and why it was flagged.
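As a concrete sketch, an observation record of this kind might carry its provenance fields directly. The schema below is illustrative only (a minimal Python sketch, not Atlas's actual data model):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    """One atomic, source-anchored fact extracted from a document.

    Illustrative fields only; they mirror the provenance chain described
    above: document, page, section, and extraction confidence.
    """
    claim: str          # e.g. "FY2023 ARR of $4.2M"
    category: str       # "financial", "claim", "risk", "entity", "term"
    document_id: str    # which file the claim came from
    page: int           # exact page within that file
    section: str        # logical section within the page
    confidence: float   # extraction confidence in [0.0, 1.0]

# Hypothetical example: a challenged decision walks back to this record,
# and from it to the exact page the number came from.
obs = Observation(
    claim="FY2023 ARR of $4.2M",
    category="financial",
    document_id="dataroom/acme/financials.pdf",
    page=12,
    section="Revenue Summary",
    confidence=0.94,
)
```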

Confidence-Routed Review

High-confidence observations flow automatically. Low-confidence observations go to a priority queue — not a pile of documents.

Humans spend time on genuine uncertainty, not verification theater. Review effort shrinks as volume grows — not the other way around.
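In code, that routing reduces to a split around a confidence threshold. The sketch below assumes the illustrative Observation record above and a hypothetical cutoff:

```python
import heapq
import itertools

ACCEPT_THRESHOLD = 0.9         # illustrative cutoff, not a product default
_tiebreak = itertools.count()  # keeps the heap ordered without comparing records

def route(observations):
    """Accept high-confidence observations automatically; push the rest
    onto a priority queue so the least confident is reviewed first."""
    accepted, review_queue = [], []
    for obs in observations:
        if obs.confidence >= ACCEPT_THRESHOLD:
            accepted.append(obs)
        else:
            heapq.heappush(review_queue, (obs.confidence, next(_tiebreak), obs))
    return accepted, review_queue
```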

Judgment That Compounds

When a human corrects an extraction, that correction becomes a durable asset — reapplied automatically to every similar observation going forward.

An analyst who reviews 1,000 documents makes the next 10,000 cheaper. The system learns the team's standards, not just the model's defaults.
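One plausible mechanism, sketched below, is to store each correction as a keyed rule and replay it on matching observations. The matching here is deliberately naive (exact category and claim text); a real system would match on richer features:

```python
class CorrectionMemory:
    """Stores human corrections and replays them on similar observations.

    Sketch only: 'similar' means an exact (category, claim) match here;
    a production system would generalize corrections more broadly.
    """

    def __init__(self):
        self._rules = {}  # (category, claim) -> corrected claim

    def record(self, obs, corrected_claim):
        """A human fixed this extraction once; remember the fix."""
        self._rules[(obs.category, obs.claim)] = corrected_claim

    def apply(self, obs):
        """Return the corrected claim if a rule matches, else the original."""
        return self._rules.get((obs.category, obs.claim), obs.claim)
```

Once recorded, the same fix lands on every future matching observation at no additional review cost, which is what lets review effort fall as volume grows.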

Built for Two Workflows

Both workflows share the same underlying constraints: high-stakes decisions, large document volumes, and a need for traceable answers under time pressure.

Primary: Due Diligence & Underwriting (VC / PE)

Investment teams read the same document types — pitch decks, data rooms, cap tables, financial statements, founder memos — deal after deal. The extraction problem is solved. The institutional memory problem isn't.

Extract financials, team pedigree, market claims, and risk flags from every document in a data room

Surface inconsistencies across documents — revenue figures that don't match, cap table discrepancies, conflicting claims

Route only uncertain or conflicting observations to the analyst — not the whole data room

Build a persistent knowledge base across deals — so pattern recognition from deal 1 informs deal 100

"A Series A fund analyst reviews 400+ decks a year. Atlas retains what they learned from each one — compounding judgment across the entire portfolio, not just the current deal."

Secondary: RFQ Response & Legal Discovery

Responding to an RFQ or navigating legal discovery means sifting through thousands of pages under strict deadlines. The stakes are high, the documents are dense, and errors carry real consequences.

Find every responsive document and extract the relevant observations — with source citations

Surface contradictions and privilege issues before review — not during production

Full auditability: every extracted claim links to the page and paragraph it came from

Reuse prior review decisions — so the same clause type doesn't get re-examined from scratch every engagement

How Atlas Works

1. Ingest any file format

PDF, Excel, JSON, images, audio, and video files are automatically partitioned into logical sections and broken into atomic observations. Each observation is independent, inspectable, and confidence-scored — linked back to its exact source location.
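The shape of that pipeline, reduced to a toy text-only version (function and field names here are hypothetical, and real partitioning would be format-aware):

```python
from dataclasses import dataclass

@dataclass
class Section:
    document_id: str
    page: int
    title: str
    text: str

def partition_text(document_id: str, pages: list[str]) -> list[Section]:
    """Toy partitioner: treats each page as one logical section.
    Real partitioning is format-aware (PDF layout, Excel sheets, ...)."""
    return [Section(document_id, i + 1, f"page-{i + 1}", text)
            for i, text in enumerate(pages)]

def extract(section: Section) -> list[dict]:
    """Toy extractor: one 'observation' per sentence, each carrying its
    source location and a placeholder confidence a model would supply."""
    return [
        {
            "claim": sentence.strip(),
            "document_id": section.document_id,
            "page": section.page,
            "section": section.title,
            "confidence": 0.5,  # placeholder; a real extractor scores this
        }
        for sentence in section.text.split(".")
        if sentence.strip()
    ]
```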

2. Route uncertainty, not volume

High-confidence observations are accepted automatically. Low-confidence observations surface in a priority queue — with the exact reason for uncertainty shown. Humans spend time on what actually needs judgment, not on re-verifying what the model already knows.

3. Retain judgment as a durable asset

Every human correction is stored and applied forward. The team's standards compound across documents and over time — so the same question never costs the same effort twice.

4. Decisions with provenance, not summaries

Every insight is anchored to its source. When you make a decision — pass on a deal, respond to an RFQ, produce a discovery set — Atlas can show exactly what evidence it rests on and where that evidence came from.
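A decision record in such a system can be as simple as a conclusion plus pointers to the observations it rests on, so explaining a decision becomes a lookup rather than a reconstruction. An illustrative sketch, reusing the Observation record from earlier:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A conclusion plus the source-anchored observations behind it."""
    conclusion: str                                # e.g. "pass on the deal"
    evidence: list = field(default_factory=list)   # Observation records

    def explain(self) -> str:
        """Walk the decision back to exact source locations."""
        lines = [f"Decision: {self.conclusion}"]
        for obs in self.evidence:
            lines.append(
                f"  - {obs.claim} ({obs.document_id}, p.{obs.page}, "
                f"{obs.section}, confidence {obs.confidence:.2f})"
            )
        return "\n".join(lines)
```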

Why I Built This

Atlas isn't a research project. I hit the same broken system three times at three different companies — each time with more documents, more accuracy, and the exact same outcome.

1. Meta (2022–2023)

I built ML systems serving 50M+ users daily and improved hate-organization detection by 15% PR-AUC. The lesson: confidence scores on a dashboard change nothing. If the score doesn't determine what a human sees next, it's decorative.

2. Dirac (2023–2025), Head of AI

As Head of AI at a CAD-automation startup, I processed 1M+ geometries and unstructured engineering PDFs, then built confidence scoring and human-review workflows from scratch. I cut latency by 70% and workflow interruptions by 90% by making the system route uncertainty instead of handing everything to humans.

3. Bridge (2025)

I built document intelligence pipelines processing 10,000+ documents daily at 90%+ accuracy. Teams still reviewed nearly every output. Even when the extraction was right, no one could make a confident decision from it — because nothing persisted, nothing was traceable, and prior work didn't carry forward. I founded Runlog to fix that layer.

"I've built document pipelines, confidence scoring, and review queues at three different companies. The extraction problem was solved at each one. The decision problem wasn't — no persistence, no provenance, no way to carry forward what the team already knew. That's what Atlas fixes."

— Vatsal Trivedi, Founder & CEO, Runlog

The Market

TAM: $500B+ by 2030
Enterprise AI workflows where document-centric decisions require traceable, auditable intelligence.

SAM: $60–100B by 2030
Document review and confidence systems across private markets, legal, finance, and government.

SOM: $5B+ by 2030
VC/PE due diligence, underwriting, and legal discovery, starting from $500M–$1B today.

The Document AI market ($14.7B → $27.6B by 2030) and the HITL (human-in-the-loop) AI market ($4.8B → $39B by 2033) are both growing fast, but neither solves the persistence and provenance layer. That's the gap Atlas fills.

If you're investing in document-heavy workflows, let's talk.

Runlog Atlas doesn't summarize documents. It builds persistent, traceable intelligence from them — so your team's judgment compounds instead of starting over with every new file.