Enterprise AI adoption has crossed a tipping point. 78% of enterprises use AI for document processing. But extraction accuracy means nothing when humans still review everything.
$500B+
Global AI Software Market by 2030
Where review infrastructure is mission-critical
$60B+
Document + HITL AI by 2030
Core intelligence stack requiring human judgment
The bottleneck isn't extraction. It's review, trust, and making human judgment reusable. That's what RunLog Atlas solves.
At Bridge, I built pipelines handling 10,000+ documents daily. Best-in-class extraction. High confidence scores. Yet review cost scaled linearly with volume: more documents meant more human hours.
"AI extraction scales linearly. Review scales catastrophically."
RunLog Atlas makes review scale with ambiguity, not volume. Human judgment compounds instead of being throwaway labor.

Meta
2022 - 2023
Built ML systems serving 50M+ users daily. Improved hate-speech detection accuracy by 15%. Learned that confidence scores without operational routing are meaningless.

Dirac
2023 - 2025 · Head of AI
Built systems from scratch handling 1M+ geometries. Reduced user-facing latency by 70% and workflow interruptions by 90%. Saved $300K annually.

Bridge
2025
Built document intelligence pipelines processing 10,000+ documents daily. Saw firsthand why extraction alone never scales: review was the bottleneck.
Extraction is solved
Models improve every quarter. The bottleneck has shifted to review.
Human judgment must compound
One expert reviews 1,000 cases. That judgment should make the next 10,000 easier.
Confidence needs routing
Every system has scores. Almost none use them operationally.
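Using confidence operationally can be as simple as thresholded dispatch: auto-accept high-confidence extractions, spot-check the middle band, and send only the ambiguous tail to full human review. A minimal sketch; the thresholds and route names here are illustrative assumptions, not RunLog Atlas internals:

```python
def route(confidence: float,
          auto_threshold: float = 0.95,
          review_threshold: float = 0.70) -> str:
    """Dispatch an extraction by its confidence score.

    Thresholds are hypothetical defaults for illustration.
    """
    if confidence >= auto_threshold:
        return "auto_accept"    # trusted: no human touches it
    if confidence >= review_threshold:
        return "spot_check"     # sampled review; judgment becomes training signal
    return "human_review"       # ambiguous: full expert review

# Review load now tracks ambiguity, not volume: a batch that is
# mostly high-confidence generates almost no human work.
batch = [0.99, 0.97, 0.82, 0.55, 0.96]
routes = [route(c) for c in batch]
```

Under this policy, doubling document volume only doubles review hours if the share of ambiguous documents stays the same; if the model improves, review cost falls even as volume grows.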
Systems should scale efficiently
Current AI systems get more expensive as volume grows. Atlas flips this: cost tracks ambiguity, not volume.
I've lived this problem at Meta, Dirac, and Bridge. RunLog Atlas is the system that should have existed.
