

The Legal AI Landscape - This article is part of a series.
Part 4: This Article

Managed Services Providers

Document review accounts for 70–80 percent of litigation costs. The firms that have run that work for decades are now embedding AI into it — not as a product they sell, but as a capability woven into the service.

Companies like Epiq, Lighthouse, Consilio, FTI Technology, Ankura, and KLDiscovery have spent decades doing the grunt work of litigation and investigations: processing terabytes of data, staffing document reviews, managing privilege logs, running forensic collections. Now they’re layering generative AI on top of that operational expertise — selling outcomes, not software licenses. (Consilio’s 2026 Global Survey found that selecting and deploying legal technology has overtaken work volume as the biggest challenge for legal professionals — 54 percent cited technology decisions versus 52 percent for volume.)

This is the fourth post in our Legal AI Landscape series. The first covered foundation models. This one maps the managed services providers and their role in the ecosystem.

The Major Providers and Their AI

Six firms dominate the managed legal services market. All six launched or significantly expanded AI platforms between late 2024 and early 2026. They fall into three categories.

Full-Service E-Discovery Providers

Epiq, Lighthouse, and Consilio offer the most developed AI suites — covering review, privilege, early case assessment, and case strategy — backed by dedicated AI teams and deep Relativity integration.

Epiq has 4,000 employees across 17 countries and more than 2,600 clients. Epiq AI, powered by the proprietary Laer™ platform, launched in January 2025 and expanded in March 2026 into agentic AI solutions: Epiq AI for Review (automates over 80 percent of review at up to 500,000 documents per hour), Epiq AI for Privilege (automated privilege classification and logging), Epiq AI for Antitrust, Epiq Assist (conversational AI for fact research and deposition preparation), and Epiq AI Accelerators (translation, image analysis, OCR directly in RelativityOne). In its first year, Epiq AI gained 130 clients, supported by over 200 AI consultants, data scientists, and engineers, and the platform won the 2026 Legalweek Leaders in Tech Law Award. In March 2026, Epiq acquired LitLingo, a communications monitoring AI, expanding into proactive compliance.

Lighthouse has more than 30 years in e-discovery and information governance, with 50+ multinational corporate clients. LighthouseIQ, launched January 2026, includes four AI applications on a proprietary IQ Fabric infrastructure: IQ Answers (natural language questions across a document set before full review), IQ Case Strategy (chronologies, timelines, deposition prep, witness summaries), IQ Review (AI-driven responsive content identification), and IQ Priv (generative and predictive AI for privilege logs). Pressure-tested across 1.4 billion documents. Cleary Gottlieb partner C.J. Mahoney publicly credited LighthouseIQ with enabling his team to substantiate a damages theory with data-backed confidence. Lighthouse also expanded its AI Search product to the UK and Europe in December 2025, aligned with UK GDPR and the EU AI Act.

Consilio covers e-discovery, managed review, investigations, compliance, and flexible legal talent (Lawyers On Demand). Its Aurora platform expanded in early 2026 with: AI Review (custom fine-tuned classification models), AI Investigate (conversational AI for fact-finding), AI PrivGen and AI PrivDetect (privilege identification and logging), AI Summarize, TrueLaw (narrative work product from review data), and Verity Review (announced March 2026, a purpose-built AI-native review platform). Consilio partnered with Prevail for real-time AI-powered deposition verification. All AI runs on private cloud data centers, not public cloud. Consilio’s 2026 Global Survey found 52 percent of respondents identify improving review efficiency and quality as their most critical challenge.

Expert-Led Advisory Firms

FTI Technology and Ankura embed AI within broader consulting engagements — the technology is one tool within a larger advisory framework.

FTI Technology is the technology segment of FTI Consulting (NYSE: FCN, $3.8B revenue, 8,100+ employees, 32 countries). IQ.AI, launched in 2024 and expanded in March 2026, is a patent-pending platform combining proprietary workflows with generative AI from multiple providers: first-pass review, privilege review, privilege logging, and investigation analysis. IQ.AI Studio adds pre-built AI tasks for antitrust, data breach, cross-border litigation, and investigations, with early access to agentic capabilities. FTI maintains partnerships with Reveal and Relativity, selecting and configuring the best tools per engagement rather than defaulting to a single platform. The General Counsel Report 2026, co-published with Relativity, found AI adoption in corporate legal departments nearly doubled to 87 percent.

Ankura (~2,100 employees) covers disputes, investigations, restructuring, cybersecurity, data privacy, and financial crime compliance. Ankura AI includes a custom-trained LLM for private deployment, Ankura Otter Analytics™ (patented platform with predictive modeling, image analytics, and sentiment analytics integrated with Relativity), and Ankura AI Analyst (financial crime compliance — KYC, AML alerts, sanctions screening, enhanced due diligence with multi-LLM quality control). In early 2026, Ankura acquired Omniscient Platforms to strengthen AI capabilities in Latin America. Ankura sits closer to the advisory end — its AI tools are deployed by consultants and subject matter experts (former prosecutors, forensic accountants, compliance officers) who use AI to augment investigations rather than running high-volume document review.

Platform-Centric Providers

KLDiscovery operates the Nebula platform for processing, review, and production, with AI layered on top: ECAi (generative AI for early case assessment — themes, categorization, custodian activity analysis), AI-enabled review with managed review teams, sentiment analysis, and interactive timeline visualization.

How AI Changes the Workflow

The standard managed review workflow — scoping, processing, review, production — hasn’t changed. What’s changed is what happens inside it.

AI processing. Documents flow through AI classification, extraction, and analysis: responsiveness, privilege flags, PII detection, key term extraction, sentiment analysis. At scale, this runs at hundreds of thousands of documents per hour.

Human validation. Experienced reviewers handle exceptions, edge cases, and quality control — monitoring AI performance, adjusting prompts, and validating outputs against legal standards. For privilege, an AI flag is a starting point, not a final determination.
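The split between AI processing and human validation can be sketched as a confidence-based routing rule. This is a toy illustration, not any provider's actual implementation: the threshold, the field names, and the always-route-privilege policy are assumptions chosen to match the description above.

```python
# Minimal sketch of confidence-based routing between AI output and
# human validation. Threshold and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AiCall:
    doc_id: str
    label: str         # e.g. "responsive", "privileged"
    confidence: float  # model's self-reported confidence, 0..1

def route(call: AiCall, auto_threshold: float = 0.90) -> str:
    """High-confidence calls pass through; the rest queue for reviewers.
    Privilege flags always get human sign-off, per the text above."""
    if call.label == "privileged":
        return "human_review"  # an AI flag is a starting point, not a determination
    if call.confidence >= auto_threshold:
        return "auto_accept"
    return "human_review"

print(route(AiCall("DOC-001", "responsive", 0.97)))  # auto_accept
print(route(AiCall("DOC-002", "privileged", 0.99)))  # human_review
```

The point of the sketch is the asymmetry: responsiveness calls can be auto-accepted above a threshold, while privilege determinations always land in the human queue regardless of model confidence.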

[Figure: AI processing and human validation feedback loop — corrections recalibrate the model, improving precision from 75% to 90% by midpoint]

Recall and Precision Across Review Methods

Grossman and Cormack’s landmark 2011 study in the Richmond Journal of Law and Technology, analyzing data from the TREC Legal Track, demonstrated that technology-assisted review achieved results superior to exhaustive manual review on both recall and precision. The TREC 2016 Total Recall Track confirmed that even a hypothetical perfect human assessor with 100 percent precision would achieve only about 70 percent recall, because individual reviewers disagree on what counts as relevant.

| Method | Typical Recall | Typical Precision | Notes |
| --- | --- | --- | --- |
| Manual (human) review | 60–70% | Highly variable | Grossman & Cormack 2011; quality depends heavily on reviewer training and fatigue |
| TAR 1.0 (batch training) | 75–80% | ~80% | Courts have accepted 75%+ as reasonable (Lawson v. Spirit AeroSystems) |
| TAR 2.0 / CAL (continuous active learning) | 85–90%+ | Higher than TAR 1.0 at equivalent recall | EDRM 2024; Grossman & Cormack TREC 2015/2016 |
| GenAI-assisted review | 90%+ (vendor-reported) | Comparable to or exceeding TAR 2.0 | EDRM 2024; Relativity aiR, Everlaw, DISCO Cecilia; less independent benchmarking available |

Recall = percentage of all relevant documents successfully identified. Precision = percentage of documents identified as relevant that actually are relevant. Sources: Grossman & Cormack, Richmond JOLT (2011); TREC 2016 Total Recall Track; EDRM Review in Transition (2024); DISCO precision/recall analysis.
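The two definitions reduce to simple ratios over a confusion matrix. The counts below are invented for illustration — they do not come from any of the cited studies:

```python
# Toy illustration of the recall/precision definitions above.
# The document counts are invented, not drawn from any study.

def recall(true_positives: int, false_negatives: int) -> float:
    """Share of all relevant documents the review actually found."""
    return true_positives / (true_positives + false_negatives)

def precision(true_positives: int, false_positives: int) -> float:
    """Share of documents flagged as relevant that truly are."""
    return true_positives / (true_positives + false_positives)

# Suppose a population holds 10,000 truly relevant documents and the
# review flags 9,500, of which 8,500 are actually relevant.
tp, fp, fn = 8_500, 1_000, 1_500
print(f"recall    = {recall(tp, fn):.0%}")     # 85%
print(f"precision = {precision(tp, fp):.0%}")  # 89%
```

Note the trade-off the table implies: a review can hit high precision while still missing a large share of relevant documents, which is why recall is the headline metric in defensibility disputes.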

The 60–70 percent recall rate for manual review is the baseline — and it is not very good: roughly one in three relevant documents missed entirely. TAR 2.0 and GenAI-assisted review close that gap significantly, but the GenAI figures carry a caveat: most of the 90%+ recall claims come from vendors testing their own tools. Independent benchmarking at the rigor of the TREC Legal Track studies hasn’t caught up yet.

Better accuracy also changes the cost equation. Finding more relevant documents on the first pass means less rework, fewer missed deadlines from late-discovered evidence, and lower risk of sanctions — savings that don’t show up in a per-document price comparison but dwarf the model costs.

The Cost and Speed Math

Document review accounts for 70–80 percent of total litigation costs — roughly $42 billion per year industry-wide, according to the American Bar Association.

The Pricing Stack

The cost of AI-augmented review has four layers, and each is compressing.

[Figure: The four layers of document review cost — from raw API at $0.01–0.05 per document to human contract review at $1–3, with platform pricing compressing toward zero]

Layer 1: Raw API costs. As we covered in The Foundation, reviewing a single document through a mid-tier model like Claude Sonnet 4.6 costs roughly $0.03. Processing 250,000 documents through a budget-tier model runs about $2,500. A frontier model on that volume stays under $15,000.
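The per-document figures fall out of token math. The token counts and per-million-token rates below are illustrative assumptions chosen to land near the article's figures — they are not published prices for any specific model:

```python
# Back-of-the-envelope API cost for one classification pass per document.
# Token counts and per-million-token rates are illustrative assumptions,
# not actual prices for any named model.

def per_doc_cost(in_tok: int, out_tok: int, in_rate: float, out_rate: float) -> float:
    """Cost of classifying one document; rates are $ per million tokens."""
    return (in_tok * in_rate + out_tok * out_rate) / 1e6

DOC_IN, DOC_OUT = 8_000, 500  # document text plus prompt; structured answer

budget   = per_doc_cost(DOC_IN, DOC_OUT, in_rate=1.0, out_rate=3.0)
mid_tier = per_doc_cost(DOC_IN, DOC_OUT, in_rate=3.0, out_rate=15.0)
frontier = per_doc_cost(DOC_IN, DOC_OUT, in_rate=5.0, out_rate=25.0)

for name, c in [("budget", budget), ("mid-tier", mid_tier), ("frontier", frontier)]:
    print(f"{name:>8}: ${c:.4f}/doc, ${c * 250_000:,.0f} per 250K docs")
```

Under these assumed rates, the budget tier lands near $2,400 for 250,000 documents and the frontier tier near $13,000 — the same order of magnitude as the figures above.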

Layer 2: Platform pricing. The e-discovery platforms are driving their AI pricing down to those token API costs. Relativity made aiR for Review and aiR for Privilege free — bundled into standard RelativityOne pricing in early 2026. Everlaw made single-document AI features free. DISCO collapsed its entire platform into a single per-GB fee with AI included. This is a market-share play: pricing AI review at or near API token cost, betting that adoption locks customers into their ecosystems.

Layer 3: Managed services markup. Providers charge a 4–15x markup over API costs. That covers prompt engineering, security certifications (SOC 2, HIPAA, data residency), the human QC layer, project management, and defensibility.

Layer 4: Human contract review. Traditional managed review staffed with contract attorneys runs $1–$3 per document for first-pass responsiveness and $4–$8 for privilege review — the baseline that AI-augmented review is displacing.

| | Raw API Cost | AI-Augmented Managed Review | Human Contract Review |
| --- | --- | --- | --- |
| Per document | $0.01–$0.05 | $0.11–$0.50 | $1.00–$3.00 |
| 250K documents | $2,500–$12,500 | $27,500–$125,000 | $250,000–$750,000 |
| 1M documents | $10,000–$50,000 | $110,000–$500,000 | $1,000,000–$3,000,000 |
| What you get | Raw classification output | Classification + QC + privilege log + defensibility | Human-reviewed, coded documents |

Raw API costs assume mid-tier model pricing from The Foundation. Managed review figures from Winter 2026 eDiscovery Pricing Survey. Human review at $1–$3/doc first-pass per DecoverAI benchmark.

Speed: Why It Matters More Than Cost

The calendar is often the real constraint. A regulatory subpoena with a 30-day response window, a whistleblower investigation that needs answers in days, a post-breach notification deadline of 30–60 days — none of these wait for a six-month review timeline.

Human review moves at 40–50 documents per hour per reviewer. DISCO’s Cecilia processes roughly 25,000 per hour. Epiq claims up to 500,000.

| | Human Review | AI-Augmented Review |
| --- | --- | --- |
| Throughput | 40–50 docs/hour/reviewer | 25,000–500,000 docs/hour |
| 250K documents | ~7 weeks (25 reviewers) | 1–3 days (AI + QC team) |
| 1M documents | ~27 weeks (25 reviewers) | 3–7 days (AI + QC team) |
| Time to first strategic insight | Weeks into review | Hours (via ECA tools) |

Human review timeline assumes 25 reviewers at 40 docs/hour, 40-hour weeks, plus 10% QC overhead. AI-augmented timeline includes scoping, AI processing, human validation, and production.
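The footnote's assumptions reduce to a quick calculator. The human-side parameters come from the footnote itself; the AI-side throughput and fixed QC allowance are assumptions for the sketch, since actual AI timelines depend on scoping and validation scope:

```python
# Timeline math behind the table's footnote: 25 reviewers, 40 docs/hour,
# 40-hour weeks, 10% QC overhead. AI-side rate and QC allowance are
# illustrative assumptions, not any vendor's benchmark.

def human_review_weeks(docs: int, reviewers: int = 25, docs_per_hour: int = 40,
                       hours_per_week: int = 40, qc_overhead: float = 0.10) -> float:
    weekly_throughput = reviewers * docs_per_hour * hours_per_week
    return docs / weekly_throughput * (1 + qc_overhead)

def ai_review_days(docs: int, docs_per_hour: int = 100_000, qc_days: float = 1.0) -> float:
    """AI pass running around the clock, plus a fixed human-QC allowance."""
    return docs / docs_per_hour / 24 + qc_days

for volume in (250_000, 1_000_000):
    print(f"{volume:>9,} docs: {human_review_weeks(volume):4.1f} weeks human, "
          f"{ai_review_days(volume):.1f} days AI-augmented")
```

Under the footnote's assumptions, 1M documents works out to 27.5 reviewer-weeks — the table's ~27 weeks — while the AI pass is dominated by the human QC allowance rather than the model throughput.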

That speed difference changes the strategic calculus of litigation. A team that can review an opposing party’s production in days instead of weeks can prepare better deposition questions, file more targeted motions, and make earlier settle-or-fight decisions.

Why QC Is the Whole Point

QC isn’t just a checkpoint at the end — it’s an input that makes the AI better in real time. The AI surfaces documents it’s least confident about for human review and incorporates those corrections to refine its classification of the remaining population. Every overturned coding decision recalibrates how the model handles similar documents still in the queue. A review that starts at 75 percent precision on a novel document type can reach 90 percent by midpoint if corrections flow back continuously. Humans still decide which corrections matter most, adjust thresholds mid-review, and catch when the AI is systematically missing a document category critical to the case theory.
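The loop described above — surface the least-confident calls, feed reviewer corrections back, re-score the queue — is a form of uncertainty sampling. This toy sketch uses an invented two-feature population and a crude per-feature weight update; real platforms use far richer classifiers, but the control flow is the same:

```python
# Toy sketch of the correction feedback loop: each round, the least
# confident AI calls go to reviewers, and their labels recalibrate a
# simple per-feature score used on the remaining queue. Data, features,
# and update rule are all invented for illustration.
import random

random.seed(7)
# Toy docs: (id, feature), where "contract" documents are truly relevant.
queue = [(i, random.choice(["contract", "newsletter"])) for i in range(1000)]
weights = {"contract": 0.55, "newsletter": 0.45}  # near-uncertain start

def truth(feature: str) -> bool:
    """Ground truth a human reviewer would apply."""
    return feature == "contract"

for round_num in range(3):
    # Surface the calls the model is least sure about (closest to 0.5).
    queue.sort(key=lambda doc: abs(weights[doc[1]] - 0.5))
    batch, queue = queue[:100], queue[100:]
    for _, feat in batch:  # each human correction nudges the model
        target = 1.0 if truth(feat) else 0.0
        weights[feat] += 0.3 * (target - weights[feat])

print(weights)  # "contract" climbs toward 1.0, "newsletter" toward 0.0
```

After three rounds of corrections the scores separate cleanly — the toy analogue of precision climbing from 75 to 90 percent as corrections flow back mid-review.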

The feedback loop also drives the review from broad classification toward the documents that actually win or lose the case. First-pass AI review answers a blunt question: responsive or not? But litigation teams need to get from a million documents down to 50 per witness to be deposed, or the 100 that go on an exhibit list. Each round of human correction narrows the AI’s focus — from responsive documents to key documents, from key documents to the specific communications that establish notice, demonstrate intent, or contradict deposition testimony. That refinement — from classification to case strategy — is where the combination of AI speed and human judgment creates something neither delivers alone.

Barriers to Adoption

In adversarial civil litigation, two procedural barriers slow AI review adoption.

No judicial precedent for GenAI review. Da Silva Moore v. Publicis Groupe (S.D.N.Y. 2012) was the first major decision approving predictive coding, and TAR took years to gain broad court acceptance after that. GenAI-assisted review is at a similar inflection point. Courts have issued at least 35 standing orders requiring AI disclosure for submissions, but no equivalent of Da Silva Moore has blessed GenAI-specific review workflows for document production. Until that precedent develops, litigation teams face the risk that opposing counsel will challenge the methodology — and will need to explain their process, validation metrics, and human oversight to a court that may never have evaluated GenAI review before.

ESI protocols must address AI upfront. Under FRCP Rule 26(f), parties must confer early to discuss discovery handling, including whether AI-driven review methods will be used and what transparency will be required. If you plan to use AI-augmented review, that decision belongs in the Rule 26(f) conference and in the ESI protocol — not introduced after the review is complete. The protocol should specify the AI tools, the validation and sampling methodology, how exceptions are handled, and the human oversight framework. Opposing counsel who learn about AI review after production are far more likely to challenge it than those who negotiated the terms in advance. Rule 26(g) compounds this: attorneys must certify that discovery responses reflect a “reasonable inquiry,” and blind reliance on AI without validation could violate that duty.

Where AI-Augmented Review Has the Fewest Barriers

The barriers described above — judicial precedent, ESI protocol negotiation, Rule 26(g) certification risk — apply to adversarial civil litigation. Several high-volume use cases sidestep them entirely.

No opposing party, no court supervision. Internal investigations triggered by whistleblower complaints, FCPA concerns, or compliance failures don’t face adversarial discovery rules. A compliance team reviewing two million Slack messages for a potential FCPA violation doesn’t need judicial blessing to use AI classification. It needs speed, accuracy, and a defensible process in case the matter escalates. The same applies to cyber incident response, where state breach notification deadlines of 30–60 days drive the timeline and the regulator cares whether you notified on time, not what technology identified the affected individuals.

Regulatory production where the government sets the terms. HSR Second Requests from the FTC or DOJ during merger review involve massive data volumes under deal-critical deadlines — every day of delay risks killing the transaction. Civil investigative demands (CIDs) from the DOJ, FTC, CFPB, or state attorneys general are pre-litigation administrative subpoenas with no Rule 26(f) conference and no opposing counsel. In both cases, the government dictates the process, already contemplates technology-assisted review, and cares about completeness and timeliness.

Post-production work. Once documents have been produced and discovery is closed, the procedural constraints on AI use largely fall away. The challenge shifts from defensible review to winning — getting from a reviewed document set to deposition outlines, cross-examination materials, and trial exhibits as fast as possible. AI-augmented workflows that surface the 50 documents that matter per witness from 500,000 reviewed, build chronologies around them, and generate witness preparation materials are operating where speed directly translates to trial readiness.

Takeaways

For staffing decisions: Contract reviewer roles are shifting, not disappearing. AI replaces the volume work; humans move to QC, privilege validation, and model tuning. Fewer reviewers per matter, higher skill requirements per reviewer.

For budgeting: Free AI on platforms is compressing managed services margins. With Relativity, Everlaw, and DISCO bundling AI review into base pricing, providers now justify their markup on human expertise — configuration, defensibility, project management — not technology access. Per-document pricing is giving way to outcome-based engagements.

For risk management: Exception handling is the unresolved risk. AI classifies the straightforward 80 percent of a document population well. The remaining 20 percent — ambiguous privilege calls, documents in unfamiliar formats, communications where context determines relevance — requires human judgment that can’t be automated away.

This post is part of the Legal AI Landscape series on LegalAI Insights. It is intended for informational and educational purposes only and does not constitute legal advice. AI capabilities, pricing, and service offerings described here reflect publicly available information as of the publication date and are subject to change. Laws governing AI use and data handling vary by jurisdiction.
