# AI Playbook: Litigation Workflows with Claude Cowork
Most AI tools for lawyers are chatbots with legal branding. You paste in a document, type a question, get an answer. Useful, but limited to one exchange at a time — and every session starts from zero.
Agentic AI works differently. Instead of answering a single prompt, an agentic system takes a goal, breaks it into steps, executes them autonomously, and delivers finished work product. It reads your files, writes documents, builds spreadsheets, recovers from errors, and coordinates parallel tasks — all without you managing each step. The shift from “answer my question” to “complete this project” is the difference that matters.
Every major legal AI vendor is moving in this direction. Harvey launched AI Agents in early 2026 — autonomous tools that execute multi-step legal tasks end-to-end, now running over 700,000 agentic tasks daily. Thomson Reuters is rolling out agentic capabilities across CoCounsel, with workflows for drafting, deposition analysis, and compliance assessments. A&O Shearman and Harvey jointly launched AI agents for antitrust, cybersecurity, and loan review. But these products are enterprise-priced and designed for AmLaw 100 workflows. For a litigation boutique, the most accessible agentic AI tool right now is Claude Cowork.
Claude Cowork is Anthropic’s agentic AI for knowledge work. It launched in January 2026, went generally available on macOS and Windows in April, and is now available on all paid plans. It runs inside the Claude Desktop app — you point it at a folder on your computer, describe what you need in plain English, and it executes. Code runs inside a sandboxed virtual machine with access only to the folders and network destinations you’ve approved, enforced at the operating system level. (For background on how foundation models work, see The Foundation: A Legal Professional’s Guide to LLMs; for how a litigation boutique can build a full AI stack around Cowork and other tools, see AI Playbook: Outgunning BigLaw on a Budget.)
Cowork’s feature set maps onto litigation practice through two core concepts: Projects for matters and Skills for tasks. Understanding those two frames is the key to making the tool useful.
## The Privilege Question Comes First
In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), Judge Rakoff held that a defendant’s exchanges with the free, consumer version of Claude were not privileged — because AI is not an attorney, Anthropic’s consumer terms permit data disclosure, and the defendant acted without counsel’s direction. The Perkins Coie analysis is worth reading in full; the short version is that existing privilege doctrine applies to AI tools without modification, and consumer-tier terms are exactly the kind of third-party disclosure that waives protection. Use an Enterprise or Team plan with contractual no-training commitments. Consumer-tier plans create privilege risk that no amount of convenience justifies. Document that the attorney directed the AI-assisted work.
## Projects: One Matter, One Workspace
A Cowork Project is a persistent workspace with its own files, instructions, and memory. Create one per matter. Point it at the case folder. Give it standing instructions — your theory of the case, the key witnesses, the issues you’re tracking, the charges or claims at issue. Claude carries that context forward across every session, so you never re-explain the background.
This is the feature that separates Cowork from a chatbot. A chatbot forgets. A Project accumulates. Each session builds on what came before, and the working case file grows more useful over the matter’s lifecycle.
### Discovery Triage
A criminal defense firm that inherited a case mid-stream set up a single Project and pointed it at the government’s production. Cowork indexed every document by type, date, and parties; generated draft transcripts of dozens of recorded jail calls; and cross-referenced the indictment against the production to flag evidence relevant to their client’s specific charges. The setup took minutes. The output would have consumed well over a hundred hours of manual work.
As new productions arrived in subsequent weeks, the attorney dropped them into the same Project folder. Because the Project already knew the indictment, the charges, and the evidence previously catalogued, each update built on the existing analysis rather than starting fresh.
Cowork is not an e-discovery platform — no Bates stamping, no chain-of-custody metadata, no integration with Relativity or Everlaw. For the boutique handling a few hundred to a few thousand documents, Cowork compresses days of indexing into hours. For larger volumes, pair it with Google’s Gemini API — Gemini’s one-to-two-million-token context window can ingest entire document sets in a single reasoning pass, spotting cross-document contradictions that file-by-file processing misses. Gemini’s Flash tier is also significantly cheaper for high-volume extraction (as covered in our pricing analysis). When a case genuinely demands heavy e-discovery, scale up to Everlaw or Relativity for that matter and pay for what the case requires — a boutique doesn’t need an enterprise platform on retainer.
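The single-pass idea is easy to sketch. The helper below packs a document set into one labeled prompt so the model can attribute contradictions to specific files, and sanity-checks the size against a million-token window before sending. Every name here (`build_corpus_prompt`, `rough_token_count`, the separator format, the four-characters-per-token estimate) is an illustrative assumption, not part of any vendor SDK.

```python
# Sketch of a single-pass corpus prompt for a long-context model.
# All helper names and the separator format are illustrative.

def build_corpus_prompt(docs: dict[str, str], question: str) -> str:
    """Concatenate documents under labeled separators so the model can
    say which file a contradiction came from."""
    parts = [f"=== DOCUMENT: {name} ===\n{text}" for name, text in sorted(docs.items())]
    parts.append(f"=== TASK ===\n{question}")
    return "\n\n".join(parts)

def rough_token_count(prompt: str) -> int:
    """Crude size check (roughly four characters per token) to confirm
    the corpus fits a one-million-token window before sending."""
    return len(prompt) // 4

docs = {
    "email_0412.txt": "We never discussed pricing before the merger.",
    "depo_smith.txt": "Q. Did you discuss pricing? A. Yes, in March.",
}
prompt = build_corpus_prompt(docs, "Flag cross-document contradictions, citing each file.")
assert rough_token_count(prompt) < 1_000_000  # fits in a single reasoning pass
```

From there, the assembled prompt goes to the model in one request (for Gemini, via Google's client libraries); the exact client call is omitted here because SDK interfaces change quickly.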
### Deposition Summaries That Compound
Deposition transcripts are long, repetitive, and full of material that matters only if you can find it later. Within a case Project, Cowork processes a folder of transcripts and produces chronological summaries organized by topic, witness, or event.
The value emerges over time. When a second transcript arrives, you don’t re-explain the case. You drop it into the Project folder and ask Cowork to update the existing analysis. It knows what prior witnesses said, what themes you’re tracking, and where the new testimony confirms or contradicts the existing record. Each deposition adds to a cumulative working file rather than starting a fresh analysis from scratch.
Memory is scoped to individual Projects — what Claude learns about one case doesn’t leak into another, which is the right design for a law firm. Memory is also local to your machine with no cloud sync, so you can’t access the same Project from two computers. Team sharing for Cowork Projects isn’t available yet.
### Trial Prep: Where Projects Pay Off
By the time a case reaches trial, a well-maintained Project holds the discovery index, deposition summaries, key evidence flags, and charge-specific analysis. That accumulated context becomes the foundation for the documents a trial lawyer actually needs.
Opening statement drafts. Cowork pulls from the Project’s analysis to generate a structured first draft — threading the chronology, identifying the strongest evidence for each element, and organizing the narrative around the themes you’ve been tracking since discovery. The draft needs substantial revision for tone and courtroom rhythm, but the assembly work — connecting forty exhibits and six depositions into a coherent arc — is exactly the synthesis Cowork handles well.
Witness examination outlines. For each witness, Cowork produces a direct examination outline mapping questions to supporting exhibits with page-and-line citations from transcripts already in the Project. For cross-examination, it identifies contradictions between a witness’s deposition testimony and other case evidence. The outlines need attorney judgment on sequencing and emphasis. But having every potential impeachment point located and organized eliminates the mechanical work.
Exhibit organization. Cowork generates an exhibit reference guide linking each exhibit number to a summary of what it shows, which witnesses it relates to, and where in the transcripts it was discussed. For a case with two hundred exhibits, this is a paralegal’s full day. Cowork delivers a first draft in a fraction of that time.
## Skills: Packaging Tasks for Consistency
If Projects are how you organize matters, Skills are how you standardize tasks. A skill is a packaged set of instructions that tells Claude how to perform a specific task — what tools to use, what sequence to follow, what the output should look like. You build one by running a task, correcting the output, iterating until it meets your standard, then saving it. After that, anyone on the team can invoke the skill and get consistent results.
Here’s what building a skill looks like in practice. Say your firm needs deposition summaries in a consistent format: a chronological narrative with page-and-line citations, organized by topic, with key admissions flagged separately at the top. The first time, you give Cowork a transcript and describe what you want. Claude produces a draft. Maybe it buries the strongest admission in the middle of a paragraph, or cites page numbers without line references, or organizes by witness answer rather than by topic. You correct it, explain what it got wrong, and ask it to try again. After two or three rounds, the output matches what a well-trained associate would produce.

You save that accumulated knowledge as a skill called “deposition summary.” Now any attorney at your firm drops a transcript into Cowork, invokes the skill, and gets the same structured output: key admissions flagged at the top, chronological narrative by topic, page-and-line cites throughout. No re-explaining, and no variation depending on who runs it.
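Under the hood, a skill saved this way boils down to an instruction file. A minimal sketch of what a firm's deposition-summary instructions might contain follows; the layout and wording are illustrative, not Anthropic's published SKILL.md schema.

```markdown
# Deposition Summary

Summarize the attached transcript in this order:

1. **Key admissions**, flagged at the top, each with a page:line citation.
2. **Chronological narrative**, organized by topic rather than by answer.
3. **Conflicts** with testimony or documents already in the Project,
   citing both sources.

Rules:
- Every factual statement carries a page-and-line cite (e.g., 47:12-18).
- Quote admissions verbatim; paraphrase everything else.
- If the transcript references an exhibit not in the Project folder,
  flag it rather than guessing at its contents.
```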
Skills solve the randomness problem. Ask a chatbot to do the same thing twice and you’ll get different results. That’s fine for brainstorming. It’s terrible for recurring work that needs to produce reliable output every time.
### Brief Finalization
Filing preparation is the canonical use case. Connectors let Claude work inside Word directly — manipulating formatting, field codes, and formulas. A skill packages the full filing workflow: tables of contents, tables of authorities, pagination checks, exhibit lists, certificate-of-service blocks. One firm built this skill through several rounds of iteration and now invokes it as a single command for every filing.
The hallucination risk remains real for any AI-generated substantive content. In a 2025 copyright case involving Anthropic, a Latham & Watkins attorney used Claude to format a reference and submitted a brief containing a fabricated citation, drawing a rebuke from the magistrate judge. Cowork is strong at processing citations that already exist in a brief — formatting, indexing, generating reference tables. It is unreliable at generating citations from scratch. Every cite in AI-drafted text needs verification against Westlaw or Lexis before filing.
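That split between indexing existing citations and generating new ones is worth making concrete. Below is a sketch of the mechanical half: mapping each citation already present in a brief to the pages where it appears. The citation pattern is deliberately simplified (volume, reporter, page only) and the reporter list is illustrative.

```python
import re
from collections import defaultdict

# Matches simplified volume-reporter-page citations, e.g. "347 U.S. 483".
# Real citation grammars (pin cites, parallel cites, short forms) are far
# richer; this reporter list is illustrative only.
CITE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.3d|F\.4th|F\. Supp\. 3d)\s+\d{1,4}\b")

def table_of_authorities(pages: list[str]) -> dict[str, list[int]]:
    """Map each citation found in the brief to the 1-indexed pages
    on which it appears."""
    toa = defaultdict(list)
    for page_no, text in enumerate(pages, start=1):
        for cite in CITE.findall(text):
            if page_no not in toa[cite]:
                toa[cite].append(page_no)
    return dict(sorted(toa.items()))
```

Note what this does not do: nothing here checks that a cited case exists. Indexing is the automatable half; verification against Westlaw or Lexis stays with the attorney.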
### Client Intake Processing
A personal injury practice built an intake skill combined with a scheduled task. Cowork monitors the intake folder every few hours and, for each new submission, extracts client name, date of incident, injury type, treating physicians, insurance carrier, and statute of limitations deadline into a master Excel spreadsheet with working formulas for deadline calculations. What used to take a paralegal fifteen minutes per intake now happens automatically.
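A rough sketch of the extraction step such an intake skill performs: pull labeled fields out of a plain-text submission and compute a limitations deadline. The field labels, the two-year period, and the naive anniversary-date math are all assumptions for illustration; real limitations rules turn on claim type, jurisdiction, tolling, and discovery rules.

```python
# Illustrative intake extraction. Field labels and the limitations
# period are assumptions, not a recommendation for any jurisdiction.
import re
from datetime import date

FIELDS = ["Client", "Incident date", "Injury", "Physician", "Carrier"]

def parse_intake(text: str) -> dict:
    """Pull labeled fields out of a plain-text intake submission."""
    record = {}
    for field in FIELDS:
        m = re.search(rf"^{field}:\s*(.+)$", text, re.MULTILINE)
        record[field] = m.group(1).strip() if m else None
    return record

def sol_deadline(incident: date, years: int = 2) -> date:
    """Naive anniversary-date deadline; attorney verification is
    required before relying on any computed date."""
    try:
        return incident.replace(year=incident.year + years)
    except ValueError:  # incident on Feb 29, target year not a leap year
        return incident.replace(year=incident.year + years, day=28)

submission = """Client: Jane Doe
Incident date: 2026-03-14
Injury: soft tissue, lower back
Physician: Dr. Alvarez
Carrier: Acme Mutual"""

rec = parse_intake(submission)
rec["SOL deadline"] = sol_deadline(date.fromisoformat(rec["Incident date"]))
```

The real skill would append each record as a spreadsheet row; the parsing and deadline logic above is the part worth getting right first.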
Scheduled tasks require the desktop app to be running and the computer awake — no cloud-based background execution. For a firm that keeps a workstation on during business hours, this works. For overnight processing, it’s a limitation.
### Practice-Area Intelligence
A defense firm focused on financial fraud built a skill that scans enforcement action announcements, regulatory updates, and industry news each morning and compiles a daily briefing. The same approach works for any niche: EEOC guidance, patent office actions, state AG enforcement trends. The result is a daily intelligence product that would otherwise take an hour of associate time — or, more realistically, would never get done.
## The Legal Plugin and Ecosystem
When Anthropic released a legal plugin for Cowork on February 2, 2026, the market reaction was wildly disproportionate to what the plugin actually does. Thomson Reuters dropped 16%. RELX (LexisNexis’s parent) fell 14%. Wolters Kluwer lost 13%. LegalZoom cratered nearly 20%. Jefferies dubbed it the “SaaSpocalypse.” Combined losses across legal tech and data stocks exceeded $285 billion in the five trading days that followed.
The plugin itself is more modest than the headlines suggested. It’s a free, open-source set of generic skills — structured prompts and workflow maps, not a new model or legal database. It provides five slash commands (/review-contract, /triage-nda, /vendor-check, /brief, /respond) configured against a local playbook file where you define your firm’s standard positions, acceptable ranges, and escalation triggers. As Reed Smith’s analysis noted, the plugin tells Claude how to think through legal problems in a particular sequence — it doesn’t give Claude legal knowledge it didn’t already have.
If you’ve read this far, you’ll recognize what the plugin actually is: a pre-built bundle of Skills, connectors, and slash commands, packaged by Anthropic for common legal workflows. As Artificial Lawyer put it, Skills are the recipes; a Plugin is the cookbook. The deposition summary skill, the brief finalization skill, the intake processing skill you build for your own practice — those are the same building blocks, just tailored to your firm instead of Anthropic’s generic templates. Over time, a litigation boutique that builds and refines its own Skills is assembling the core of its own plugin — one that encodes your firm’s standards, your document formats, your practice-area expertise. On Team and Enterprise plans, you can distribute your custom plugins across the firm through Anthropic’s plugin marketplace, so every attorney works from the same playbook. Anthropic’s legal plugin is a starting point. Your firm’s skill library is the destination.
## Building Your Own Litigation Plugin
Look at the plugin’s source code on GitHub and you’ll see the architecture is straightforward. A plugin is a folder with four components: Skills (SKILL.md files you invoke for specific tasks), commands (markdown files defining /slash-command workflows), an .mcp.json file wiring up connectors to external tools, and a plugin.json manifest. The legal plugin ships with nine skills — review-contract, triage-nda, compliance-check, legal-risk-assessment, meeting-briefing, and others — each one a markdown file telling Claude how to approach a specific category of work when you call on it.
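Laid out as a directory, a plugin along those lines might look like this. The four component names come from the description above; the exact nesting and file names are a plausible sketch, not a verified copy of the repository.

```text
litigation-plugin/
├── plugin.json          # manifest: plugin name, version, contents
├── .mcp.json            # wires connectors to external tools
├── commands/
│   └── brief.md         # defines the /brief slash-command workflow
└── skills/
    ├── review-contract/
    │   └── SKILL.md     # instructions for one category of work
    └── triage-nda/
        └── SKILL.md
```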
You can approximate the entire plugin without installing it. Build a Skill for each recurring workflow your firm handles: deposition summaries, privilege screening, motion formatting, discovery indexing. Write a playbook file defining your firm’s standard positions, risk tolerances, and escalation triggers (the plugin uses legal.local.md for this). Add connectors to the tools your firm actually uses: an MCP connector to your document management system so Claude can pull templates and precedent, a connector to your calendar for deadline tracking, a connector to Midpage or another research tool for citation verification. For recurring work, you can attach scheduled tasks to your Skills: intake processing, daily practice-area briefings, folder monitoring.

But automation isn’t always the right answer. Think of Skills the way you’d think of delegating to a junior associate or paralegal. You wouldn’t hand a new associate a task and stop reviewing their work after the first good result. You’d review their output consistently, correct patterns of error, and only reduce oversight once you’d built confidence over many repetitions; even then, you’d spot-check. The same discipline applies here. Start every Skill as a manual invocation with attorney review of every output. Only move to scheduled automation for tasks where you’ve verified consistent quality over many runs and where an undetected error wouldn’t cause harm: intake data entry, not privilege screening. Then bundle the whole thing using Cowork’s built-in Plugin Create tool.
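The playbook file is just structured prose that Claude consults before taking a position. Illustrative entries follow; the positions and thresholds are placeholders, not recommendations.

```markdown
## Standard positions
- Indemnification: mutual only; reject uncapped indemnity.
- Governing law: home state preferred; Delaware or New York acceptable.

## Acceptable ranges
- Limitation of liability: 1x to 2x fees paid in the prior 12 months.

## Escalation triggers
- Any non-compete or exclusivity clause: route to the supervising partner.
- Vendor agreement missing insurance requirements: flag, do not approve.
```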
The result is a firm-specific plugin that reflects how your practice works — not Anthropic’s generic defaults. Every Skill you build, every correction you feed back in, every connector you wire up adds to a system that gets more useful over time. And because plugins are just markdown files, a litigation boutique can inspect, modify, and share every piece of it.
The market panic missed a critical distinction: the plugin targets contract administration, not legal research. Thomson Reuters’s moat is Westlaw’s curated case law database; RELX’s is LexisNexis. The plugin can’t search either one. As Artificial Lawyer observed, the sell-off was irrational. Both publishers have since integrated with Anthropic’s platform rather than competing against it.
What matters more for litigation boutiques is the emerging ecosystem. DeepJudge built an MCP connector that lets Cowork search a firm’s own prior matters and work product. Midpage integrated its legal research tools, adding verified case law citation. Pramata connected its contract intelligence platform. These integrations address the plugin’s biggest gap — it doesn’t know your firm’s precedents, your jurisdiction’s case law, or your existing portfolio. Third-party connectors bring that context in.
Anthropic advises against using the plugin for high-stakes or regulated legal work in its current form. All outputs require attorney review.
## What Cowork Can’t Do
No legal research database. Cowork can search the open web but has no access to Westlaw, Lexis, or any proprietary legal database. It can’t verify whether a case citation exists. Pair it with Midpage or your existing Westlaw subscription for anything citation-dependent.
No audit trail for regulated work. Cowork activity is not currently captured in audit logs, the Compliance API, or data exports. OpenTelemetry monitoring is available for Team and Enterprise plans but is explicitly not a replacement for audit logging.
Usage limits are real. On Team plans, usage is pooled across seats. Enterprise plans offer custom capacity. A complex Cowork session consumes significantly more quota than regular chat — budget accordingly.
## Getting Started
Start with low-risk tasks on your Enterprise or Team plan — reorganize a folder of CLE materials, build a spreadsheet from a year of firm expenses, run Cowork on a closed matter where you already know what the analysis should look like. Build judgment on tasks where the downside of an error is time lost, not malpractice exposure.
Set up your first Project on a current matter. Set up your first Skill on a task you do every week. Configure role-based access controls, enable OpenTelemetry, and document your firm’s AI use policy — including the Heppner-informed requirement that counsel direct the AI-assisted work.
The power isn’t any single feature. It’s that Projects give you persistent context across a matter’s lifecycle and Skills give you repeatable quality across your practice. Together, they turn Cowork from a chatbot into a workflow engine.
## Further Reading
- Claude Cowork product page. Anthropic’s overview of features and use cases.
- Get started with Claude Cowork. Anthropic’s official setup and usage guide.
- Using Cowork safely. Risk guidance on prompt injection, file access, and browser integration.
- Organize your tasks with projects in Cowork. How persistent memory and projects work.
- Cowork for Team and Enterprise plans. Admin controls, OpenTelemetry, and deployment guidance.
- Claude Legal Plugin. The free legal workflow plugin. Source code on GitHub.
- United States v. Heppner. Harvard Law Review analysis of the privilege ruling.
- Heppner and Gilbarco: Courts Apply Privilege to Generative AI. Perkins Coie’s practitioner analysis.
- Using AI Without Waiving Privilege. McDermott Will & Emery’s operational guidance.
- Claude Legal Is Here, and It’s Worth a Closer Look. Nicole Black’s practitioner review on LLRX.
- Anthropic’s Legal Plugin May Be the Opening Salvo. Bob Ambrogi’s analysis on LawNext.
- LegalTech: SaaSpocalypse Now. Law Gazette’s overview of the market reaction and recovery.
- Claude Crash Impact on Thomson Reuters + LexisNexis is Irrational. Artificial Lawyer’s analysis of why the sell-off missed the point.
- DeepJudge’s CTO on Connecting to Claude Cowork. How third-party legal tools are integrating with the Cowork ecosystem.
- Introduction to Claude Cowork. Anthropic’s free training course.
This post is published on LegalAI Insights. It is intended for informational and educational purposes only and does not constitute legal advice. The privilege analysis in this post is a summary of published judicial opinions and commentary — not a substitute for analyzing the specific terms, jurisdiction, and facts applicable to your practice. AI capabilities, pricing, and features described here reflect publicly available information as of the publication date and are subject to rapid change. Cowork is generally available but some features remain in research preview. Laws and ethics rules governing AI use in legal practice vary by jurisdiction.
