What 40 Legal Tech Leaders Are Thinking About Before LegalWeek 2026

This week, LegalWeek 2026 brings up to 9,000 attendees, 100+ sessions, and 142+ exhibitors to New York.
Between Clio’s historic $1 billion vLex acquisition and the 73 million views on Matt Shumer's recent essay declaring "Something Big Is Happening," there's a lot of noise right now.
But the clearest signals of legal tech's future emerged behind closed doors.
On March 3, Ari Kaplan hosted ~40 industry leaders for his Legal Tech Mafia Breakfast at Paul Weiss. This group of innovation directors, in-house counsel, and founders made one thing clear: we're past asking if AI works.
Teams are now focused on deploying autonomous agents without breaking data governance, plugging security vulnerabilities, and navigating new privilege rulings.
Here is what they are actually focused on right now.
AI Agents Are Moving from Pilot to Production - Here's What's Breaking
Everyone in legal operations is currently asking the same question: how are firms actually deploying AI agents in production?
The discussion gave us a good idea of what's happening:
Organizations are actively building Copilot-based agents to handle specific, internal administrative workflows.
Teams deploy agents for inbox triage (automatically surfacing action items and deadlines from email threads), initial NDA reviews checked against internal playbooks, and company policy Q&A.
Instead of asking HR about PTO accrual or stock trading windows, employees query a bespoke agent.
Inside law firms, agents are handling heavier, practice-area-specific workflows.
Firms are piloting agents for SEC form analysis, procurement contract review, and intranet chatbots that instantly surface institutional knowledge, such as complex outside counsel guidelines and billing rules.
When it comes to picking vendors, multi-model stacking is the trend
Nobody is going all-in on a single vendor.
Organizations typically start with enterprise Copilot as their baseline infrastructure, then layer in Gemini or Claude for distinct, specialized tasks.
If your firm doesn't have the budget or infrastructure for enterprise Copilot, Zapier was discussed as a highly capable, lightweight alternative for agent-building.
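To make the stacking concrete, here is a minimal sketch of the kind of task router this implies; the routing table, model names, and task labels are illustrative assumptions, not anyone's actual configuration:

```python
# Illustrative multi-model stack: Copilot as the baseline, with specific
# workflows routed to other models. Names are assumptions, not a spec.
ROUTES = {
    "inbox_triage": "copilot-enterprise",
    "policy_qa": "copilot-enterprise",
    "nda_review": "claude",          # specialized review tasks
    "sec_form_analysis": "gemini",   # distinct analytical tasks
}

def pick_model(task: str) -> str:
    """Route a workflow to its designated model, defaulting to the baseline."""
    return ROUTES.get(task, "copilot-enterprise")

print(pick_model("nda_review"))  # -> claude
```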
However, moving from pilot to production has exposed three problems that nobody has entirely solved:
Consistency: Agents often give different outputs from the exact same inputs across multiple runs. If an agent analyzes a lease agreement on Tuesday and flags three issues, but analyzes the identical document on Wednesday and flags four, teams do not know whether to trust the results. (A minimal drift check is sketched after this list.)
Currency and staleness: When a company updates its stock trading policy, or a firm revises a contract playbook, agents frequently continue to serve the old information. The industry has not yet automated the governance layer required to instantly deprecate outdated source documents across an agent fleet. (A registry sketch follows the drift check below.)
Governance frameworks: Organizations are struggling to write the rules for AI usage. One approach discussed involves actively encouraging personal AI use outside of work to build baseline fluency, while enforcing strict, uncompromising controls on corporate data.
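On the consistency problem, one pragmatic response is a drift check: run the same input several times and refuse to trust disagreeing outputs. A minimal sketch, with the agent call simulated (`review_lease` is a stand-in, not a real API):

```python
import random

def review_lease(doc: str) -> set[str]:
    """Stand-in for the deployed agent; real agents are not this simple."""
    issues = ["assignment clause", "early termination", "indemnity cap", "CPI escalator"]
    # Simulate run-to-run drift: sometimes the agent flags an extra issue.
    return set(random.sample(issues, k=random.choice([3, 4])))

def consistency_check(doc: str, runs: int = 3) -> bool:
    """Run the same input several times and flag any disagreement."""
    results = {frozenset(review_lease(doc)) for _ in range(runs)}
    if len(results) > 1:
        print("Drift detected across runs:", [sorted(r) for r in results])
        return False
    return True

consistency_check("lease.pdf")
```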
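For currency and staleness, the missing governance layer amounts to a single source of truth that agents must read through, so publishing a new policy version deprecates the old one everywhere at once. A minimal sketch, with the registry shape and policy names purely illustrative:

```python
import datetime

# Single source of truth: each policy has exactly one active version.
registry: dict[str, dict] = {}

def publish(policy: str, text: str) -> None:
    """Publishing a new version implicitly deprecates the old one."""
    registry[policy] = {"text": text, "published": datetime.date.today()}

def retrieve(policy: str) -> str:
    """Agents read only through this accessor, never from cached copies."""
    return registry[policy]["text"]

publish("stock-trading-window", "Window opens 2 days after earnings.")
publish("stock-trading-window", "Window opens 3 days after earnings.")
print(retrieve("stock-trading-window"))  # always the current version
```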
Teams want to figure out how to actually use AI timekeeping and workflow automation, but the transition requires solving these infrastructure problems first.
The AI Security Problem Nobody Has Solved (The Lethal Trifecta)
Data exfiltration via prompt injection remains an unsolved problem in legal tech.
Despite the money spent on security, nobody has a complete fix.
Think of the threat through the "Lethal Trifecta" framework, coined by software engineer Simon Willison in June 2025.
Willison's rule holds that you can safely give an AI agent access to any two of three things, but granting all three simultaneously creates a critical vulnerability. The trifecta consists of:
Sensitive data (e.g., privileged client communications, M&A targets).
Untrusted inputs (any document, email, or prompt you do not personally control).
Internet write access (the ability to send data outward).
The way this happens is straightforward.
A bad actor publishes a document or sends an email containing hidden text instructions.
An agent encounters this untrusted input.
Because large language models are fundamentally optimized to follow instructions, the agent executes the hidden command, packages up the sensitive data it has access to, and exfiltrates it to an external endpoint via its internet connection.
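One way to operationalize the two-of-three rule is a deployment gate that refuses to launch any agent holding all three capabilities at once. A minimal sketch, with the capability labels as illustrative assumptions:

```python
# Gate deployments on Willison's two-of-three rule: any two of these
# capabilities are tolerable; all three together are a critical risk.
TRIFECTA = {"sensitive_data", "untrusted_input", "internet_write"}

def deployable(agent_name: str, capabilities: set[str]) -> bool:
    held = capabilities & TRIFECTA
    if held == TRIFECTA:
        print(f"BLOCK {agent_name}: lethal trifecta complete ({sorted(held)})")
        return False
    return True

# A contract reviewer reading opposing counsel documents (untrusted input)
# alongside privileged files (sensitive data) must lose internet write access.
deployable("contract-reviewer", {"sensitive_data", "untrusted_input", "internet_write"})
deployable("contract-reviewer", {"sensitive_data", "untrusted_input"})
```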
This is especially dangerous for law firms
Agents built to review third-party contracts, ingest opposing counsel documents, or browse external case law are inherently exposed to untrusted inputs.
If those exact same agents have access to privileged client data and internet connectivity, the lethal trifecta is complete.
Many firms have a blind spot when it comes to oversight: they believe they are secure because they have a "human-in-the-loop" (meaning a lawyer provides the initial input or reviews the final output).
What firms actually need for autonomous agents is a "human-on-the-loop" - a hard kill switch that allows a human supervisor to immediately stop the machine mid-process if it begins behaving maliciously.
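In code, the difference is where the human sits. A human-in-the-loop reviews inputs or outputs; a human-on-the-loop can halt execution between any two steps. A minimal sketch using a shared kill-switch flag (the agent steps are stand-ins):

```python
import threading

kill_switch = threading.Event()  # a supervisor can set this at any moment

def run_agent(steps) -> None:
    """Check the kill switch before every step, not just at input/output."""
    for step in steps:
        if kill_switch.is_set():
            print("Supervisor halted the agent mid-process.")
            return
        step()  # one tool call, retrieval, or drafting action

# From a monitoring thread or dashboard, a supervisor calls:
# kill_switch.set()
```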
Claude Is Becoming the Operating System for Legal Work
When general-purpose large language models compete with specialized legal tools, simplicity usually wins: fewer tools beat more tools.
Relying on a single foundational model reduces change management friction, vendor bloat, and the training overhead required to teach lawyers five different interfaces.
The performance gap between Anthropic’s Claude and specialized legal software has narrowed enough that top-tier firms are consolidating.
Attendees referenced a major litigation firm of approximately 400 attorneys that recently gave every single lawyer access to Claude. They’re using it as an operational layer for nearly everything:
Drafting motions
Conducting clause analysis
Editing documents
And performing case research
Firms are also building robust "case profiles."
By uploading entire case binders into persistent Claude projects, the entire legal team can query the collective intelligence of the matter instantly, without digging through folder hierarchies.
The legal tech ecosystem is adapting to this shift
Legal research startups are actively launching MCP (Model Context Protocol) connections, integrating case law databases and citators directly into Claude.
Lawyers can now conduct rigorous legal research without ever leaving the Claude interface.
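For a sense of what these integrations involve, here is a minimal sketch of a case law search tool exposed over MCP using the official mcp Python SDK; the server name, tool, and placeholder lookup are illustrative, not any vendor's actual implementation:

```python
# Minimal MCP server exposing a case-law search tool to Claude.
# Requires the official Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("caselaw-research")

@mcp.tool()
def search_cases(query: str, jurisdiction: str = "NY") -> str:
    """Search a case law database and return matching citations."""
    # Placeholder lookup; a real server would call a citator or database.
    return f"(placeholder) top citations for {query!r} in {jurisdiction}"

if __name__ == "__main__":
    mcp.run()  # Claude connects to this server and can call search_cases
```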
Anthropic’s strict ethical positioning serves as a solid trust signal for cautious legal buyers
The company's recent standoff with the Pentagon resulted in Anthropic being designated a "supply chain risk" and facing a federal ban in February 2026 due to its refusal to compromise safety protocols.
This was cited as proof that the company prioritizes data security over government contracts.
The Privilege Bombshell - Judge Rakoff Says AI Outputs Aren't Protected
In February 2026, the Southern District of New York delivered a ruling that fundamentally alters how lawyers and clients interact with AI.
In US v. Heppner, Judge Jed Rakoff ruled that documents created by feeding defense counsel notes into a consumer AI tool are protected by neither attorney-client privilege nor the work product doctrine.
The criminal defendant in the case generated 31 documents using an AI model to prepare for meetings with his attorneys.
The court's holding was absolute: AI possesses no law license, owes no duty of loyalty, maintains no attorney-client relationship, and offers no reasonable expectation of confidentiality.
Therefore, feeding privileged information into the model waives the privilege.
Many people get this wrong about the case
Most assume the defendant used ChatGPT.
He actually used the consumer version of Claude. The waiver occurred not because of the specific vendor, but because it was a consumer-grade tool lacking enterprise data protections.
This has big implications for criminal defense
Attorneys are realizing that their clients routinely use consumer AI to outline narratives or prepare for meetings.
Activity that was previously viewed as helpful preparation is now a recognized mechanism for waiving privilege over underlying communications.
This creates a tough situation for access to justice
Pro se litigants and criminal defendants are the individuals who most require the structural assistance AI provides.
Yet, by utilizing accessible consumer tools to level the playing field, they are the ones whose legal protections are most actively at risk.
Contract Review AI - What the Mature Workflow Actually Looks Like
The conversation surrounding AI contract review has shifted.
Nobody in the room asked, "Does it work?"
The focus is now, "How do we calibrate it?"
Attendees mapped out the specific maturity curve that nearly every adopting organization goes through:
Stage 1 (Over-flagging): New users lack trust in the system and demand the AI flag everything as high-risk. The result is unmanageable noise.
Stage 2 (Calibration): Users realize that most flagged clauses are low-stakes standard language. They begin actively tuning the risk thresholds to match actual business realities.
Stage 3 (Delegation): The workflow reaches maturity. Non-legal teams (like sales or procurement) handle routine contracts such as NDAs and vendor agreements entirely autonomously. The AI is programmed with three or four "showstopper" flags. If none trigger, the business unit signs the document without ever involving the legal department (a minimal sketch of this gate is below).
Experienced users are clearing NDAs fast - often in under 10 seconds.
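Here is what that Stage 3 gate can look like in practice; the specific showstopper flags are illustrative assumptions, since real thresholds vary by practice group:

```python
# Stage 3 delegation gate: if no showstopper fires, the business unit
# signs without involving legal. Flags below are illustrative examples.
SHOWSTOPPERS = {
    "uncapped_indemnity",
    "ip_assignment_to_counterparty",
    "non_standard_governing_law",
    "perpetual_exclusivity",
}

def route_contract(flags: set[str]) -> str:
    triggered = flags & SHOWSTOPPERS
    if triggered:
        return f"escalate to legal: {sorted(triggered)}"
    return "auto-approve: business unit may sign"

print(route_contract({"auto_renewal"}))                        # auto-approve
print(route_contract({"auto_renewal", "uncapped_indemnity"}))  # escalate
```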
The validation playbook has also evolved
Teams no longer review every document the AI processes.
Instead, they spot-check a sample.
If an AI reviews a diligence data room for assignment provisions and hits 10 out of 10 flagged issues accurately on the sample set, the legal team trusts the output and nobody searches the remaining documents manually.
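A minimal sketch of that spot-check, assuming the team keeps ground-truth answers for a small hand-reviewed sample:

```python
import random

def spot_check(ai_flags: dict[str, set[str]],
               ground_truth: dict[str, set[str]],
               sample_size: int = 10) -> bool:
    """Hand-review a random sample; trust the full run only if the AI
    matched the human reviewer on every sampled document."""
    sample = random.sample(sorted(ground_truth), k=min(sample_size, len(ground_truth)))
    hits = sum(ai_flags.get(doc) == ground_truth[doc] for doc in sample)
    print(f"{hits}/{len(sample)} sampled documents matched human review")
    return hits == len(sample)  # e.g., 10/10 -> accept the remaining output
```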
To make this work, you need clear rules
Materiality thresholds vary wildly depending on the contract type and the specific practice group. To reach Stage 3, in-house teams require the General Counsel to explicitly define risk levels and establish the rules of engagement up front.
The Conversation Kept Broadening (and That's Worth Noting)
The practitioners building and deploying these tools are not operating in a vacuum.
Throughout the morning, even among leaders focused on hyper-tactical deployments, the room continually drifted back to questions that do not have answers yet.
The Anthropic-Pentagon standoff was analyzed not just as a security signal, but as a broader indicator of the increasingly fraught relationship between AI infrastructure companies and the federal government.
Attendees repeatedly raised the environmental costs of massive AI data centers, debating whether new efficiency standards are sufficient to offset the grid demand.
Leaders also discussed emerging research on the psychological risks of humans anthropomorphizing AI agents. The conversation turned personal, with attendees sharing stories about their kids and teenagers engaging with chatbots in ways that fundamentally concern them as parents.
It was a useful reminder that the people closest to this technology - the ones integrating it into the fabric of the legal system - are also the ones most acutely conscious of its societal weight.
Regular AI interaction is subtly changing how people communicate, think, and evaluate truth.
Final Thoughts
The Legal Tech Mafia breakfast revealed what's actually happening in the industry right now. AI is no longer just a panel discussion topic; it's becoming core infrastructure.
The firms that win in 2026 won't be the ones with the flashiest pilots.
The winners will be the organizations actively solving the unglamorous problems:
Mitigating data exfiltration risks
Calibrating contract thresholds
Defining AI governance
And protecting attorney-client privilege from consumer-grade leaks
When AI infrastructure works best, it runs quietly and securely in the background. It removes friction without exposing the firm to the lethal trifecta of security vulnerabilities.
This is exactly how we built Ajax to operate
Next time you find yourself reconstructing last week's hours on a Saturday morning, consider that Ajax's AI time tracking runs natively on your desktop. It reads screen activity to automatically capture billable hours across documents, emails, and research platforms.
Ajax solves the exact security anxieties discussed by legal tech leaders:
It utilizes a privacy-first architecture with rolling, automatic data deletion.
It never trains models on firm data.
It completely bypasses the untrusted input vulnerabilities of web-crawling agents by keeping the analysis localized to your immediate workflow.
By automating time entry narratives and handling matter attribution in the background, it ensures timekeeping compliance without forcing you to change how you work.
Before navigating the noise of LegalWeek, look at how AI is actually supposed to work in production. Book a demo to see how Ajax automatically recovers lost billable time and replaces manual timekeeping, all without compromising your firm's security.
