How to use AI as a lawyer: the complete guide for 2026

Most practicing lawyers in 2026 are already using AI for something: research, drafting, contract review, intake. The harder question is how to use AI as a lawyer in a way that improves client work without sacrificing accuracy, confidentiality, or peace of mind. 

This guide walks through which tools earn their keep, how to evaluate them for your firm, the ethics rules to know, and what we'd roll out first.

How to use AI as a lawyer

The honest way to use AI as a lawyer is to treat it as a set of distinct tools, each suited to a different kind of work. The best place to start depends on which parts of your day take up the most time and produce the lowest return per hour. 

For most firms, four categories of legal work are mature enough that you can deploy AI in them this quarter and see the savings on the next billing cycle: timekeeping and billing, contract review, document review, and drafting from source material with an AI model.

The riskier uses sit in a different group. Open-ended legal research, client-facing communications, and any work product the AI generates without source material to ground it all need close supervision under your existing bar ethics rules. 

The cleanest approach for any firm: pick one or two well-defined jobs and run a real pilot on actual matters. Prove the workflow over a few weeks before moving on to the next category.

What "using AI" means for a lawyer

There are seven categories worth knowing about, and they answer different questions. Lumping them together is the first mistake firms make when they start evaluating tools.

Legal research and case-law analysis. Tools like Lexis+ AI, Westlaw AI-Assisted Research, and Casetext's CoCounsel (now part of Thomson Reuters) ground their outputs in licensed legal content and try to provide pinpoint citations. The job is finding the right authority and summarizing it.

Drafting from source material. Briefs, demand letters, deposition summaries, transactional checklists generated from a transcript or document set you provide. The major research tools handle this; so do firm-deployed instances of Harvey, Spellbook (for transactional), and Microsoft Copilot for everyday drafting. The key word is "from." The AI works with the material you supply.

Contract review and analysis. Spellbook, Ironclad AI, LawGeex, and Kira flag missing clauses, deviations from playbook, risk-rated language, and unusual terms. Useful for transactional work, for in-house teams reviewing inbound paper, and for any practice running high-volume agreements through a playbook.

Document review and e-discovery. Everlaw, Relativity aiR, and DISCO are the discovery platforms that have layered generative AI on top of predictive coding. The newer features include conceptual clustering across millions of documents, privilege flagging, and AI-drafted review memos.

Timekeeping and billing automation. Ajax, Billables AI, PointOne, BigHand SmartTime, WiseTime. The category recovers billable hours that lawyers worked but never invoiced.

Client intake, communications, and chat. Smith.ai, Clio Duo's intake features, and matter-specific chatbots handle triage, scheduling, and first-pass intake forms.

Internal knowledge management. Firm-wired search across your own corpus: past briefs, client files, deal precedents, and internal memos. Harvey is the dominant big-firm play; Microsoft Copilot, Glean, and a handful of legal-specific vendors cover the rest of the market.

Each of these has a different accuracy bar, a different confidentiality profile, and a different ROI. They deserve to be evaluated separately.

Where AI is genuinely useful today 

The exact order depends on your practice and firm size, but this is where we'd start based on the firms we work with.

1. Timekeeping and billing

This is where AI pays off the fastest. Clio's Legal Trends Report shows the average lawyer captures only 2.9 billable hours out of an 8-hour workday. The other 5.1 hours are work that happened but never made it onto a bill. AI tools that watch your work and write the time entry for you bring most of those hours back. 

Ajax is built for this exact problem: it reads the actual content on a lawyer's screen, drafts client-ready time entries, groups related work across the day, and learns your firm's matters and parties as you go.

Across Ajax deployments, firms capture about 12% more billable hours on average, and the tool usually pays for itself within two weeks. Every hour you recover is revenue you didn't have before. That's why timekeeping ranks first for any firm built on the billable hour. Contract review and document review save your lawyers' time. Timekeeping puts dollars on the invoice.

There are two main types of tools in this category. Screen-based tools like Ajax read what's on your screen and write the time entry for you. Integration-based tools like Billables AI and PointOne connect to your apps and pull data from them. For a side-by-side comparison, see our guide to the best AI timekeeping tools for lawyers.

2. Contract review and analysis

Contract review tools have been around for nearly a decade. The big change in 2024–25 is that AI started writing plain-English explanations of what each clause does, on top of the old "this clause is missing" flag. Junior associates and contract paralegals save the most time. In-house legal teams save even more, because they review high volumes of contracts against the same playbook.

On standard agreements like NDAs, MSAs, and employment templates, AI cuts first-pass review time by 30–50%. On custom deal documents, the savings drop to 10–20% because a person still does most of the work. The catch: you have to teach the tool what your firm considers good language. Off-the-shelf playbooks miss your firm's standards and your clients' standards, so expect to spend time setting it up before the time savings show up.

3. Document review and e-discovery

If you have a matter with serious discovery volume, AI-assisted review is now expected. The platforms have moved past predictive coding (TAR / CAL). They write review memos, group millions of documents by topic, and flag privileged material with high accuracy.

The catch: your work product can be discovered, the stakes are high, and you cannot skip supervision. Stick to established platforms with audit logs and defensible workflows. The major vendors handle this well. The failures we see come from firms picking cheaper tools or cutting back on quality checks.

4. Drafting from source material

This is where modern AI models work best. When you give the AI source material (a transcript, a deposition, a document set) and ask it to draft a brief or a memo from that material, it is rewriting what you gave it, not making things up. The source stays attached, so you can check the output line by line.

Ask the same AI to draft a brief without giving it source material, and that's when it starts inventing cases or quotes. Drafting from source gives you output you can trust and edit quickly. The same tool without source material is where you hit the verification wall.

5. Legal research

Legal research tools are better than they were in 2023, but the marketing oversells how reliable they are. Lexis+ AI and Westlaw AI-Assisted Research pull from licensed legal content and check their citations, which makes them safer than general tools like ChatGPT. Even so, Stanford's RegLab tested legal-specific AI in 2024 and still found made-up cases and wrong holdings. The error rate was lower than ChatGPT, but it was not zero.

The rule is simple. Never cite a case the AI gave you until you've checked it yourself. Mata v. Avianca and the sanctions cases that followed through 2024 made this clear: you are responsible for the cases you cite, and "the chatbot told me" is not a defense.

6. Client intake and communications

AI handles the routine parts of client intake well: scheduling calls, filling out intake forms, sorting incoming questions, and answering common FAQs. It should not be giving legal advice. Most state bars prohibit AI from giving legal advice unless a licensed attorney reviews it. Many also expect you to tell clients when AI played a meaningful role in their case.

Use AI here for repetitive, low-stakes work. It is too early to use it on anything that touches the substance of a client's matter.

7. Internal knowledge management

These tools let you search and ask questions across your own firm's documents (past briefs, client files, deal precedents, internal memos). Harvey is the leading option for AmLaw 100 firms. Microsoft Copilot, Glean, and a few legal-specific vendors cover everyone else.

The catch: the tool is only as good as the documents you point it at. If your firm's document storage is messy, AI will give you confident-sounding answers based on old or wrong documents. Clean up your document storage before you pay for AI search on top of it.

How to evaluate any AI tool for your firm

Use the following checklist of questions on every tool that crosses the desk.

What does it actually see?

The data the tool processes sets the accuracy ceiling. For research, what corpus is it grounded in and what's the cite-check protocol? For contract review, is it reading the full document or extracting metadata? For timekeeping, is it reading screen content or only app names and window titles? Every other question is downstream of this one.

Does it produce a finished work product or an activity log?

A tool that hands you a list and expects you to write the entry, brief, or summary hasn't reduced your effort. Adoption suffers because the work hasn't moved off the lawyer's plate.

Does it learn from corrections?

Static tools degrade. Tools that learn case-specific keywords, firm style, matter relationships, or playbook deviations compound over months. Ask for the data: after six months of use, what share of decisions is coming from learned behavior versus the default model?

How does it integrate with what you already use?

Two-way sync with billing systems (Clio, MyCase, PracticePanther, Aderant, Elite 3E), DMS (NetDocuments, iManage), and your matter management platform is meaningfully different from a one-way push or no integration at all. Confirm depth with your specific systems, beyond the logos on the vendor's website.

What does the security and data-handling posture look like?

The questions that matter, in order: Is screen content or document data deleted on a rolling automatic basis? Is the model trained on client data? Are downstream subprocessors contractually prohibited from retaining or training on your data? Is the vendor SOC 2 compliant, and is Type II in hand or in progress? Are audit logs available? Are individual users siloed from each other? If a vendor stumbles on any of those questions in the demo, that's the answer.

Has it been adopted at firms like yours?

Pilot conversion rates and 90-day seat usage. A tool nobody uses is a budget line that doesn't deliver, and adoption tends to predict accuracy gains better than feature lists do. Reference checks with current customers should ask about month-three usage, not month-one enthusiasm.

The ethics rules for using AI as a lawyer

Two ethics rules drive most of the practical decisions: competence and confidentiality. The other rules matter, but they sit on top of those two.

Model Rule 1.1 Comment 8 says competence includes understanding the benefits and risks of relevant technology. Translation: a lawyer who deploys AI in client work needs to understand how the tool handles inputs, what its failure modes are, and where supervision is required. State bars are now citing Comment 8 in disciplinary proceedings.

Model Rule 1.6 governs confidentiality. The rule applies to AI vendors as it applies to any other third party with access to client data. Send no client information to a tool whose terms of service permit training on your inputs, retaining your data beyond what's required for the service, or sharing data with downstream subprocessors without contractual protection. Public ChatGPT with default settings does not meet this bar; enterprise instances with the right contracts can.

ABA Formal Opinion 512, issued in July 2024, is the most useful single document on this topic. The opinion covers generative AI specifically and lays out the practical obligations: understand the tool, supervise output, protect confidentiality, communicate with clients about material AI use, and bill responsibly, meaning don't charge clients for time the AI saved you.

Model Rules 5.1 and 5.3 round out the picture by extending supervisory obligations over AI the way they do over non-lawyer staff. The lawyer remains on the hook for what AI produces.

The practical translation for a firm: pick vendors whose data terms hold up under scrutiny, write a one-page firm AI policy covering approved tools and prohibited ones, and treat AI output like a junior associate's draft: useful, fast, and reviewed before it leaves the building.

Where AI still gets it wrong

An honest list of limits worth knowing about before you sign anything.

  • Hallucination on open-ended legal research. Stanford's 2024 study found that even legal-specific AI tools hallucinated cases or misstated holdings on roughly 1 in 6 queries. General-purpose LLMs were meaningfully worse. Mata v. Avianca and the Michael Cohen sanctions filings remain the canonical reminders. Verify every cite.

  • Off-screen and off-platform work. Anything that doesn't touch a screen (pen-and-paper notes, in-person client meetings, hallway conversations) is invisible to passive capture tools. Plan for short manual entries to fill gaps.

  • Jurisdictional and procedural fine print. AI is competent on the federal rule and the majority state rule. It misses local rules, judge-specific preferences, recent procedural amendments, and bench-and-bar conventions that aren't well documented online. Verify against the local source before you file.

  • New matters and ambiguous edge cases. First few entries on a new matter, party-name overlaps across cases, and novel deal structures all need close supervision until the AI has learned the context. Most legal-specific tools catch up within a week or two; the first days are where your attention should concentrate.

  • Adoption decay. Heavy use in week one, less in month three, sometimes nothing by month six. The ROI math depends on sustained use. The strongest predictor of sustained use is how little effort the tool requires from the lawyer once it's running.

How to actually roll AI out at your firm

Picking the tool is the simpler part. Most firms working out how to use AI as a lawyer get tripped up at the rollout stage, where adoption is the real test. A 90-day plan beats a 12-month strategy document. The order we'd recommend:

  1. Pick one or two jobs to start. Choose the highest-ROI category that fits your practice. For litigation and transactional firms running on the billable hour, that's timekeeping. For transactional teams and in-house departments, contract review. For any firm with active discovery, document review.

  2. Run a real pilot, not a demo. Two weeks of real work, real matters, real lawyers. Track hours captured, write-downs, time-to-bill, and attorney satisfaction. A demo on canned data tells you nothing about how the tool performs on your work.

  3. Designate an AI champion. One partner or senior associate owns the rollout, fields questions, and reports adoption back to firm leadership. Without an owner, even good tools quietly die on lawyers' desktops.

  4. Write a firm AI policy. A one-pager covering approved tools, prohibited tools (consumer ChatGPT with client data, scraped public LLMs), client-disclosure norms, and supervision standards. Most state bars now expect firms to have something in writing.

  5. Train the team. A 60-minute internal session per tool, supplemented by CLE-eligible programs from your state bar or the ABA. Reference materials and an internal Slack or Teams channel for questions that come up after training.

  6. Add the next category once the first is sticky. Sticky means 80%+ of seats actively used at month three. If the first tool hasn't crossed that bar, fix the rollout before adding a second tool. Adding more software won't fix an adoption problem.
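The "sticky" bar in step 6 is simple to track month over month. Here's a minimal sketch in Python; the function name, the 30-day definition of "active," and the sample numbers are our own illustrative assumptions, not part of any vendor's product:

```python
# Step 6's adoption bar, sketched as a one-line check.
# Assumption (ours, not a standard): a seat is "active" if the
# lawyer used the tool in the last 30 days; "sticky" means 80%+
# of paid seats are active at month three.

def is_sticky(active_seats: int, total_seats: int, threshold: float = 0.80) -> bool:
    """True when month-three adoption clears the bar for adding
    the next tool category to the rollout."""
    if total_seats == 0:
        return False
    return active_seats / total_seats >= threshold

print(is_sticky(active_seats=17, total_seats=20))  # True: 85% adoption
print(is_sticky(active_seats=12, total_seats=20))  # False: 60%, fix the rollout first
```

If the check comes back false at month three, the fix is the rollout (champion, training, workflow), not a second tool.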

How Ajax can help you capture more billable hours

Of the seven categories above, Ajax sits in timekeeping. We lead with timekeeping in the rollout sequence because it delivers the highest, fastest-realized ROI for any firm built on the billable hour.

Ajax reads the actual content on a lawyer's screen, drafts client-ready time entries, groups related work across the day, and learns case-specific keywords and party names from corrections. The day-to-day impact for firms using it shows up in four places:

  • Lawyers get back 15 to 45 minutes a day on timekeeping. Ajax runs in the background, drafts the entries, and presents them ready to review. Review usually takes a few minutes a day.

  • Firms capture more billable hours. The average lift across Ajax deployments is 12%. For a 10-attorney firm billing $300 per hour at 1,600 hours per attorney per year, that's roughly 192 additional hours per attorney, or about $300,000–$576,000 in recovered annual revenue depending on assumptions.

  • The subscription pays for itself quickly. Typical payback is around 11 days; one recovered hour per user per month covers the cost.

  • On-time billing improves alongside accuracy. Our analysis of nearly 170,000 time entries found that the typical timekeeper using automated capture releases entries within 10.4 hours, and 62% of timekeepers average under 24 hours.
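The revenue math in the second bullet is easy to reproduce. A back-of-envelope sketch (the function and its inputs are our own illustration; swap in your firm's rate, hours, and lift):

```python
# Back-of-envelope math behind the 12% capture-lift figure above.
# All inputs are illustrative; plug in your own firm's numbers.

def recovered_revenue(attorneys, rate_per_hour, annual_hours, capture_lift):
    """Return (annual revenue recovered, extra hours per attorney)
    for a given billable-hour capture lift."""
    extra_hours = annual_hours * capture_lift
    return attorneys * extra_hours * rate_per_hour, extra_hours

revenue, extra_hours = recovered_revenue(
    attorneys=10, rate_per_hour=300, annual_hours=1600, capture_lift=0.12
)
print(extra_hours)  # 192.0 extra hours per attorney per year
print(revenue)      # 576000.0, the top end of the range above
```

The lower end of the quoted range comes from more conservative assumptions about rate and realized hours.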

If timekeeping is your starting point, book a demo and we'll show you what entries look like for your matters, your billing guidelines, and your billing system.

Final thoughts

AI for lawyers in 2026 is several different jobs with different accuracy bars, confidentiality profiles, and ROI curves. The order and rigor of the rollout matter more than the brand on the box. 

Pick one job that matches your highest-leverage workflow, evaluate vendors on the six questions that matter, write the policy, run a real pilot, and add the next category only once the first is sticky.

If your firm's billable minutes disappear before they reach the billing system, that's where Ajax fits. Reach out to the Ajax team or book a demo, and we'll walk you through a rollout for your firm.



Schedule a demo. Start a two-week pilot. See the results before you decide.

Book a demo