
Sweating the Small Stuff: What Lawyers Should Expect from AI

February 26, 2026

By Richard Gorelick, Co-Founder & CEO

When I was a first-year associate, I handed a draft to a partner who found a small grammatical error, the kind you catch only on a slow reread. He didn’t yell, but he paused long enough to make the point.

I asked (politely, I’m sure) why it mattered so much.

“If your clients see you making mistakes with the little stuff, they won’t trust you to get the big stuff right.”

I’ve never forgotten that. In most professions, “don’t sweat the small stuff” is reasonable advice. For lawyers, it’s the opposite. Precision is the product. Small errors signal big risks.

Lately I’ve been thinking about that lesson in the context of AI.

What standard of correctness should lawyers expect from AI?

Legal teams have not yet settled on an answer. Do we judge AI like a junior associate or a senior paralegal, smart and capable but needing supervision? Or like a computer, something that simply shouldn’t make mistakes at all?

There’s no consensus. And there may not be for some time.

For decades, software was deterministic. If a computer produced an error, it meant a human had mis-programmed or mis-configured it. Generative AI breaks that mental model. It can seem brilliant in one moment and inconsistent in the next. It behaves less like a calculator and more like a smart but erratic colleague.

Some lawyers accept that tradeoff. After a few years of LLMs in daily use, many have learned to tolerate occasional inconsistency if the system delivers the bulk of the answer quickly and accurately. In many areas of practice, “90% right in a few seconds” feels like a reasonable exchange.

But litigation is not a 90/10 environment. A wrong date changes a narrative. A misidentified participant shifts a theory. A hallucinated fact can contaminate a filing. Lawyers do not grade on a curve when the stakes are high.

So what should the expectation be? The profession hasn’t decided. And that ambiguity shapes how teams adopt, or reject, AI tools.

When expectations are unclear, vendors need to be clear

If legal teams don’t know what standard to hold AI to, the burden shifts to the people building the tools. Vendors owe their users clarity about how their systems behave, where they’re precise, and where they rely on approximation.

That clarity comes down to four things.

  • Be explicit about what’s certain and what’s not. If something is extracted deterministically (a date, a sender, a timestamp), say so. If something is probabilistic (a summary, a classification, an inference), say that too. Users should never have to guess which is which.
  • Show your reasoning. If the system reaches a conclusion, surface the evidence that supports it. If it’s uncertain, expose that uncertainty rather than smoothing it over. Traceability builds trust; opacity destroys it.
  • Separate fact from interpretation. Draw a bright line between outputs grounded in structured, validated data and outputs where the model is interpreting, summarizing, or inferring. Both are useful, but they carry different levels of reliability. Users should know which one they’re looking at.
  • Make it easy to verify and correct. Lawyers verify everything. Give them straightforward tools to check, override, and trace the system’s work. A product that can be audited is a product that can be trusted.
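To make the distinction concrete: the first three principles amount to attaching provenance and evidence to every output. Here is a minimal sketch of what that might look like in code. Every name here is hypothetical, invented for illustration; it is not ChronoTracer’s actual data model or API.

```python
from dataclasses import dataclass, field
from enum import Enum

class Provenance(Enum):
    DETERMINISTIC = "deterministic"   # pulled directly from structured data
    PROBABILISTIC = "probabilistic"   # produced by a model: summary, inference

@dataclass
class Finding:
    """One output, tagged with how it was produced and what supports it."""
    value: str
    provenance: Provenance
    evidence: list = field(default_factory=list)  # sources supporting the value
    confidence: float = 1.0  # meaningful only for probabilistic findings

    def needs_review(self) -> bool:
        # Anything model-generated warrants a human check before it is relied on.
        return self.provenance is Provenance.PROBABILISTIC

# A timestamp read from an email header is deterministic:
sent_date = Finding("2024-03-14", Provenance.DETERMINISTIC,
                    evidence=["email header: Date"])

# A model-written summary is probabilistic, and the UI should say so:
summary = Finding("Parties discussed settlement terms.",
                  Provenance.PROBABILISTIC,
                  evidence=["doc_0042.eml"], confidence=0.82)
```

A system built this way can never show a user a bare answer: every value arrives with its provenance tag and its supporting evidence, so the interface can draw the bright line between fact and interpretation automatically.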

Trust comes from clarity, not magic

Lawyers still haven’t agreed on how to think about AI, whether it’s a tool, an intern, an advisor, or something else entirely. Until those expectations settle, transparency is the only sensible posture.

Tell users what the system guarantees, what it approximates, how it reached a conclusion, and where they should slow down and double-check.

That’s how we think about things at ChronoTracer. In a profession where small errors can undermine large arguments, precision isn’t optional, and neither is candor about where precision ends and approximation begins.

If legal technology vendors want the trust of legal teams, we have to earn it the same way lawyers do: by sweating the small stuff and being honest about our choices.
