2026 SKILLS Legal AI Survey: Q&A With Oz Benamram
Oz Benamram, founder of SKILLS.law, answers our questions about this year’s SKILLS Legal AI Survey results.
As law firms move from experimenting with AI to deploying it across real matters, one question matters more than ever: Where is Legal AI actually being used in production?
With the release of the SKILLS Legal AI Use Cases Survey, the industry now has a clearer view. Based on responses from leaders at 130 of the world’s largest law firms, the survey highlights where AI tools are gaining traction across core legal workflows.
We spoke with Oz Benamram about what the results reveal and what surprised him the most.
When you look across this year's results, what stands out most to you?
What strikes me most is how far we've moved past the experimentation phase. When you visit the survey dashboard and look at the “Live” column across categories like Legal Drafting, Contract Review, Due Diligence, and eDiscovery, you're not seeing a handful of firms cautiously piloting something — you're seeing 40‑plus firms, out of the 130 who answered the survey, running these tools in production, in client‑facing work.
Just looking at your results for Harvey: 41 firms are live simultaneously in Legal Drafting, Contract Review, and Contract Negotiation, and 39 in Due Diligence. That’s not a tool being evaluated. That’s a platform being operationalized.
The story this year isn’t “Are firms adopting AI?” It’s “Which firms are moving fast enough to stay competitive, and how are they managing the risk and change that come with that?” That second part is where I spend most of my time with firms.
Some platforms appear across a wide range of use-case categories, while others are concentrated in narrower areas. What does that tell you about how firms are thinking about AI deployment?
There are two distinct procurement strategies playing out right now. A handful of platforms like Harvey appear meaningfully across eight to twelve use‑case categories each, while point solutions like Kira, Relativity, SimplyAgree, and Jigsaw still dominate one or two categories and hold very deep installed bases there. What I’m seeing is firms essentially running a two‑layer architecture: a primary legal AI platform for cross‑practice, substantive work, plus best‑of‑breed specialists for workflows complex enough to justify them.
In practice, it’s a harder decision than it sounds. The firms that are getting this architecture decision right made a deliberate choice about where to standardize, where to stay flexible, and how to manage change across practices. That’s often where firms get help translating this two‑layer concept into concrete vendor, workflow, and governance decisions.
An exciting development that’s not reflected in the survey is that we are starting to see partnerships between legaltech providers in which each plays to their strengths. It’s safe to assume that some of these may lead to future M&A combinations.
The interesting pressure point is what happens when the horizontal platforms get good enough in those specialist areas. Kira has lost several firms since last year’s survey, but its position in Contract Review remains strong, with 44 firms live — and Harvey is right behind at 41, with far broader reach across other categories. That convergence is worth watching, because it forces firms to revisit earlier “point solution vs platform” assumptions.
What I'd caution firms against is assuming the point-solution vendors are safe forever. The "Consider" numbers for the horizontal platforms are growing in categories previously dominated by specialists, and that's the leading indicator to watch.
The firms that get this right typically start by mapping which workflows genuinely need deep specialization versus which ones they’re just habituated to solving with a point solution. Once you see that map, the consolidation decisions become much clearer, and it’s much easier to negotiate with vendors from a position of strategy rather than inertia.
Many of the leading categories involve substantive, client-facing work — drafting, negotiation, due diligence, discovery. What does adoption in those areas signal about where the market is?
The widespread adoption of these tools now indicates that the governance frameworks have caught up enough for production deployment: policies, review workflows, audit trails, and training that partners can actually live with.
What it also signals is that competitive pressure is now the dominant forcing function. If your peer firms are delivering contract review faster and with higher consistency, you can't hold the line on manual review indefinitely. The market has crossed a threshold where inaction has a visible cost, and firms without a clear governance and deployment plan are feeling it first.
You're highlighting collaboration as one of the featured categories this year. Why is that becoming more important now?
Two reasons: First, firms are realizing that AI amplifies the cost of fragmentation. When associates use different tools without a shared workflow layer at a manual pace, you can absorb or hide that inefficiency. When AI accelerates work by an order of magnitude, the lack of coordination becomes visible and costly.
Second, and probably more important for the long run, is that collaboration between law firms and clients is about to change drastically. Law firms must rethink how they share their knowledge and expertise with clients, or they will become irrelevant. What I’m already seeing is clients asking their outside firms to provide AI‑assisted deliverables, such as structured data extracts, interactive summaries, and live deal rooms. The firms that can’t collaborate in that way are going to find themselves disintermediated, not by other law firms, but by clients building internal capability.
The firms getting ahead of this aren’t waiting for clients to demand it — they’re redesigning their delivery model now, usually as a joint effort between innovation, KM, and key client teams, and they often partner with a vendor as a development partner. That’s a very different kind of project than a tool evaluation, and it’s exactly the kind of work that keeps the focus on client‑visible outcomes rather than internal tooling.
In categories historically dominated by established incumbents, how are firms integrating newer AI platforms alongside existing systems?
The pattern I see is coexistence followed by quiet displacement. The diagnostic I use with firms is simple: Are your lawyers choosing the tool because it’s better for the work, or because they are used to it? Those situations require completely different responses. In Legal Research, Westlaw Precision leads with 43 live and Westlaw Edge at 38 — the incumbents are firmly in place — but Harvey is at 31 live in the same category, and its “Consider” number continues to grow.
Firms are layering newer platforms like Harvey on top of their existing legal research tools and then watching which one their lawyers actually reach for. The key observation is that the AI‑native platforms often got in through a different door: they were greenlit for a specific use case, proved value, and are now expanding horizontally. Incumbents that embedded AI credibly into their products, like Relativity aiR, have held their ground better than those that didn’t.
If a firm is reviewing the dashboard for the first time, what's the one insight you think they should focus on?
Look at the "Consider" column relative to the "Live" column in each category — that ratio tells you more than the absolute numbers. In Search and Retrieval, DeepJudge has 6 live and 35 in "Consider." In Automated Timekeeping, Laurel has 4 live and 29 in "Consider." Those gaps represent decisions that are in motion. If your firm is still at zero in those categories, you're behind your peers.
That’s a conversation firm leaders need to have honestly, and if you are just starting now, it’s easier to have with an external consultant who can benchmark you against the broader market and facilitate the trade‑offs without internal politics getting in the way.
Where do you see the next 12 months of Legal AI adoption concentrating?
Four areas stand out from the survey:
First, knowledge and search infrastructure: The "Consider" numbers for DeepJudge and Ask iManage are among the highest in the survey, indicating that firms have identified the knowledge management layer as the next priority. Figuring out RAG infrastructure and data governance will be key to AI's long-term success.
Second is agentic workflows: The investment in workflow automation is just beginning, and recent developments around agentic AI and agent skills make this an exciting and fast-moving area.
Third, AI governance: The category has low live numbers right now (2-5 across tools) but significant "Consider" volume, and external pressure from clients and regulators will accelerate that. We will see more solutions enter this market, such as Intapp’s recent announcement. The firms that build governance infrastructure now will have a meaningful advantage in the coming years. For most firms, the practical challenge is sequencing: you can’t do knowledge infrastructure, agentic workflows, and governance all at once. A lot of my work over the next year will be helping firms prioritize and choose an order that aligns with their client base, risk appetite, and internal capacity.
I'll add a fourth signal that isn't in the dashboard. Since we launched the weekly Legal AI Use Case Demo Seminars last month, the number of use cases firms are asking to learn about has nearly doubled. Topics that didn't exist when we ran the survey a few months ago, such as law firm–client collaboration, agentic AI, and AI governance, are now filling the queue. That tells me curiosity is running ahead of deployment, which means the next 12 months will move faster than the last 12.
For anyone who wants to stay current on what's actually being deployed and have an honest conversation about use cases with their peers, the SKILLS weekly seminars are the place to do that. You can see the full report on our website at skills.law under Surveys, and sign up for the weekly seminars there.