Long Horizon Agents and Ethical Walls

Why information barriers are the most important unsolved problem in legal AI.

by Gabe Pereyra
Mar 12, 2026

For the past two years, AI in law has meant chat. A lawyer types a question, gets a response, follows up. The interaction is contained — one person, one session, one task. The lawyer controls what goes in and what comes out.

That era is ending.

The Shift From Chatbots to Agents

The next generation of legal AI operates as agents: autonomous systems that can review an entire data room, draft across multiple documents, conduct multi-step research, and produce work product — all without a human in the loop for every step. These aren't hypothetical. We're already seeing agents that can process a due diligence request list across hundreds of documents, identify regulatory filing requirements across jurisdictions, and draft first-pass responses to diligence questions — tasks that previously took a team of associates days or weeks.

The ambition is what we call long horizon agents: AI that works on complex legal tasks over extended periods, maintaining context across sessions, accessing firm document management systems, and coordinating multi-step workflows. Think of it less like asking an associate a question and more like assigning them to a deal.

This is transformative for law firm productivity. It's also a ticking time bomb for information governance, because the ethical obligations that constrain how lawyers handle client information don't get easier when you add autonomous AI. They get dramatically harder.

What Are Ethical Walls and Why Do They Exist?

Ethical walls, also called screens, are the procedures law firms use to prevent confidential information from flowing between lawyers or groups within the firm. They're rooted in some of the most fundamental obligations in legal ethics:

ABA Model Rule 1.6 requires lawyers to protect confidential client information. Model Rules 1.7 through 1.9 define when a lawyer may not represent a client because of conflicts with the interests of current or former clients. Model Rule 1.10 imputes these duties across the firm: a conflict that disqualifies one lawyer generally disqualifies every lawyer in the firm, with limited exceptions where a sufficient ethical wall screens the conflicted lawyer from the representation.

In practice, ethical walls arise constantly. A partner joins from a competitor and brings knowledge of an adverse client. A firm represents both the buyer and a target's competitor. An associate rotates from litigation defending a company to a team suing it. In each case, the firm must erect barriers that are robust enough to prevent any leakage of confidential information — and provable enough to survive judicial scrutiny if challenged.

The consequences of failure are severe. Courts can disqualify the entire firm from a matter. Clients can bring malpractice claims. State bars can impose disciplinary sanctions. The reputational damage alone can cost a firm its most important relationships.

Today, most firms manage ethical walls through a combination of:

  • Conflicts checking systems (predominantly Intapp, which serves most of the Am Law 200)
  • DMS access controls (iManage Work, NetDocuments workspace controls)
  • Physical and administrative measures (separate floors, restricted email groups, billing code isolation)
  • Written notices to walled personnel

These systems work because the boundaries are clear. Documents live in folders. People have access lists. Email goes to distribution groups. The unit of control is a person accessing a resource, and the firm can restrict that access at every point.

Why Agents Change Everything

When a lawyer uses a chatbot, the ethical wall question is simple: Does this lawyer have access to the information they're asking about? If yes, the AI can use it. If no, it can't. The information boundary is the same as it's always been: controlled at the point of human access.

Long horizon agents break this model in three fundamental ways.

1. Agents access data, not the lawyer

When an agent autonomously retrieves documents from a firm's DMS, it's acting on behalf of a lawyer, but it's making its own decisions about what to read. An agent asked to "review the buyer's representations in the SPA and flag anything unusual" might pull 50 documents from iManage. Did each of those documents fall within the scope of what the supervising lawyer should have access to? If the agent accesses a document behind an ethical wall, the breach has already occurred — even if the lawyer never sees the result.

This isn't a theoretical risk. Firms store millions of documents in their DMS, and matter boundaries aren't always clean. An acquisition target might share a name with an existing client. A document might be misfiled. A cross-reference might point to a restricted matter. A human lawyer who opens a document that plainly belongs to another matter knows to stop. An agent doesn't; it processes whatever it retrieves.

2. Agents maintain context across time

A chatbot conversation is stateless by default. Each session starts fresh. But long horizon agents are valuable precisely because they maintain context — they remember what they found yesterday, they build on prior analysis, they track state across a multi-week engagement.

This creates a new kind of information leakage risk. If an agent working on Matter A picks up a fact, and then the same agent (or an agent sharing the same context) is later assigned to Matter B — which is on the other side of an ethical wall — does that prior context contaminate the new work? The answer under current ethics rules is unambiguously yes. The question is whether the technology can prevent it.

Lawyers deal with this by being aware of their own conflicts. They recuse themselves. They decline assignments. They flag issues to the general counsel. An agent has no such instinct. It will use whatever context it has to produce the best possible work product, which is exactly what you don't want when that context crosses an information barrier.

3. Agents operate at a scale humans can't monitor

A junior associate may review 30-50 documents a day across multiple deals. At this volume, the supervising attorney can realistically review the associate's work and catch an unexpected conflict of interest. An agent can process hundreds of documents in minutes. The supervising lawyer can't review every retrieval, every intermediate reasoning step, every source accessed. They see the output, not the process.

This is the core tension: agents are valuable because they operate faster and at greater scale than humans. But the ethical oversight model in law assumes human-speed, human-scale operation. You can't have autonomous agents and manual ethical wall compliance. The wall enforcement has to be as automated as the work itself.

What the Solution Looks Like

Fail closed, not open

Traditional DMS access controls often fail open — if the system can't determine whether access should be granted, it grants it and logs the access for later review. For routine document retrieval by a human who might immediately recognize they're looking at the wrong matter, this is a reasonable trade-off between security and usability.

For autonomous agents processing hundreds of documents, failing open is catastrophic. If an agent can't confirm that a document falls within the matter's boundary, it must skip that document and flag the uncertainty. The work product might be less complete, but the ethical wall is intact.
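To make the control flow concrete, here's a minimal sketch of a fail-closed retrieval gate. The names (resolve_matter, RetrievalResult) are illustrative assumptions, not any vendor's actual API; the point is the else branch, where uncertainty is treated as denial rather than permission.

```python
# Minimal sketch of a fail-closed retrieval gate. All names here
# (RetrievalResult, resolve_matter) are hypothetical, not a real API.
from dataclasses import dataclass, field

@dataclass
class RetrievalResult:
    allowed: list = field(default_factory=list)
    skipped: list = field(default_factory=list)  # surfaced to the lawyer as flagged uncertainty

def filter_retrievals(doc_ids, matter_id, resolve_matter):
    """Admit a document only when it provably belongs to the matter.

    resolve_matter(doc_id) returns the document's matter ID, or None
    when the boundary can't be determined: the fail-closed case.
    """
    result = RetrievalResult()
    for doc_id in doc_ids:
        owner = resolve_matter(doc_id)
        if owner is not None and owner == matter_id:
            result.allowed.append(doc_id)
        else:
            # Unknown or mismatched boundary: deny, don't grant-and-log.
            result.skipped.append(doc_id)
    return result
```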

Matter-centric product isolation

The most important design decision is making the client matter the atomic unit of the product. Every agent, every document, every conversation, every piece of work product must be scoped to a specific matter. When a lawyer opens a matter, they see only what belongs to that matter. When an agent runs, it can only access resources within that matter's boundary.

This is analogous to how firms already think about work. Every engagement has a matter number. Every document is filed to a matter. Every hour is billed to a matter. The AI platform should mirror this structure, not as a metadata tag, but as a hard security boundary.

In practice, this means the product must integrate directly with the firm's DMS and conflicts system. When a matter is created in Harvey, it should inherit its ethical wall configuration from the firm's existing systems — the same walls that control DMS access, email distribution, and billing should control what the AI can see.
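As a sketch of what a hard security boundary could mean in code (MatterScope, AgentSession, and the dms interface are hypothetical, not Harvey's actual types), an agent session might be constructed around exactly one matter, so that any read outside it raises an error instead of returning content:

```python
# Hedged sketch: the matter, not the user, is the unit of control.
from dataclasses import dataclass

class WallViolation(Exception):
    """Raised when an agent touches a resource outside its matter."""

@dataclass(frozen=True)
class MatterScope:
    matter_id: str

    def check(self, resource_matter_id: str) -> None:
        # A hard boundary: a mismatch is an error, not a warning.
        if resource_matter_id != self.matter_id:
            raise WallViolation(
                f"{resource_matter_id!r} is outside matter {self.matter_id!r}"
            )

class AgentSession:
    """Every retrieval and every context write is bound to one scope."""

    def __init__(self, scope: MatterScope, dms):
        self.scope = scope
        self.dms = dms
        self.context = []  # context lives and dies with the matter

    def read(self, doc_id: str) -> str:
        doc = self.dms.get(doc_id)
        self.scope.check(doc.matter_id)  # enforced before content is seen
        self.context.append(doc_id)
        return doc.text
```

Because the context store belongs to the session and the session belongs to the matter, the contamination risk described earlier is addressed structurally: there is no shared memory for a wall to leak through.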

Integration with conflicts infrastructure

The legal industry already has robust conflicts-checking infrastructure. Intapp processes millions of conflict checks annually. iManage Work enforces document-level access controls at most major firms. These systems represent decades of investment in getting information barriers right.

An AI platform that tries to build its own parallel conflicts system is making a mistake. The right approach is deep integration: When Intapp says a wall exists between Matter A and Matter B, the AI platform enforces that wall at the retrieval layer, the context layer, and the output layer. When iManage restricts access to a workspace, the AI agent inherits those same restrictions.
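A hedged sketch of that inheritance, with walls_for standing in for whatever interface the conflicts system exposes (an assumption, not a documented Intapp API):

```python
# Illustrative sketch of inheriting walls instead of redefining them.
# conflicts_client.walls_for is a hypothetical integration point.

def load_wall_config(conflicts_client, matter_id: str) -> frozenset:
    """The conflicts system, not the AI platform, is the source of truth."""
    return frozenset(conflicts_client.walls_for(matter_id))

def enforce(walls: frozenset, source_matter_id: str, layer: str) -> None:
    """The same wall set gates every layer an agent touches."""
    if source_matter_id in walls:
        raise PermissionError(f"ethical wall enforced at the {layer} layer")

# One wall configuration, three enforcement points:
#   enforce(walls, doc.matter_id, "retrieval")  before a document is read
#   enforce(walls, memo.matter_id, "context")   before prior work is reused
#   enforce(walls, src.matter_id, "output")     before a source is cited
```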

This is what genuine partnership between a legal AI platform and the existing legal technology ecosystem looks like — not replacing the systems that work, but extending their enforcement to a new surface area. We recently announced a partnership with Intapp and are thrilled by the new possibilities that come from building together.

Audit trails that satisfy courts

When a firm points to ethical walls to defeat a disqualification motion, the court wants evidence: who was screened, when was the screen erected, what measures were taken, and can you prove they worked?

AI agents create a new category of evidence requirements. Firms need to demonstrate not just that a lawyer was screened from a matter, but that the AI acting on that lawyer's behalf was also screened. Every document retrieval, every context window, every agent session needs to be logged at a level of detail sufficient to prove — after the fact — that no walled information was accessed.
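What might such a log entry contain? A minimal sketch, with illustrative field names rather than a prescribed schema:

```python
# Hedged sketch of a per-action audit record. What matters is that every
# agent action is traceable to a matter, a wall configuration, and a time.
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, matter_id: str, action: str,
                 resource_id: str, wall_config_version: str) -> str:
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "matter_id": matter_id,                     # session's scoped matter
        "action": action,                           # e.g. "retrieve"
        "resource_id": resource_id,                 # document or context key
        "wall_config_version": wall_config_version  # proves which walls applied
    })
```

Recording a wall-configuration version with each action matters because walls change over time; a court asking what was in force on a given date needs that answer pinned to the record, not reconstructed afterward.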

This is non-negotiable. A firm that deploys AI agents without auditable ethical wall enforcement is creating discoverable evidence of inadequate screening procedures. The agents will be more productive, but the malpractice exposure makes the productivity gain worthless.

What Firms Should Do Now

The firms deploying AI most aggressively are also the most exposed. If your firm is piloting AI agents — or planning to — here's what matters:

Audit your current conflicts infrastructure. Do you know where every ethical wall in your firm is documented? Can your conflicts system communicate those walls to external platforms via API? If your walls live in spreadsheets or email notices, they can't be enforced programmatically.

Demand matter-level isolation from your AI vendors. Ask your AI platform: When an agent runs a task, what data can it access? Is access scoped to a single matter or to the user's full permissions? How is wall enforcement implemented — at the application layer, the data layer, or both? If the vendor can't answer these questions precisely, their product isn't ready for sensitive work.

Require integration with your existing systems. Your conflicts system and your DMS already know where the walls are. Your AI platform should inherit those boundaries, not require you to recreate them. Any solution that asks you to maintain parallel wall configurations across multiple systems is creating gaps.

Insist on auditable logging. Every document an agent accesses, every context window it builds, every output it produces should be traceable to a specific matter with a specific wall configuration. This isn't a nice-to-have; it's the evidence you'll need if your wall is ever challenged.

The firms that get this right will be the ones that move fastest with AI — not because they're less cautious, but because they've built the governance infrastructure that lets them be bold. The firms that get it wrong will learn the hard way that an AI-generated brief using information from behind an ethical wall isn't a productivity win. It's a crisis.

Harvey is the AI platform for legal professionals. Our ethical walls implementation integrates with firm DMS and conflicts systems to enforce information barriers at every layer of agent operation — from document retrieval to context management to output generation.