That number probably caught your eye: 90%. And you're right to question it. In compliance, 90% isn't good enough.
That’s why we never rely on AI alone. Our model is human-in-the-loop: domain experts rigorously review outputs to bring the assurance AI can’t offer on its own. AI agents do the heavy lifting, but the final say sits with the compliance team, exactly where it belongs.
Now, the interesting bit.
How do you actually build AI agents to interpret regulation—when even compliance professionals struggle with it?
Let’s break that open.
Most people assume the workflow is as simple as:
- Take the legislation.
- Simplify the language.
- Pull out the obligations.
- Tick the boxes.
It’s not.
The law isn’t written for execution. It’s full of ambiguity, cross-references, embedded conditions, and exceptions. If it didn’t require interpretation, every lawyer would moonlight as a compliance officer—and change managers would be out of work.
The real challenge isn’t writing an obligation register—it’s engineering one that changes behaviour. Even human experts struggle with this. Most obligation registers are static. Most policies aren’t traceable to legislation. Most controls aren’t tested against real-world change.
So, how do you get AI agents to do what humans can’t consistently manage?
You don’t just prompt models—you build systems.
Getting a gen AI model to read regulation is not about dumping a few PDFs into its context window and asking it to “find the rules.” That’s table stakes. What actually works—what gets you to that elusive 85–90% accuracy—isn’t a model. It’s an engineered system with multiple layers of intelligence, feedback, and accountability.
Here’s what that takes:
- A tightly curated legal corpus. Most regulations don’t live in isolation. You need the full legislative context—primary provisions, definitions, cross-referenced laws, guidance notes, consultation papers. Without this scaffolding, models hallucinate or miss critical conditions.
- Granular annotation, not just labels. It’s not enough to tell a model, “This is an obligation.” You need to instruct it how to distinguish duties from guidance, to extract embedded conditions, to isolate the actors and actions in each clause. That means fine-tuning on multi-dimensional labels—scope, applicability, triggers, exclusions, references.
- Semantic awareness, not just syntax parsing. The model must understand that “a firm must ensure X unless Y” is different from “a firm may X if Y.” It must understand the interplay of qualifiers, carve-outs, exceptions. That takes more than a simple prompt—it takes structural understanding.
- Feedback loops with domain experts. Every output is reviewed, refined, and folded back into fine-tuning. Our human-in-the-loop process isn’t a safety net—it’s part of the engine. The prompts and models are tuned not just on patterns in text, but on patterns in expert judgment.
- Operational integration. It’s not just about “what” the rule says—it’s “who” owns it, “where” it applies, and “how” it’s evidenced. That means tying each obligation to a control, a team, and real-world data.
- Change resilience. Gen AI models can produce different outputs from run to run, and laws evolve. What was true last quarter may not apply tomorrow. So you build version control into every layer: inputs, logic, outputs. You treat your compliance system like a software stack—continuously integrated, versioned, and monitored.
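The ideas above can be sketched as a data model. This is a hypothetical illustration, not Zango's actual schema: the clause IDs, field names, and `version_hash` helper are invented for the example. It shows multi-dimensional labels (actor, deontic type, conditions, exclusions, references), the operational mapping to an owner and a control, and a content hash so any change to a clause or its mapping is detectable.

```python
from dataclasses import dataclass, field
from enum import Enum
import hashlib
import json

class DeonticType(Enum):
    DUTY = "must"           # "a firm must ensure X unless Y"
    PERMISSION = "may"      # "a firm may X if Y"
    PROHIBITION = "must not"

@dataclass
class Obligation:
    clause_id: str                                  # e.g. "REG 12.3(1)" (made-up citation)
    actor: str                                      # who the clause binds
    action: str                                     # what they must/may do
    deontic: DeonticType
    conditions: list = field(default_factory=list)  # triggers
    exclusions: list = field(default_factory=list)  # carve-outs
    references: list = field(default_factory=list)  # cross-referenced provisions
    owner: str = ""                                 # team accountable for the control
    control_id: str = ""                            # control that evidences compliance

    def version_hash(self) -> str:
        """Hash the full record so any change to the clause or its mapping is visible."""
        payload = json.dumps(self.__dict__, default=str, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]

# Illustrative record, not a real regulatory clause.
ob = Obligation(
    clause_id="REG 12.3(1)",
    actor="firm",
    action="report suspicious transactions",
    deontic=DeonticType.DUTY,
    conditions=["transaction exceeds threshold"],
    exclusions=["intra-group transfers"],
    owner="Financial Crime",
    control_id="CTRL-042",
)
v1 = ob.version_hash()
ob.exclusions.append("regulated counterparties")  # the law evolves
assert ob.version_hash() != v1                    # the change is detected for re-review
```

The point of the hash is the versioning discipline from the last bullet: when a regulator amends a clause or a mapping changes, the register flags the drift instead of silently staying stale.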
That’s the lesson we’ve learned while building Zango: The goal isn’t a well-worded spreadsheet buried in a GRC folder. It’s a living, breathing infrastructure that threads compliance into the flow of work—from policy to product, from codebase to controls.
Because in AI, “what good looks like” changes fast.
Your obligation register should too.