AI adoption across financial services is accelerating. While many firms remain in a phase of experimentation, AI is moving from isolated pilots to operationally embedded systems, including more autonomous and agentic workflows.
As this shift gathers pace, it raises pressing questions about governance, oversight, documentation, risk management, and accountability.
To better understand how the sector is responding - and what effective AI governance will need to look like over the next several years - Zango is launching a new research initiative: The Future of AI Governance & Compliance in Financial Services.
This collaborative research programme runs across 2025–26 and aims to generate evidence-based insight into how AI governance in financial services is evolving, where current frameworks are falling short, and what capabilities institutions will need to develop as AI adoption scales.
Why this research now
Regulators are increasingly focused on how firms are operationalising AI governance in practice - through day-to-day controls, accountability structures, and decision-making processes.
At the same time, many existing governance frameworks were designed for static or narrowly scoped systems, rather than adaptive, data-driven models that can evolve over time.
This creates a growing gap between regulatory expectations and organisational readiness. Compliance, risk, legal, and model governance teams are being asked to oversee increasingly complex AI systems, often without clear precedents, shared standards, or sufficient internal capability.
This research seeks to explore that gap - and to provide forward-looking insight into how it can be addressed.
Research aim and scope
The aim of the research is to produce a practical, future-focused assessment of what effective AI governance and compliance will look like in financial services over the next one to five years.
The research examines AI governance as an organisational and regulatory challenge - spanning legal interpretation, operating models, skills, accountability, and oversight mechanisms.
Key themes include:
- How and where AI is being deployed across financial institutions today, and what is accelerating or constraining adoption
- How AI is reshaping the work of compliance, risk, legal, and governance functions
- The adequacy of existing governance frameworks for newer AI systems, including generative and agentic models
- Emerging regulatory expectations and what “reasonable steps” and senior accountability look like in practice
- The open questions, capability gaps, and evidence needs that will shape the next phase of AI governance
Research approach
The study is coordinated by Zango and is based on qualitative interviews with senior leaders across financial services, including those responsible for compliance, risk, legal, model governance, and AI strategy.
Findings from the interviews will be synthesised into a published research report, alongside supporting analysis and commentary over the course of the programme.
Contributors and advisers
The research is supported by contributors and advisers with deep expertise in AI governance, financial-system risk, and technology law:
- Andrew Sutton - Contributor
  Research Affiliate at the Oxford Martin School AI Governance Initiative and affiliated with the Centre for the Governance of AI (GovAI), with a focus on AI governance and financial-system risk.
- Dr Alessio Azzutti - Contributor
  Lecturer in Law and Technology (FinTech) at the University of Glasgow, specialising in the intersection of law, finance, and emerging technologies.
- Dean Nash - Adviser
  Global Chief Operating Officer (Legal) at Santander, bringing senior-level operational and legal perspective on governance and accountability.
Outputs and next steps
Insights from the research will be shared through a published report, supported by articles, briefings, and events bringing together practitioners, academics, and policymakers.
For further details, visit the research webpage.