Sam Green is Senior Policy and Partnerships Lead at Zango. Before joining, he led policy teams in the UK Government's Department for Science, Innovation and Technology developing the UK’s AI regulatory framework and online safety legislation.
During my time shaping AI regulatory policy in the UK Government, I spent a lot of time thinking about the huge opportunities of AI - and the risks it poses if deployed without the right checks and balances.
It’s easy to see why banks and other financial institutions are racing to adopt AI. If they don’t, they risk losing out to faster, more innovative players.
While challenger banks have innovated in the front-end - creating slick, intuitive apps - the next wave of financial innovation is happening behind the scenes. AI will transform back-end processes, with autonomous AI agents carrying out the grunt work to streamline mortgage and loan applications, complete ID checks, and automate contracts end-to-end.
But innovating fast comes with risks. Scale AI without strong safeguards, and small errors can turn into systemic failures - opening the door to fraud, costly compliance breaches, and shattered customer trust. With a recent report showing that more than half of people are wary of trusting AI, public confidence is already fragile.
Of course, governments and regulators around the world have a vital role to play. We’ve already seen the EU legislate with the AI Act, and national regulators such as the UK’s Financial Conduct Authority (FCA) have signalled that AI risks are a priority. More global regulations are inevitably coming down the track as the technology continues to evolve.
But ultimately, companies themselves are on the hook for adopting AI responsibly. They need to be able to demonstrate how they are using AI, and be ready to explain - both to the public and to regulators - the steps they are taking to manage the risks. This is true of compliance more than any other business function: it is rife with manual processes, ripe for AI automation, and yet has seen minimal tech disruption to date.
Every new wave of technology needs responsible governance, and with AI, the stakes are higher than ever.
That’s why I’m excited to join Zango. We’re building the AI trust and governance layer for regulatory compliance. Just as antivirus software operates quietly in the background to monitor digital threats, our system of AI agents scans the regulatory landscape, interprets new rules, flags gaps and vulnerabilities, and suggests fixes so that compliance teams can anticipate risk and turn regulation into a competitive advantage.
As we look to the next phase of innovation in finance, it’s never been more important to give institutions a strong foundation to deploy AI safely and responsibly.