Europe: horizontal, statute-based regulation
The European Union has adopted a centralised, horizontal approach to AI regulation through the EU AI Act, the world’s first comprehensive, binding AI framework. The AI Act applies across the economy, imposing obligations on developers and deployers of AI systems according to risk tiers.
This approach delivers legal certainty and harmonisation across Member States. However, it has also been criticised for its complexity, cost, and compliance burden - particularly for fast-moving, novel, or borderline use cases that sit uneasily within predefined risk categories.
In response, the European Commission has proposed a “Digital Omnibus on AI”, introducing targeted amendments to simplify implementation and reduce friction as the Act’s obligations begin to apply.
Key proposed changes to the EU AI Act include:
- High-risk AI timelines extended until supporting standards and Commission guidance are available, easing immediate compliance pressure.
- Lower burden for borderline systems - procedural or narrow-use AI no longer requires registration in the EU high-risk database (documentation still required).
- AI literacy training obligation removed - responsibility shifts from regulated entities to Member States and the Commission.
- AI Office gains stronger oversight powers over systems built on general-purpose AI models and over AI embedded in very large platforms.
- Regulatory sandboxes strengthened - wider real-world testing allowed; cross-border sandboxes expanded; and an EU-level sandbox established.
While the AI Act is legally in force and early provisions already apply (including bans on practices such as social scoring by public authorities), implementation is phased and will continue to roll out through 2026-2027.
United Kingdom: context-based, sector-led regulation
The UK has avoided a single, horizontal AI statute. Instead, as set out in the Conservative Government’s 2023 white paper, it operates a context-based approach that integrates AI oversight into existing regulatory frameworks, tailored to how and where AI is used.
There are benefits to the UK’s context-based approach. It allows domain-expert regulators to oversee the use of AI in their respective sectors, avoids the burdens that the EU AI Act’s approach imposes on regulated entities, and reduces the risk that the framework rapidly becomes outdated as the technology evolves.
But there are also drawbacks. AI is a general-purpose technology, and its applications will increasingly engage multiple regulatory frameworks, which could lead to tensions. For example, an autonomous drone developer may face conflicting regulatory expectations between data protection rules (which emphasise data minimisation) and aviation safety requirements (which may incentivise continuous recording) - leaving firms unsure how to remain compliant.
In the run up to the 2024 general election, Labour made a manifesto commitment to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”.
However, now in Government, Labour has yet to present such legislation to Parliament. In a recent Select Committee appearance, the Technology Secretary said that she is “especially worried” about the risk of young people forming unhealthy relationships with generative AI tools. But when pushed on an AI Bill, she emphasised the need for measures to maximise growth and deal with regulatory issues.
In short, a comprehensive AI regulatory framework looks unlikely in the near term - though AI chatbots may be targeted via updated online safety legislation. Instead, the UK government appears focused on pro-growth AI interventions, such as proposals for an AI Growth Lab - a cross-economy sandbox to test and roll out AI products and services in a more permissive regulatory environment - which is currently under consultation.
United States: fragmented, state-led regulation
The United States has no comprehensive federal AI law. Instead, AI regulation is emerging through a highly fragmented, state-led landscape, alongside federal agency guidance and enforcement under existing laws.
More than 1,000 AI-related bills have been introduced at state level, and while many have not progressed, several states have enacted legislation that directly affects AI development and deployment:
- Colorado: From February 2026, SB24-205 (Consumer Protections for Artificial Intelligence) will require developers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.
- Utah: The Artificial Intelligence Policy Act (SB149), in force since May 2024, requires transparency when generative AI is used in consumer-facing interactions, particularly in regulated or high-risk contexts.
- Texas: HB149, passed in June 2025, restricts harmful or deceptive uses of AI, places limits on certain biometric and automated decision-making practices, clarifies that deployers remain responsible for AI-driven harms, and requires transparency around government use of AI.
- California: A package of AI-related laws has been adopted, including SB53 which requires frontier model developers to implement frameworks to manage catastrophic risk and disclose information about advanced models.
Concerned that this patchwork threatens US leadership on AI, President Donald Trump signed an Executive Order in December aimed at blocking states from enforcing their own AI regulations.
In practice, however, executive orders do not create new law or override state legislation. States retain constitutional authority to legislate unless Congress explicitly pre-empts them or courts intervene.
As a result, the US is likely to continue along a path of decentralised, state-driven AI regulation, creating significant compliance complexity for firms operating nationwide.
Singapore: supervisory, capability-driven oversight
Singapore has not enacted comprehensive AI legislation. Instead, it has developed a supervisory-led approach, most notably through proposals by the Monetary Authority of Singapore (MAS) for AI risk management guidelines for financial institutions.
Although not legislation, the MAS proposals are among the most operationally detailed supervisory frameworks currently under consultation. They set clear expectations around AI inventories, risk classification, lifecycle controls, accountability structures, and ongoing monitoring.
In contrast to the EU AI Act’s predefined risk categories, MAS is proposing a dynamic, context-sensitive approach that assesses AI risk based on impact, complexity, and the degree to which systems are embedded in critical business workflows.
If finalised, the guidelines would establish supervisory expectations for:
- accountability for third-party and externally developed AI models;
- periodic re-validation of higher-risk AI systems; and
- clearly defined override and escalation mechanisms for critical use cases.
A key implication of the proposals is that AI governance increasingly depends on organisational capability. Effective oversight requires skills that extend beyond traditional control functions, including:
- assessing AI risk materiality in context, rather than by category;
- understanding how data quality, bias, and fairness interact in practice;
- interpreting and challenging explainability outputs;
- interrogating model design and deployment choices; and
- supervising third-party and agentic AI systems where direct control is limited.
A diverging global regulatory landscape
Across jurisdictions, AI regulation is diverging. The EU has anchored oversight in statute and ex ante risk classification; the UK has prioritised sector-led flexibility; the US is evolving through fragmented, state-driven intervention; and Singapore is advancing a supervisory model centred on continuous risk assessment and governance capability. Firms are left to translate these fundamentally different regulatory models into operational practice.
