
AI Regulation: Why the UK Must Find Its Own Path Between the US and EU


One of the defining questions in AI policy right now is how to create effective regulation without killing innovation. This isn’t a purely academic question. It’s a question with real consequences for whether the UK remains a centre of AI innovation, whether British startups can compete globally, whether the UK can maintain technological sovereignty, and whether Brits benefit from economic opportunities in AI. The United States and the European Union have taken very different approaches to AI regulation. The US has been relatively permissive, relying on light-touch regulation and sector-specific rules. The EU has moved toward comprehensive regulation through the AI Act, applying prescriptive rules across all AI applications. The UK has chosen a different path: a pro-innovation approach focused on principles and sector-specific regulation rather than comprehensive rules. This post examines what’s at stake in these different approaches, what the UK’s strategy looks like, what business implications flow from the UK’s choices, and what the future of UK AI regulation should be as the policy landscape matures.

The Regulatory Landscape: Three Approaches

Let’s start by understanding the three major regulatory approaches to AI, as embodied by the US, EU, and UK. The US approach is characterised by relative permissiveness and light-touch regulation at the federal level. There are some sector-specific regulations—healthcare AI must comply with FDA rules, financial services AI must comply with banking regulation, etc. But there’s no comprehensive AI regulation. The US relies on general consumer protection law, free speech protections, and the assumption that market competition and corporate self-regulation will constrain problematic applications. The approach is largely motivated by concern that heavy regulation would stifle innovation and allow other countries to overtake the US in AI capability. The downside is that companies have significant freedom to deploy AI in ways that might be problematic ethically or economically. The upside is that innovation moves quickly and American companies often get to market faster than competitors.

The EU approach, codified in the AI Act, is prescriptive and comprehensive. The Act creates a risk-based framework: certain practices (such as social scoring by public authorities) are prohibited outright; high-risk applications (those affecting employment, education, credit, criminal justice, biometric identification) face strict requirements including impact assessments, data governance, transparency, and human oversight; limited-risk applications face mainly transparency obligations; and minimal-risk applications face few requirements. All AI applications covered by the Act must comply with the rules for their tier. The motivation is to create clear rules so everyone knows what’s required, and to prevent the most problematic applications through regulation rather than relying on corporate responsibility. The downside is that the rules are complex and prescriptive, which increases compliance costs and can slow innovation. Startups particularly struggle with compliance burdens, and there is concern within the EU about whether the regulation is driving AI innovation to other jurisdictions.
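To make the tiering logic concrete, here is a minimal Python sketch of risk-based triage. It is purely illustrative: the tier names mirror the Act’s broad categories, but the mapping of use cases to tiers and the classify function are simplified assumptions of mine, not a reading of the statute.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g. social scoring by public authorities
    HIGH = "high"              # e.g. employment, credit, biometric ID
    LIMITED = "limited"        # mainly transparency obligations
    MINIMAL = "minimal"        # few requirements

# Simplified, assumed mapping: the Act defines these categories
# in far more detail and with many qualifications.
PROHIBITED_USES = {"social_scoring"}
HIGH_RISK_USES = {
    "recruitment", "education", "credit_scoring",
    "criminal_justice", "biometric_identification",
}

def classify(use_case: str, interacts_with_humans: bool = False) -> RiskTier:
    """Rough risk-based triage. Illustrative only, not legal advice."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED  # e.g. a chatbot must disclose it is AI
    return RiskTier.MINIMAL

print(classify("recruitment"))             # RiskTier.HIGH
print(classify("customer_chatbot", True))  # RiskTier.LIMITED
```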

The UK approach, still being developed, aims for a middle path. Rather than creating a single comprehensive regulation, the UK is developing a framework where sector-specific regulators such as the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), and the Competition and Markets Authority (CMA) are responsible for overseeing AI in their domains. This is called the ‘pro-innovation’ approach because it’s intended to allow innovation to proceed whilst sector-specific oversight ensures responsible development. The theory is that sector-specific regulators understand their domains and can create proportionate regulation that’s more flexible and adaptable than comprehensive rules. The downside is that coordination between regulators is necessary but sometimes difficult, and companies might face inconsistent requirements across regulators.

The UK’s DSIT and Pro-Innovation Approach

The UK’s approach to AI regulation is guided by the Department for Science, Innovation and Technology (DSIT), which has taken an explicitly pro-innovation stance. DSIT’s position is that rather than creating heavy regulatory barriers, the UK should position itself as the global leader in responsible AI innovation. The idea is that companies developing AI responsibly will thrive, and the UK’s regulatory clarity (even if light-touch) will attract talent and investment. Rather than requiring advance approval for AI systems, the UK approach relies on sector-specific oversight. If you’re building AI for healthcare, the Medicines and Healthcare products Regulatory Agency (MHRA) provides oversight. If you’re building AI for finance, the FCA provides oversight. If you’re building AI for employment, employment law and the Equality and Human Rights Commission (EHRC) provide oversight.

Underpinning this is the UK’s AI regulation white paper, which articulates five cross-sectoral principles that AI should embody: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Rather than prescribing exactly how companies must achieve these principles, the UK approach trusts companies to figure out how to embody them within their sectors, with regulators providing guidance and oversight. The advantage of this approach is that it allows flexibility and innovation. A company developing a novel AI application isn’t blocked by not fitting neatly into existing regulatory categories. Regulators can engage with companies on what responsible development looks like in their specific context, rather than applying one-size-fits-all rules. The disadvantage is that it requires companies to understand and comply with multiple different regulatory regimes, and it’s less clear in advance what will be required.

The EU AI Act: Prescriptive but Potentially Problematic

The European Union’s AI Act represents the most prescriptive approach to AI regulation in the world. It entered into force in August 2024, with its obligations phasing in over the following years, and creates strict requirements for high-risk AI applications. What counts as high-risk? AI used in recruitment, educational evaluation, credit decisions, criminal justice, biometric identification, border control, and law enforcement more broadly. These are areas where AI decisions significantly affect people’s lives. For high-risk applications, the Act requires impact assessments before deployment, documentation of how the system works, data governance and quality standards, transparency, human oversight capabilities, post-deployment monitoring, and reporting of serious incidents.
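One way to picture how these duties stack up is as a pre-deployment checklist. The sketch below is an illustration under loose assumptions: the field names paraphrase the obligations just listed rather than quoting the Act’s legal terminology.

```python
from dataclasses import dataclass

@dataclass
class HighRiskObligations:
    """Illustrative checklist of high-risk duties (paraphrased, simplified)."""
    impact_assessment_done: bool = False
    system_documented: bool = False
    data_governance_in_place: bool = False
    transparency_notice_published: bool = False
    human_oversight_enabled: bool = False
    post_market_monitoring_set_up: bool = False
    incident_reporting_process_ready: bool = False

    def outstanding(self) -> list[str]:
        """Names of obligations not yet satisfied."""
        return [name for name, done in vars(self).items() if not done]

checklist = HighRiskObligations(impact_assessment_done=True)
print(checklist.outstanding())  # six items left before the product can ship
```

Even this toy version suggests why a small company’s first sale can stall behind compliance work.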

The theory behind the AI Act is sensible: high-risk applications should face stringent requirements because the consequences of failure are serious. The problem is that the requirements are quite prescriptive and can be expensive to comply with, particularly for startups. A small AI company building a hiring tool must now conduct impact assessments, document systems, maintain data quality, implement human oversight—all before they can sell the product. These compliance costs are real and significant. They don’t necessarily prevent companies from complying, but they do create barriers to entry that favour large companies with compliance infrastructure over startups. There’s some concern in the EU and among European tech companies that the AI Act, whilst well-intentioned, might inadvertently drive AI innovation and talent to other jurisdictions with lighter regulation.

Additionally, the definition of high-risk is somewhat expansive in the AI Act. A company building AI for any recruitment or educational application falls into the high-risk category, even if the AI is only used as a screening tool that still involves human decision-making. The prescriptiveness of the requirements can lead to compliance that’s bureaucratic rather than focused on actual responsibility. A company might tick all the boxes for compliance without genuinely thinking about whether their application is problematic or whether their approach is responsible. This is the risk of prescriptive regulation—it can lead to compliance theatre rather than genuine responsibility.

US Light-Touch Approach and Its Consequences

The United States approach to AI regulation is more permissive. At the federal level, there’s no comprehensive AI regulation. Instead, various agencies have issued guidance and some AI-related regulations in their specific domains. The FDA has issued guidance on AI in medical devices. The FTC has issued guidance on AI and consumer protection. Various banking regulators have issued guidance on AI in financial services. But there’s no overarching federal AI regulatory framework. This permissiveness has enabled rapid innovation in the US. American AI companies have moved faster, deployed more broadly, and developed capabilities that companies in more heavily regulated jurisdictions have struggled to match. The US has maintained a lead in large language models and frontier AI research partly because of the permissive regulatory environment.

The downside of the light-touch approach is that there have been genuine harms that go unregulated. Companies have deployed biased AI systems in hiring and criminal justice that disadvantage minorities. There have been cases where AI systems caused clear harms and the companies had minimal regulatory accountability because they weren’t operating in a sector with specific AI regulation. There have been recommendations from US government agencies and from AI safety researchers that some additional guardrails are needed, but political consensus on what those guardrails should be remains elusive. The light-touch approach works well for enabling innovation, but it can create situations where harmful applications develop with minimal oversight.

The UK Position: Can a Middle Path Work?

The UK’s pro-innovation approach attempts to split the difference between the EU’s prescriptiveness and the US’s permissiveness. The idea is that sector-specific regulation, applied by regulators who understand their domains, can constrain the worst harms without creating blanket barriers to innovation. A regulator like the FCA, which already regulates financial services, can issue guidance on AI in financial services that’s proportionate and informed by actual domain knowledge. The ICO, which regulates data protection, can address data governance issues in AI systems. The CMA, which oversees competition and consumer protection, can address concerns about fairness and market concentration in AI. Rather than one agency developing one rule for all AI, different agencies with different expertise develop guidance for their domains.

The theory is sound, but execution is complex. First, it requires significant coordination between regulators to avoid inconsistency. If the FCA requires certain transparency practices for AI in finance, but the ICO requires different practices for data governance, and these requirements conflict, companies face impossible situations. The DSIT has established frameworks for coordination, but it remains to be seen whether coordination works smoothly in practice. Second, it requires companies to navigate multiple regulatory regimes depending on where their AI is applied. A company building general-purpose AI might need to understand requirements from multiple regulators depending on who purchases their system and for what purpose. This creates compliance complexity, though it’s less than under the EU AI Act because requirements are less prescriptive.

Third, it requires regulatory agencies to have AI expertise and to develop guidance proactively. If a new application of AI emerges that no existing regulator has considered, there’s a gap. The FCA knows about financial services but might not immediately understand implications of novel AI applications. The ICO knows about data protection but might not understand implications of fairness in machine learning. Regulatory agencies need to develop expertise and stay ahead of technology, which requires investment and intentional effort. Fourth, the approach relies on corporate responsibility to some degree. If companies are developing AI responsibly and within regulatory guidance, the system works. But if a company ignores guidance and proceeds recklessly, enforcement becomes necessary. The UK’s pro-innovation approach assumes more corporate responsibility than the EU approach does, which might be optimistic.

The Bletchley Declaration and International Coordination

Beyond domestic regulation, the UK has been active internationally in AI governance. The Bletchley Declaration, signed in November 2023 at the UK-hosted AI Safety Summit at Bletchley Park, represented an attempt to establish shared principles for AI governance. The Declaration included commitments to tackle AI risks, to maintain human agency and oversight, and to develop appropriate governance mechanisms. It was notable for bringing together countries with very different approaches: the EU with its prescriptive regulation, the US with its light-touch approach, and China and other countries with different governance models. Whilst the Declaration didn’t create binding rules, it represented agreement that AI governance is a global concern and that countries should coordinate.

The UK’s role in the Bletchley Declaration reflects its position as trying to bridge different approaches internationally. The UK can’t impose its pro-innovation approach on the EU, which has already passed the AI Act. But the UK can advocate for an approach that emphasises sectoral regulation over comprehensive rules. It can share its experience with sector-specific approaches and work with other countries developing their own approaches. It can attempt to coordinate at the international level so that companies don’t face contradictory requirements from different jurisdictions. This diplomatic work is less visible than regulation, but it’s important for shaping the global AI governance landscape.

Implications for UK AI Companies and Startups

The UK’s regulatory approach has significant implications for companies developing and deploying AI. For UK-based startups, the pro-innovation approach should theoretically create a friendlier environment than the EU AI Act. A startup developing a novel AI application should be able to engage with relevant regulators, understand requirements in their specific domain, and move forward with clearer timelines than under prescriptive rules. This is the case in theory, though in practice there can be uncertainty when applications are novel. The lack of explicit guidance means a startup must sometimes engage with multiple regulators to understand what’s required, which creates delays.

For companies operating across multiple jurisdictions—which most successful AI companies eventually do—the regulatory fragmentation is more complex. A company starting in the UK with a pro-innovation regulator, expanding to the EU where the AI Act applies, and expanding to the US where light-touch regulation applies, faces very different requirements in different places. The most conservative approach is to build to the highest standards (EU requirements) and apply those globally. This provides regulatory certainty but increases compliance costs. A more opportunistic approach is to vary the product or deployment by jurisdiction, which adds complexity but lowers costs. Most companies will probably take some middle path, building reasonably responsible systems and adjusting for major regulatory requirements where necessary.
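In code terms, the ‘build to the highest standards’ strategy is just the union of every target jurisdiction’s requirements. A toy sketch, where the requirement labels are my own assumptions rather than an authoritative reading of any regime:

```python
# Assumed requirement sets per jurisdiction; illustrative labels only.
REQUIREMENTS: dict[str, set[str]] = {
    "EU": {"impact_assessment", "human_oversight", "data_governance", "transparency"},
    "UK": {"sector_regulator_guidance", "transparency"},
    "US": {"sector_regulator_guidance"},
}

def highest_standard(jurisdictions: list[str]) -> set[str]:
    """Union of all requirements: build once, comply everywhere."""
    baseline: set[str] = set()
    for j in jurisdictions:
        baseline |= REQUIREMENTS[j]
    return baseline

print(sorted(highest_standard(["UK", "EU", "US"])))
# ['data_governance', 'human_oversight', 'impact_assessment',
#  'sector_regulator_guidance', 'transparency']
```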

The Challenge of Sector-Specific Regulation

One of the challenges with the UK’s sector-specific approach is that many AI applications cut across multiple sectors, and existing sectors don’t have clear regulatory homes for AI. Consider a company developing general-purpose large language models. These models could be used in healthcare (MHRA oversight?), finance (FCA oversight?), employment (the EHRC and equality law?), general consumer use (CMA and ICO?). Which regulator has responsibility? It’s not clear. The DSIT framework attempts to assign responsibility based on the primary use, but for general-purpose models that could be used in many ways, this assignment is ambiguous. Different regulators might have different views on what’s required. The company might need to engage with multiple regulators and navigate inconsistent guidance.

Additionally, sector-specific regulators are typically focused on their domain, not on AI innovation broadly. A financial regulator cares primarily about financial stability and consumer protection in finance. They might regulate AI conservatively to protect their domain, even if this slows AI innovation in finance. A healthcare regulator cares about patient safety and will require rigorous validation of medical AI. These are appropriate concerns for the regulators, but they’re sectoral concerns rather than AI-innovation concerns. The result can be sectoral over-caution that slows innovation more than necessary, combined with gaps where no sector clearly has responsibility. The UK approach relies on DSIT playing a coordinating role, but DSIT is a relatively new department and doesn’t have the enforcement power of sector regulators.

What Good AI Regulation Actually Needs to Achieve

Before evaluating regulatory approaches further, let’s clarify what AI regulation should actually be trying to achieve. First, it should prevent clear harms. If an AI system is likely to cause serious injury or death, regulation should prevent or substantially constrain it. AI systems with clear potential for discrimination in high-stakes domains should be subject to fairness testing and oversight. This is the harm-prevention goal. Second, regulation should create transparency and accountability. If an AI system makes decisions affecting people, those people should understand how and why decisions were made, and there should be clear responsibility when things go wrong. This is the accountability goal.

Third, regulation should enable responsible innovation. Rather than shutting down all AI applications that aren’t yet fully understood, regulation should create pathways for responsible development and deployment. This might involve testing requirements, monitoring requirements, ability to modify or withdraw systems if they’re not working as expected. The goal is not to prevent innovation but to create conditions where innovation happens responsibly. Fourth, regulation should be proportionate—requiring more stringent oversight for high-risk applications and less stringent oversight for lower-risk applications. Not all AI requires the same level of regulation. A chatbot providing customer service doesn’t need the same level of oversight as an AI system making criminal justice decisions. Proportionate regulation creates appropriate incentives.

Fifth, regulation should be adaptive. AI is developing rapidly and new applications and concerns emerge constantly. Regulation needs to be flexible enough to address new issues without requiring complete overhauls. Static regulation written today will become obsolete as technology evolves. Good regulation creates frameworks and principles that can evolve, rather than prescribing specific technical requirements that become outdated. Sixth, regulation should be consistent globally to the extent possible. When different jurisdictions have radically different requirements, it creates problems for companies operating globally and creates opportunities for regulatory arbitrage where companies move operations to minimise regulation. Complete global consistency is probably impossible, but alignment on principles and similar approaches would be valuable.

Evaluating the Three Approaches Against These Criteria

Against these criteria, how do the three approaches perform? The US light-touch approach does well on enabling responsible innovation (few barriers) and being adaptive (flexible), but poorly on harm prevention (limited oversight) and accountability (unclear responsibility). The EU prescriptive approach does well on harm prevention (strict rules) and accountability (clear requirements), but less well on enabling innovation (barriers and costs) and adaptability (prescriptive rules require amendments to change). The UK sector-specific approach theoretically does well on proportionality and transparency, and reasonably well on enabling innovation, but requires strong coordination to achieve consistency and relies on sector regulators having expertise and commitment to AI oversight.

Each approach has genuine advantages and disadvantages. The question for the UK is whether it can implement its pro-innovation approach effectively enough to outweigh the disadvantages. This requires several things: strong coordination between sector regulators so guidance is consistent; sufficient investment in regulatory expertise so agencies understand AI and can provide thoughtful guidance; willingness to enforce where necessary to make clear that responsibility is required; and willingness to adapt as new issues emerge. If the UK can achieve these, the sector-specific approach might prove more effective than both the EU’s prescriptiveness and the US’s permissiveness. If coordination fails or agencies lack expertise, the sector-specific approach could leave gaps and inconsistencies.

The International Competitive Dimension

One of the factors driving regulatory approaches is competitive concern about whether a jurisdiction will fall behind in AI development. The US light-touch approach is motivated by concern that heavy regulation would allow China and other countries to develop AI faster. The EU AI Act is motivated by concern that the US is ahead in AI and regulation is needed to ensure ethical development, even if it slows innovation. The UK pro-innovation approach is motivated by belief that you can have both responsible development and rapid innovation, that being trusted and responsible is a competitive advantage. These competitive motivations are understandable but can sometimes distort good regulation.

The reality is that China and other countries are also developing AI rapidly and are not constrained by the same regulation as Western countries. If the West over-regulates and slows innovation, it might indeed fall behind. However, if the West under-regulates and develops AI that’s unreliable, biased, or harmful, it might lose social licence and face crises. The competitive calculation is complex. The UK’s approach of trying to be both responsible and pro-innovation is reasonable, but it requires acknowledging that there might be genuine trade-offs. Trying to achieve both without accepting trade-offs can lead to insufficient regulation and innovation-without-oversight. True pro-innovation responsible development requires making difficult choices about what kinds of innovation to enable and what kinds to constrain.

The Role of Self-Regulation and Corporate Responsibility

Both the UK pro-innovation approach and the US light-touch approach rely significantly on corporate self-regulation and responsibility. The idea is that companies will develop AI responsibly because it’s in their interest to do so. Companies want customer trust. Companies want to avoid regulatory crises. Companies want to recruit talent that cares about responsible development. These incentives should push companies toward responsible development. However, relying on self-regulation has obvious limitations. Some companies prioritise short-term profit over long-term responsibility. Some companies don’t fully understand risks in their AI systems. Some companies face competitive pressure to deploy quickly and might cut corners on safety. Self-regulation alone isn’t sufficient.

The most effective approach probably combines self-regulation incentives with regulatory oversight. Government should create frameworks and principles that companies should follow, provide guidance on how to implement them, but then audit and enforce to ensure compliance. Companies should invest in responsible development internally, knowing that external oversight is also happening. The UK’s sector-specific approach relies on this combination—companies are expected to develop responsibly, but sector regulators provide oversight and can enforce where necessary. If this works well, it creates better outcomes than either pure self-regulation or pure prescriptive regulation. But it requires regulatory commitment to actually oversee and enforce, not just issue guidance and hope companies comply.

Recommendations for UK AI Regulation Going Forward

My recommendation for the UK is to stick with the pro-innovation sector-specific approach but to strengthen implementation. First, increase investment in regulatory expertise. Sector regulators need to hire people with AI expertise and keep their knowledge current. This requires budget allocation and willingness to compete with private sector salaries. Second, establish clear coordination mechanisms and frameworks between regulators so guidance is consistent. The DSIT should facilitate regular coordination meetings and clear protocols for handling applications that span multiple sectors. Third, develop clear guidance from each sector regulator on what responsible AI development looks like in their domain, so companies know what’s expected. This guidance should be public and evolve as technology and understanding evolve.

Fourth, establish clear enforcement mechanisms and commit to enforcement where companies aren’t meeting standards. The approach only works if companies believe regulation is real and has consequences. If regulators issue guidance but never enforce, companies will ignore it. Fifth, remain engaged internationally and advocate for coordinated approaches that balance responsible development with innovation. The UK should work with the EU to ensure that companies operating in both jurisdictions don’t face completely contradictory requirements. Sixth, invest in research on AI risks and effectiveness of interventions, so regulation is based on evidence about what actually works. Seventh, establish clear procedures for when novel AI applications emerge and it’s unclear which regulator has responsibility. There should be a mechanism to assign responsibility and develop appropriate oversight quickly.

Nexatech’s Perspective on Regulation

From Nexatech’s perspective as an investor in AI companies, we believe that clear, proportionate regulation is actually beneficial for our investments. Companies that operate within clear regulatory frameworks and build responsible development into their practices are less likely to face crises. They’re more likely to maintain customer trust and employee morale. They’re less likely to face regulatory action that damages value. We’d prefer a UK regulatory environment that’s clear about what’s required rather than one that’s completely ambiguous. The pro-innovation approach is theoretically attractive, but only if it actually delivers clarity. If companies are uncertain about what regulators expect, that’s worse than clear requirements they need to meet.

We’d also prefer that the UK avoids becoming a regulatory haven for companies unwilling to meet standards elsewhere. Companies should come to the UK because it’s a centre of excellence for responsible AI innovation, not because it’s a place to evade oversight. Regulatory arbitrage—companies choosing jurisdictions specifically to avoid regulation—is bad for the UK long-term because it attracts companies without genuine commitment to responsibility, and it invites international criticism and potential trade friction. The UK should position itself as the jurisdiction with the best combination of clear regulation and support for responsible innovation, not as the jurisdiction with the lightest regulation.

The Future Shape of AI Governance

Looking ahead, I expect that AI regulation will continue to evolve across jurisdictions. The EU AI Act will likely be adjusted as experience with implementation reveals what works and what doesn’t. The US will eventually face pressure to develop more comprehensive federal regulation as harms from unregulated AI accumulate. China and other countries will develop their own approaches. The UK’s pro-innovation sector-specific approach will prove successful if it delivers on its promises—if companies have clarity on requirements, if innovation proceeds, if harms are prevented. If it proves successful, other jurisdictions might adopt similar approaches. If it proves unsuccessful and coordination fails or harms emerge, the UK might need to move toward more prescriptive regulation like the EU.

Ultimately, good governance of emerging technology requires balance: enough regulation to prevent serious harms and create accountability, but not so much regulation that innovation is strangled. Enough flexibility to adapt as technology evolves, but enough structure to provide clarity to companies and public. Enough coordination across jurisdictions to avoid regulatory chaos, but enough space for different approaches so good ideas can emerge. The UK’s pro-innovation sector-specific approach is a reasonable attempt to achieve this balance. The next few years will show whether it actually delivers. If it does, it could become a model for other jurisdictions. If it doesn’t, the UK might need to adjust its approach. The important thing is that we remain thoughtful, evidence-based, and adaptive as we learn what actually works in governing AI development and deployment.

Related reading: How Emotional AI Claims to Read Your Feelings — and Why It Probably Can’t, What is information communication technology ict: A concise guide to ICT basics and Improving Diagnostic Accuracy with AI Technologies.



Written by
Scott Dylan