The EU AI Act’s Prohibited Practices: What Gets Banned and Why

The EU AI Act Takes Effect: A Watershed Moment for Regulation

On 2 February 2025, the EU AI Act’s prohibited practices provisions came into full force. For those working in technology, this moment represents something genuinely significant. We’re witnessing the first major regulatory framework that doesn’t just require compliance—it fundamentally reshapes what artificial intelligence can and cannot do within one of the world’s largest economic blocs.

I’ve spent the last decade working with emerging technologies, building ventures that sit at the intersection of innovation and responsibility. What strikes me most about the EU AI Act isn’t that it’s overly restrictive. It’s that it’s remarkably specific about where it draws lines. The regulations didn’t emerge from a vacuum. They reflect genuine concerns about how AI systems can cause real harm to real people.

This isn’t theoretical scaremongering. The practices now banned were already being deployed, often without transparency or proper scrutiny. The EU has made a deliberate choice: certain applications of AI are incompatible with human dignity and rights, regardless of how efficient or profitable they might be.

What Exactly Is Prohibited Under the Act?

The prohibited practices list is surprisingly focused. The EU didn’t ban AI itself. It banned specific deployments that posed identifiable harms. Understanding this distinction matters enormously.

Social scoring systems are at the top of the list. These are AI systems that rate individuals or groups based on their social behaviour or personal characteristics, where the resulting score leads to detrimental treatment in contexts unrelated to the one in which the data was collected, or to treatment disproportionate to the behaviour itself. Think about what this means in practice: a system that assigns you a trustworthiness score, then uses that score to restrict your opportunities in entirely unconnected areas of your life. The EU recognises this as a form of digital authoritarianism. Whether deployed by government or private companies, social scoring systems create perverse incentives and erode human autonomy. If you’re scored as unreliable, you lose opportunities. That score might be wrong, opaque, or based on discriminatory patterns in the training data. You may never know why you’ve been rated as you have.

Certain biometric categorisation systems are prohibited. This covers AI that attempts to infer sensitive characteristics (race, political opinions, trade union membership, religious beliefs, or sexual orientation) from biometric data such as facial features, voice, and gait. The reasoning here is straightforward: AI systems are often inaccurate with biometric categorisation, they reflect historical biases in their training data, and they enable discrimination at scale. Companies have been caught using these systems to discriminate in hiring, lending, and access to services.
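It's worth pausing on what "discrimination at scale" means numerically. Here is a minimal sketch of the base-rate arithmetic; the prevalence, sensitivity, and specificity figures below are illustrative assumptions of mine, not measurements of any real system:

```python
# Illustrative base-rate arithmetic; every figure here is an assumption.
prevalence = 0.02    # assumed share of people with the inferred characteristic
sensitivity = 0.95   # assumed true-positive rate of the classifier
specificity = 0.95   # assumed true-negative rate of the classifier

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
precision = true_positives / (true_positives + false_positives)

print(f"Share of flagged people actually in the category: {precision:.0%}")
# ~28%: a nominally '95% accurate' system is wrong about roughly
# seven in ten of the people it labels.
```

Run against millions of users, those errors land on real people, which is precisely the scale problem the ban addresses.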

Emotion recognition in the workplace and education settings is banned, with a narrow carve-out for systems deployed for medical or safety reasons. Deploying AI to detect emotions in employees or students—ostensibly to gauge engagement or compliance—represents a troubling form of surveillance. It assumes machines can accurately read inner emotional states, which is scientifically questionable. More importantly, it normalises the monitoring of workers’ and students’ emotional responses in real time. This chills authentic behaviour and creates asymmetric power dynamics.

Predictive policing systems targeting individuals are prohibited. These AI systems attempt to forecast which specific people will commit crimes, then enable preventive intervention against them. The problem is glaring: if your algorithm is trained on historical crime data that reflects discriminatory policing patterns, you’re automating discrimination. You’re also deploying interventions against people who have done nothing wrong, based on algorithmic predictions that may be unreliable. Pattern-based area policing remains permissible under the Act, but predicting an individual’s criminality based solely on profiling or personality traits is out.
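To make the training-data problem concrete, here is a toy simulation with rates I've invented purely for illustration: two neighbourhoods with identical underlying offence rates, but historical patrols that detect offences far more often in one of them. Any model fitted to the resulting records inherits a disparity that was never in the underlying behaviour:

```python
import random

random.seed(0)

# Assumed identical true offence rate in both neighbourhoods.
TRUE_OFFENCE_RATE = 0.05

# Hypothetical patrol intensity: offences in A are three times more
# likely to enter the records than offences in B.
DETECTION_RATE = {"A": 0.9, "B": 0.3}

def recorded_rate(neighbourhood: str, residents: int = 100_000) -> float:
    """Return the recorded (not true) offence rate for a neighbourhood."""
    recorded = 0
    for _ in range(residents):
        offended = random.random() < TRUE_OFFENCE_RATE
        if offended and random.random() < DETECTION_RATE[neighbourhood]:
            recorded += 1
    return recorded / residents

for hood in ("A", "B"):
    print(f"Neighbourhood {hood}: recorded offence rate {recorded_rate(hood):.2%}")

# Prints roughly 4.5% for A and 1.5% for B: a threefold gap produced
# entirely by detection intensity, not behaviour. A model trained on these
# records learns the gap and directs more attention to A, reinforcing the loop.
```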

Untargeted scraping of facial images to build facial recognition databases is prohibited. Before the Act took effect, companies freely harvested facial images from the internet or CCTV footage to train biometric systems, without consent from the people whose faces were captured. The scale of this practice is staggering: billions of images scraped and processed without permission. The ban closes this loophole decisively. Rounding out the Article 5 list are prohibitions on manipulative and exploitative AI techniques that cause significant harm, and tight restrictions, with narrow law-enforcement exceptions, on real-time remote biometric identification in publicly accessible spaces.

The Literacy Requirements That Come With Enforcement

Alongside the prohibited practices, the Act introduces mandatory AI literacy obligations. Providers and deployers must ensure that the people operating their AI systems understand how those systems work, what they’re supposed to do, and what they shouldn’t do, and providers must document their systems transparently. This isn’t mere compliance theatre: it’s about building institutional knowledge that actually prevents misuse.

For organisations deploying AI, this means investing in training. Your team needs to understand the legal boundaries. You need documented risk assessments. You need clear lines of accountability for what your AI systems do. These requirements genuinely separate companies that are serious about responsible AI from those treating compliance as a checkbox.
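What a documented risk assessment with clear accountability looks like in practice is left to each organisation. As one possible starting point, here is a minimal sketch of a per-system record; the field names and values are my own illustration, not anything the Act prescribes:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """Illustrative per-system compliance record; fields are not mandated by the Act."""
    name: str
    intended_purpose: str
    risk_category: str              # e.g. "minimal", "limited", "high"
    prohibited_use_review: str      # outcome of screening against the Article 5 bans
    accountable_owner: str          # a named role or person, not a team alias
    last_assessed: date
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example entry.
record = AISystemRecord(
    name="cv-screening-assistant",
    intended_purpose="Rank job applications for recruiter review",
    risk_category="high",
    prohibited_use_review="No Article 5 practice identified; reviewed Feb 2025",
    accountable_owner="Head of Talent Systems",
    last_assessed=date(2025, 2, 1),
    known_limitations=["Not validated on non-English CVs"],
)
print(record.name, "->", record.accountable_owner)
```

The design point is less the specific fields than the discipline: every deployed system has a stated purpose, a screening outcome, and a named person answerable for it.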

What This Means for UK Companies and the Broader Market

If your company sells products or services into the EU market, the prohibited practices ban applies to you. This isn’t optional. The UK may be outside the EU now, but the EU market remains enormously important, and many UK tech companies have built their business models partly around it. Some will need to fundamentally rethink how they deploy AI.

There’s an interesting secondary effect here: once you build a product that complies with EU standards, it often makes sense to deploy that version globally. The EU is large enough that compliance with its rules frequently becomes the path of least resistance, the so-called Brussels effect. This means the EU AI Act is reshaping global AI practices, not just European ones.

At Nexatech Ventures, we’re tracking how venture-backed AI companies respond to these changes. Some founders view the regulations with suspicion, seeing them as anti-innovation. I see them differently. The companies that figure out how to build genuinely useful AI systems within these constraints are the ones that will have sustainable business models. The rules aren’t eliminating enormous value—they’re eliminating practices that eroded trust.

How We Approach Compliance at Nexatech

Our investment thesis at Nexatech emphasises responsible AI development. That’s not just moral philosophy—it’s sound business practice. Startups that build AI systems with compliance built in from day one avoid costly refactoring later. They avoid regulatory enforcement actions. They avoid the reputational damage that comes from discovering you’ve been deploying prohibited systems.

When we evaluate AI companies for investment, we now ask pointed questions about their approach to high-risk deployments. How are you handling bias testing? What’s your documentation process? How are you engaging with users of your systems to ensure transparency? These questions separate founders who’ve thought seriously about responsible development from those cutting corners.
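The bias-testing question, in particular, has concrete answers. One simple starting point, and deliberately only one of many checks a serious team would run, is comparing selection rates across groups in the spirit of the four-fifths rule; the data, group labels, and threshold below are illustrative assumptions:

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in outcomes:
        totals[group] += 1
        selected[group] += chosen
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """True if the lowest selection rate is at least `threshold` of the highest."""
    return min(rates.values()) >= threshold * max(rates.values())

# Hypothetical screening outcomes: (group label, was the candidate shortlisted?)
outcomes = (
    [("x", True)] * 40 + [("x", False)] * 60
    + [("y", True)] * 25 + [("y", False)] * 75
)

rates = selection_rates(outcomes)
print(rates)                      # {'x': 0.4, 'y': 0.25}
print(four_fifths_check(rates))   # False: 0.25 < 0.8 * 0.4, so investigate
```

A failed check isn't proof of unlawful discrimination, but it is exactly the kind of documented, repeatable evidence a founder should be able to produce when asked.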

The prohibited practices themselves are straightforward to avoid if you’re thoughtful about what you’re building. You don’t need to be deploying social scoring or facial scraping to create enormous value with AI. The companies building the most successful AI products are solving genuine problems: improving diagnoses, optimising logistics, helping businesses make better decisions. These applications aren’t threatened by the EU AI Act. The threatened applications are often the ethically questionable ones that were probably going to face regulatory action eventually anyway.

What Comes Next

The EU AI Act is phase one of a broader regulatory shift. The United States is developing its own approach, and the UK continues to shape its framework. Countries globally are recognising that AI deployment can’t remain entirely unregulated.

What matters now is implementation. The principles are clear, but enforcement will determine whether the Act is genuinely transformative or merely performative. The EU has established an AI Office to oversee compliance. There will be fines for violations; for prohibited practices, these can reach €35 million or 7% of global annual turnover, whichever is higher. There will be test cases. The regulatory landscape will clarify through practice.

For organisations operating across multiple jurisdictions, the smart move is to treat the EU standard as the baseline. Build for compliance there, and you’re building for the regulatory future everywhere else. The companies that recognised this early are already ahead.

Related reading: How Emotional AI Claims to Read Your Feelings — and Why It Probably Can’t, Developing Ethical Frameworks for AI Implementation, and What is information communication technology ict: A concise guide to ICT basics.



Written by
Scott Dylan