
DeepSeek V4 and the Democratisation of AI: Why a Trillion Parameters Matter


The Moment We Realised the AI Landscape Had Changed

On 3 March 2026, DeepSeek released DeepSeek V4. The numbers alone are remarkable: one trillion parameters, which would make it one of the largest language models ever trained. But here’s the detail that matters more than the headline number—only 32 billion parameters are active per token. The other 968 billion are dormant until needed.

This architectural innovation changes something fundamental about how we should think about AI capability and deployment. And it’s happening in a Chinese startup competing against the world’s best-resourced tech companies, operating under Western chip restrictions designed to slow Chinese AI development.

I’ve spent years watching AI development closely. I’ve made investments, tracked progress, talked to researchers and entrepreneurs building at the frontier. The release of DeepSeek V4 represents a turning point in how I think about the competitive landscape and what’s actually possible in AI development.

Understanding the Trillion Parameter Architecture

Language models work by making predictions. They’re trained on enormous amounts of text, learning patterns in how language works. Larger models, with more parameters (the learned weights that encode what the model has extracted from its training data), generally perform better. They can learn more nuanced patterns, handle more complex tasks, understand context more deeply.

The prevailing model in AI development over the last few years has been simple: bigger is better, more parameters means more capability. Companies trained models with 70 billion parameters, then 100 billion, then 200 billion, competing on sheer scale. The largest models reached into the hundreds of billions of parameters.

DeepSeek V4’s innovation is architectural. Instead of a single large model where all parameters are active all the time, it uses a mixture-of-experts architecture. Think of it as a system with many specialist networks. When processing a particular token, only the relevant specialist networks activate. The others remain dormant. This means you can have an enormous total parameter count—a trillion—but only use a fraction of it for any given query.
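The routing idea can be sketched in a few lines. This is a toy illustration of top-k expert routing in general, not DeepSeek's actual layer design; the dimensions and the use of simple linear maps as "experts" are assumptions for the sake of a runnable example.

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Route one token through a mixture-of-experts layer.

    Only the top-k experts (by gate score) run; the rest stay dormant.
    Toy dimensions -- not DeepSeek's real layer sizes.
    """
    scores = x @ gate_w                      # one score per expert
    top_k = np.argsort(scores)[-k:]          # indices of the k best experts
    weights = np.exp(scores[top_k])
    weights /= weights.sum()                 # softmax over the chosen experts
    # Weighted sum of only the active experts' outputs.
    return sum(w * experts[i](x) for w, i in zip(weights, top_k))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
# Each "expert" here is just a linear map standing in for a feed-forward block.
mats = [rng.standard_normal((d, d)) for _ in range(n_experts)]
experts = [lambda x, m=m: x @ m for m in mats]
gate_w = rng.standard_normal((d, n_experts))

x = rng.standard_normal(d)
y = moe_forward(x, experts, gate_w, k=2)
print(y.shape)  # (8,) -- only 2 of the 16 experts were evaluated
```

The key property is in the last line: the output has the same shape as a dense layer's would, but only 2 of the 16 expert networks did any work for this token.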

The advantage is elegant: you get the benefit of massive scale—the model has learned from operating at that scale—but the computational cost per query is much lower. A trillion-parameter model using 32 billion parameters per token uses roughly 1/30th the compute of a conventional 1 trillion-parameter model. That changes economics fundamentally.
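That ratio is easy to check. The arithmetic below is illustrative only; real FLOP counts depend on attention, embeddings, and other architecture details, but per-token compute scales roughly with active parameters.

```python
total_params = 1_000e9   # one trillion parameters in the full model
active_params = 32e9     # parameters actually used per token

# Per-token compute scales roughly with active parameters, so the
# MoE model does about this fraction of a dense model's work:
fraction = active_params / total_params
print(f"active fraction: {fraction:.3f}")        # 0.032
print(f"compute saving:  ~1/{1 / fraction:.0f}")  # ~1/31
```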

This isn’t a minor efficiency improvement. This is a shift in what’s computationally feasible. Deploying a trillion-parameter conventional model would require enormous compute infrastructure. Deploying a trillion-parameter mixture-of-experts model is significantly cheaper. That difference ripples through the entire competitive landscape.

The DeepSeek R1 Shock and Rising Chinese AI Capability

DeepSeek V4 didn’t emerge in a vacuum. In January 2025, DeepSeek released R1, a reasoning-focused model, and startled the entire AI development community. R1 achieved performance on reasoning tasks that matched or exceeded leading models from OpenAI and Anthropic. This was shocking not because reasoning-focused models are new, but because DeepSeek achieved it under severe constraints.

China faces chip sanctions. NVIDIA’s most advanced chips cannot be exported to China. Chinese companies cannot legally purchase the cutting-edge processors that power frontier AI development elsewhere. Yet DeepSeek built R1 using older-generation chips, achieving performance parity or better with models built on unrestricted access to the best technology.

This created a reckoning moment. The assumption in Western tech policy was that chip restrictions would slow Chinese AI development significantly, preserving Western technical advantage. R1 suggested that assumption was wrong. DeepSeek had found ways to achieve world-class performance despite the restrictions. Not by being slightly behind—by being competitive.

V4 escalates this story. A trillion-parameter mixture-of-experts model from a Chinese startup, deployed openly, challenging the technical approach dominant in the West. This is the moment where the competitive reality becomes impossible to ignore. Chinese AI development isn’t something that’s going to happen in the future. It’s happening now.

The Open Source Implication: Democratising Frontier AI

DeepSeek released V4 with weights openly available. This is not a proprietary model locked behind an API. This is a model that researchers, developers, and companies can download, fine-tune, and deploy themselves. The implications are enormous.

For the last couple of years, frontier AI capability has been concentrated in a handful of companies: OpenAI, Anthropic, Google, Meta, and now DeepSeek. These companies maintain proprietary models that users access through APIs or interfaces they control. Innovation happens in their labs. Competition is vertical—building bigger models, better infrastructure, more capability—but the barrier to entry remains extremely high.

Open source models disrupt this dynamic. When frontier-capable models are available openly, the competitive field becomes horizontal. Everyone can use the same underlying model. Innovation shifts from building the model to building with the model—finding better applications, better fine-tuning approaches, better deployment strategies. A researcher at a university can work with frontier-capable models. A startup with limited resources can build competitive products. The playing field shifts from ‘who can afford to train trillion-parameter models’ to ‘who can best use these models’.
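One concrete form of "building with the model" is parameter-efficient fine-tuning. The post doesn't name a method, but a LoRA-style low-rank update is the common approach for adapting open weights cheaply; here is a minimal numpy sketch of the idea, with a deliberately tiny hidden size.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64   # hypothetical hidden size, far smaller than a real model's
r = 4    # rank of the adaptation -- the only part you train

# Frozen pretrained weight (downloaded once, never updated during fine-tuning).
W = rng.standard_normal((d, d))

# Trainable low-rank factors: d*r + r*d parameters instead of d*d.
A = rng.standard_normal((d, r)) * 0.01
B = np.zeros((r, d))   # start at zero so the adapter is initially a no-op

def adapted_forward(x):
    # Effective weight is W + A @ B; only A and B would receive gradients.
    return x @ (W + A @ B)

x = rng.standard_normal(d)
assert np.allclose(adapted_forward(x), x @ W)  # untrained adapter changes nothing
trainable = A.size + B.size
full = W.size
print(f"trainable params: {trainable} vs full fine-tune: {full}")  # 512 vs 4096
```

The design point is the ratio: a startup fine-tuning an open model trains the small factors, not the frozen frontier weights, which is what makes "building with the model" economically feasible.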

Meta’s Llama models opened this possibility in principle. DeepSeek V4 demonstrates it at scale. A genuinely frontier-capable model, openly available, is reshaping what’s possible in AI development. This is democratisation in the literal sense: power that was concentrated in a few hands is being distributed more widely.

Cost Efficiency Advances and the Economics of AI

The economics of AI development have been brutal. Training a frontier model costs tens of millions of dollars. Running inference at scale costs millions daily. The only entities that can sustain these costs are large tech companies with enormous revenue. This created a natural monopoly—whoever can afford the most compute wins.

DeepSeek’s approach changes this dynamic in multiple ways. The mixture-of-experts architecture means lower inference costs. Open sourcing means companies don’t need to train their own frontier models. Using efficient training approaches—which DeepSeek has pioneered—means lower training costs. Cumulatively, these changes make frontier AI capability accessible to far more organisations than previously possible.

What does this mean concretely? A startup can now build sophisticated AI products without needing to raise billions for model training. An academic lab can run cutting-edge AI research without negotiating access from OpenAI or Anthropic. A company in a developing country can deploy frontier AI capability locally, in their language, optimised for their use cases.

This doesn’t mean everyone can train their own frontier model. That still requires substantial resources. But it means frontier models are no longer gated by proprietary access. That’s a significant shift.

What This Means for European AI Startups

Europe has struggled to produce AI companies that compete at the frontier level. The US dominates through scale—a combination of capital availability, engineering talent concentration, and large technology company backing. China is rapidly advancing through state investment and focused effort. Europe has pockets of excellence but lacks the concentrated resources to compete at the largest scale.

DeepSeek V4 changes the game for European startups. They no longer need to build frontier models to access frontier capability. They can build applications on top of DeepSeek’s models, fine-tune them for European languages and use cases, integrate them with European data and infrastructure. This shifts competition from ‘can you build the largest model’ to ‘can you build the best application’.

At Nexatech Ventures, we invest across Europe and the US. The implications for European AI startups are immediately practical. Previously, European AI companies were fighting with one arm tied behind their backs—they didn’t have access to the best models except through APIs controlled by US companies. Now they can access frontier models openly, build natively in their ecosystems, and compete on capability rather than on access.

This doesn’t mean the problem is solved. European startups still face challenges around access to sufficient compute for training and around attracting world-class talent away from the US. But the fundamental barrier—access to frontier models—is now removable.

The Geopolitical Dimension

You can’t discuss DeepSeek without discussing geopolitics. The US implemented chip restrictions specifically to slow Chinese AI development. The assumption was that without access to cutting-edge processors, Chinese companies would fall progressively further behind. DeepSeek’s achievements suggest this assumption was flawed.

There are two possible interpretations of this fact. One: the chip restrictions are ineffective and should be tightened further. The other: the chip restrictions are driving Chinese AI development toward different approaches—more efficient models, novel architectures, open-source strategies—that might ultimately prove more competitive than the brute-force approach of simply scaling larger models.

I don’t have certainty about which interpretation is correct. But the pattern is worth noting. Constraints often drive innovation. DeepSeek isn’t succeeding despite restrictions—they’re succeeding partly because restrictions forced them to think differently about problems that the better-resourced West was solving with sheer computational power.

The geopolitical implications extend further. If frontier AI capability is increasingly accessible through open models rather than proprietary APIs, geopolitical control through proprietary technology becomes harder. If any competent engineer can download a frontier model and run it locally, the leverage held by any single company or country diminishes. This could be genuinely significant.

The Capability Question: Is DeepSeek V4 Actually Better?

The trillion-parameter headline generates attention. But what matters is whether V4 is actually better than existing models. The answer is: in some ways, yes. In some ways, it’s competitive. In some ways, it’s probably not ahead.

On reasoning tasks and complex problem-solving, V4 shows strong performance, competitive with OpenAI’s latest models and Anthropic’s Claude. On creative tasks like writing, it appears somewhat weaker. On benchmark performance, it’s in the top tier but not clearly dominant. On speed and efficiency—getting decent results with lower compute—it’s arguably the best in the world.

What matters for impact is not whether V4 is objectively the best model—that’s a complex comparison—but whether it’s good enough for most applications, accessible to most users, and efficient to deploy. By those measures, V4 is genuinely transformative.

The Pattern of Frontier AI Development

Looking back over the last few years of AI development, a pattern emerges. In 2021, large language models were in the realm of academic research. In 2022, large language models became products. In 2023, they became widely used. In 2024, open-source models became competitive with proprietary models. In 2025-2026, frontier-capable open models became publicly available.

This is a consistent pattern of democratisation. What was once the province of frontier labs becomes increasingly accessible. The pace of this democratisation is accelerating. Each cycle, capability spreads faster.

DeepSeek V4 represents the latest step in this progression. Where next? My guess is that within another 18-24 months, open models will be clearly superior to the proprietary alternatives for many applications. The competitive advantage of proprietary models will shift from capability to specific services—better interfaces, better fine-tuning support, better integration with other tools. The frontier of capability becomes open.

What This Means for AI Investment and Strategy

For investors like myself, DeepSeek V4 reshapes the landscape. Companies trying to build value by controlling access to frontier models face a declining value proposition. Companies that are profitable because they have exclusive access to the best AI capability need to rethink. Companies building on top of proprietary models need to consider their dependency on those models and how that dependency changes if open alternatives become clearly superior.

What becomes valuable is different. The ability to fine-tune models for specific domains becomes valuable. Integrations and applications built on top of frontier models become valuable. Data becomes valuable—the training data, the domain-specific data, the ability to continuously improve models through user interaction. The direct capability of the underlying model matters less.

For entrepreneurs, this is genuinely liberating. You don’t need to compete with OpenAI or Anthropic on raw model capability. You can build products that use frontier capability to solve real problems. That’s a far more achievable goal for a startup.

The Broader Implication: AI Capability Becomes Commodity

The long-term trajectory of DeepSeek V4 and models like it is clear. Frontier AI capability is becoming a commodity. Like electricity, like computing power, like internet access, advanced AI capability is becoming something that’s generally available to anyone with the resources to access it.

Commodity doesn’t mean cheap—electricity rests on massive infrastructure investment and isn’t costless to deliver. But it means accessible, available, deployable by diverse organisations with different business models and different goals.

When capability becomes commodity, competitive advantage shifts. It’s no longer about who has the best AI model. It’s about who builds the best solutions on top of commodity capability. It’s about who understands users’ needs best, who integrates AI capability most elegantly into solutions that solve real problems, who builds trust and reliability.

This is the world we’re moving toward. DeepSeek V4 accelerates that trajectory. It’s not the end of AI development—frontier research continues, capability keeps improving. But it marks the moment where we shift from ‘AI is rare and valuable’ to ‘AI is becoming ubiquitous’. That’s a genuinely significant transition.

Related reading: How Emotional AI Claims to Read Your Feelings — and Why It Probably Can’t, Developing Ethical Frameworks for AI Implementation, and What Is Information Communication Technology (ICT): A Concise Guide to ICT Basics.



Written by
Scott Dylan