The Open-Source Momentum
Meta’s decision to open-source the Llama family of models stands as one of the most significant inflection points in recent AI history. By making these models freely available, Meta didn’t just hand out software—they fundamentally altered the playing field. Suddenly, startups and researchers without billion-dollar R&D budgets could build sophisticated AI applications on a foundation that actually works.
The impact has been immediate and visible. Hugging Face, the platform serving as the de facto marketplace for open-source AI models, has become an essential infrastructure layer. Researchers push models, companies fine-tune them for specific use cases, and an entire ecosystem flourishes around this open exchange. It’s reminiscent of how Linux transformed server infrastructure, but potentially with much broader implications.
Meta’s Llama wasn’t the only catalyst, though it’s probably the most influential. The broader movement toward open-source AI has been building for years, driven by researchers who believe that transparency and accessibility accelerate innovation. But it took Meta’s weight behind the idea to make it impossible to ignore.
European Innovation Finding Its Footing
Europe has traditionally played second fiddle to the US in the AI race. The continent has the talent and the ambition, but American venture capital and Big Tech’s resources created a gravity well that pulled the best work westward. That’s starting to change, and open-source AI is part of why.
Mistral AI, the French startup, is a perfect example. They’ve released competitive open-source models that rival or exceed proprietary alternatives on certain benchmarks. They’re not trying to outspend OpenAI or Google—they’re being smarter about how they allocate resources. By engaging with the open-source community rather than fighting it, Mistral is building a company that works with the current rather than against it.
There are others too. European researchers and startups are increasingly comfortable building on open foundations because those foundations are legitimately good. You don’t need to wait for API access to a proprietary model. You don’t need to wonder if your use case will be rate-limited or suddenly become unavailable. You can run the model yourself, modify it, and own your data.
For investors like us at Nexatech Ventures, this is genuinely exciting. It means European founders have better starting positions than ever before. The moat isn’t just capital anymore—it’s execution, creativity, and understanding specific market needs.
The DeepSeek Question
And then there’s DeepSeek. The Chinese AI company has emerged with models that are genuinely impressive, particularly considering the constraints they’re operating under. US export restrictions on advanced semiconductors mean that Chinese AI companies don’t have access to the latest GPUs. Yet somehow, DeepSeek is producing competitive results.
How? That’s the question everyone’s asking, and there are a few possible answers. Better algorithmic efficiency. Different training approaches. Smarter resource allocation. The company clearly understands how to extract maximum value from limited compute. Whether you’re bullish or bearish on China’s AI prospects, DeepSeek’s emergence suggests the race won’t be won by whoever has the best chips, but by whoever uses them most effectively.
There’s also a strategic dimension here worth considering. DeepSeek is embracing the open-source approach rather than hoarding its innovations behind proprietary walls. That stance has advantages of its own, particularly for building influence and attracting talent, and it happens to align with what the broader AI research community values.
Open vs. Closed: The Tension
The rise of open-source AI has resurfaced an old debate with new urgency: should advanced AI models be open or closed?
The open-source advocates make a strong case. Open models democratise access to powerful technology. They allow smaller companies and individual researchers to build amazing things. They lower barriers to entry, and low barriers have historically been where innovation flourishes. Open source forced Microsoft to take Linux seriously. It fundamentally changed how software gets built. Why shouldn’t AI follow the same path?
But the concerns aren’t dismissible either. There are legitimate questions about safety. An open model can be misused. Someone could fine-tune it for harmful purposes. These aren’t hypothetical risks; they’re real considerations that responsible AI developers should take seriously. How do you balance enabling legitimate innovation with preventing harmful applications? There’s no easy answer.
The people I talk to in the industry tend to land somewhere in the middle. Few argue that AI should be completely closed. Few argue that all models should be completely open, with zero safeguards. The practical question is more nuanced: which models, at which capabilities, under which conditions should be open? How do you structure open-source development in a way that enables innovation while maintaining reasonable safety standards?
I suspect the answer isn’t uniform. Different types of models might have different governance approaches. A small language model for specific tasks might make sense as fully open. A general-purpose model with significantly broader capabilities might benefit from more structured access. Neither approach is obviously wrong—they serve different purposes.
What This Means for Startups
From a purely practical standpoint, open-source AI models have transformed what’s possible for early-stage companies. You can now start an AI business with a much smaller initial budget than you could two years ago. You can experiment with state-of-the-art models without negotiating with gatekeepers or worrying about API rate limits that could kill your product experience.
But there’s a flip side. As more founders get access to the same foundational models, success depends increasingly on other factors. You need to understand your customers’ problems deeply. You need to fine-tune and adapt models for your specific use case. You need to build product experiences that make the underlying AI useful rather than just technically impressive. These are harder problems than just having access to good base models.
The most successful AI startups we’re backing aren’t the ones just wrapping a UI around a language model. They’re the ones solving genuine problems with AI, often by combining different models, adding domain expertise, and building better data pipelines. Open-source models are the foundation they build on, but the real value creation happens on top.
At Nexatech, we’re seeing opportunities across the board. Companies building infrastructure for fine-tuning and deploying open models. Companies using open models as a starting point for specialised applications. Companies combining open models with proprietary data or methods to create defensible advantages. The playing field has levelled somewhat, which actually benefits the most thoughtful founders.
The Geopolitics of Open Source
There’s a geopolitical dimension to this shift that’s worth acknowledging honestly. For years, American companies and the US government could point to AI dominance as a source of national advantage. That picture is getting more complicated. Chinese companies are advancing despite chip restrictions. European companies are building viable AI businesses. Smaller countries are getting access to world-class AI tools.
Open-source AI makes this inevitable. You can’t restrict the spread of knowledge when it’s published openly. Once a model exists and is publicly available, anyone with adequate compute can use it, adapt it, and build on it. That’s fundamentally democratising in a way that proprietary systems can’t be.
I don’t think this means American AI companies are in trouble—far from it. Companies like OpenAI, Anthropic, and others are still pushing the frontier. But the frontier is no longer so clearly fenced off. The competition is real, global, and increasingly sophisticated.
The Infrastructure Play
One of the most interesting opportunities right now is in infrastructure for open-source AI. If the models themselves are becoming commoditised—available and similar across many different providers—then the competitive advantage shifts elsewhere.
Hugging Face is the obvious example. They’ve become invaluable by being the place where open models live and get shared. But there are other opportunities too. Better tools for fine-tuning. Easier deployment. Better monitoring and evaluation. Ways to combine multiple models. Optimisations for specific hardware or use cases. These are the unsexy but critical components that determine whether a startup actually succeeds in deploying AI at scale.
We’re particularly interested in European companies that can build these tools. They don’t need to compete with American companies on raw capital—they can compete on understanding specific customer needs, building better developer experiences, and moving faster on specific problems.
What We’re Watching
As we look ahead from this point in 2026, there are several things worth tracking closely.
First, will the safety concerns around open-source models get managed in a way that enables continued openness? This isn’t hypothetical. If there are high-profile harms that people can tie to open-source AI development, the political pressure to restrict it could grow. The responsible thing is for the open-source community to be thoughtful about safety from the outset.
Second, how will different regions approach AI regulation? The EU’s regulatory framework is taking shape. The US is taking a more hands-off approach. China will have its own approach. These different regulatory paths could create incentives for different models of AI development. That’s actually interesting for investment purposes: different rules create different opportunities.
Third, will open-source models actually drive the kind of innovation and democratisation that advocates hope for? Or will we find that proprietary models maintain their edge through better funding, better training data, and better security? This isn’t settled yet. My instinct is that the answer is both—open and closed models will coexist and compete, each winning in different domains.
Finally, what’s the trajectory for companies like DeepSeek? Are they building toward something genuinely transformative? Are we seeing the early stages of serious competition in AI development from outside the traditional American tech ecosystem? Or is the current moment a particular advantage that will erode as competition intensifies? The next year or two should provide some answers.
Why This Matters Now
We’re at an inflection point. For several years, the narrative around AI has been concentrated power—the idea that you need massive resources and proprietary data to build cutting-edge AI systems. Open-source models are challenging that narrative with results. That doesn’t mean resources don’t matter. Obviously they do. But it means the story is more complicated and more interesting.
For entrepreneurs, it means now is an unusually good time to start something in AI. The foundation is there. You don’t need to reinvent the base model. You can focus on the problems that actually interest you and the customers that actually need your solution.
For investors, it means we need to think carefully about where value actually gets created in AI. It’s not always in the models themselves. It’s often in the applications, the infrastructure, the data, the domain expertise, and the ability to execute better than everyone else. That opens up possibilities for founders everywhere, not just in Silicon Valley.
And for the industry broadly, open-source AI represents a shift toward a more competitive, dynamic landscape. That’s good for innovation. It’s challenging for the companies that built their advantages on gating access to models. But it’s genuinely beneficial for everyone interested in seeing AI technology advance quickly and responsibly.
The Road Ahead
The rise of open-source AI and the emergence of capable competitors like DeepSeek represent a real shift in how AI development is happening globally. This isn’t a minor adjustment to how the industry works—it’s a fundamental change in access, incentives, and competition.
What comes next depends partly on decisions that get made in the coming months. How will different companies approach open sourcing? What will the regulatory environment look like? How will safety concerns get balanced against democratisation benefits? How will the competition between open and closed approaches actually play out in real products and real markets?
What I’m certain about is that the landscape is more interesting now than it was a year ago. There’s genuine competition. There are real opportunities for founders who think carefully about where value gets created. And there’s a real chance that AI innovation gets distributed more widely—geographically, demographically, and across company sizes.
For anyone building in AI right now, the open-source movement isn’t just a trend. It’s the foundation on which the next generation of AI companies will be built. Understanding it, engaging with it thoughtfully, and finding where your particular advantage lies within that landscape—that’s the work that separates the winners from everyone else.