Responsible AI Investment: What Nexatech Ventures Looks For in 2026

When Nexatech Ventures launched as a £100M AI-focused venture fund, we made a deliberate choice about what kinds of companies we would invest in and what kinds we wouldn’t. We committed to seeking AI companies that take responsibility seriously—companies thinking carefully about the implications of their work, about potential harms, about alignment with human values. This choice puts us somewhat at odds with much of venture capital culture, which tends to prioritise growth, disruption, and speed above most other considerations. Yet over the past few years of deploying capital into AI companies, I’ve become more convinced that responsible AI investment isn’t just ethically superior—it’s also better business. Companies taking AI development seriously are more likely to build products that customers trust, to avoid regulatory problems, to hire and retain top talent, to maintain social licence for their work, and to build sustainable competitive advantages. This post explores what responsible AI investment actually looks like in practice, what we look for in companies, what red flags we watch for, and why we believe this approach ultimately creates better returns.

The Baseline: What We Mean By Responsible AI

First, let me clarify what we mean by responsible AI investment, because the term has become somewhat loaded and means different things to different people. For Nexatech, responsible AI means: developing AI systems with explicit consideration of potential harms and how to mitigate them; building interpretability and transparency into systems so that people can understand how decisions are being made; being honest about limitations and failure modes; engaging with affected communities about implications of the technology; committing to reasonable constraints on use cases even when those constraints reduce revenue; and being willing to say no to applications of technology that we believe would cause net harm. This is different from performative responsibility—saying nice things about AI safety whilst actually just rushing to deploy systems at scale. True responsibility involves trade-offs: slower deployment timelines, smaller addressable markets, higher development costs, willingness to walk away from some business opportunities.

Not all AI companies need to commit to all these dimensions equally. A company working on medical AI diagnostics might prioritise interpretability and clinical validation, whereas a company working on customer service chatbots might prioritise different dimensions, such as honesty about what the system can and cannot handle. But every company should be actively thinking about responsible development in their domain. Every company should be able to articulate why their approach is responsible, not just claim responsibility as a marketing exercise. And every company should be willing to make concrete trade-offs that demonstrate commitment to responsibility rather than just talking about it.

Due Diligence on Responsibility: What We Actually Investigate

When we conduct due diligence on an AI company, we look at several specific dimensions of how they're approaching responsibility. First, we examine the founding team and whether they have people with expertise in AI safety, ethics, or the domain where they're applying AI. We look for teams where at least one person's primary focus is on safety and responsibility, rather than teams where it has been bolted onto a product team's responsibilities. We interview team members about their thinking on failure modes and risks. Can they articulate what could go wrong with their system? Have they thought about misuse cases? Are they thinking about this proactively or only when prompted?

Second, we examine their approach to evaluation and testing. Do they have rigorous processes for testing systems before deployment? Do they test for bias and fairness, not just accuracy? Do they test failure modes and edge cases, not just happy paths? Do they have external review of their systems, or only internal testing? We look for companies that are spending significant resources on evaluation, because cutting corners on evaluation is where irresponsible AI happens. Third, we examine their data practices. Where does their training data come from? Has it been cleaned and validated? Are there biases in the data they’re aware of and working to address? Do they have processes for updating models when they find problems? Data quality is foundational to responsible AI, and many startups cut corners here to move faster.
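
To make "testing for bias and fairness, not just accuracy" concrete, here is a minimal sketch in Python of one of the simplest checks we'd expect to find in an evaluation suite: a demographic parity gap, the spread in positive-prediction rates across groups. The function, the toy data, and any threshold applied to the result are illustrations of our own, not any portfolio company's actual code.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rates across groups,
    along with the per-group rates themselves."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: binary model outputs alongside a protected attribute.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(f"positive rates by group: {rates}, gap: {gap:.2f}")
```

Real evaluation suites go much further (equalised odds, calibration across groups, intersectional slices, genuine holdout data), but even a check this simple, run routinely, surfaces problems that aggregate accuracy metrics conceal.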

Fourth, we examine their governance structure. Is there oversight of AI systems beyond the product team? Do they have an ethics board or process for evaluating potentially problematic applications? Do they have policies on what they will and won’t do with their technology? Many startups treat governance as a compliance burden, but we look for companies that see governance as valuable. Fifth, we look at their communication and transparency. Are they honest about limitations of their systems? Do they communicate appropriately with customers about what their AI can and can’t do? Are they transparent with regulators and the public? We’re wary of companies that oversell their capabilities or that try to hide limitations.
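
One way we tell genuine governance from a compliance exercise is whether a company's use-case policies are operational rather than aspirational. As a hypothetical sketch, a policy on what the company will and won't do can be encoded as data that gates customer onboarding; the categories and the escalation flow below are illustrative assumptions of ours, not any company's actual policy.

```python
# Hypothetical: a "what we will and won't do" policy expressed as data that
# gates onboarding, rather than as a paragraph on a marketing page.
PROHIBITED = {"mass surveillance", "social scoring"}
REQUIRES_ETHICS_REVIEW = {"hiring", "credit decisions", "medical diagnosis"}

def triage_use_case(use_case: str) -> str:
    """Route a prospective customer's use case before any contract is signed."""
    if use_case in PROHIBITED:
        return "reject"
    if use_case in REQUIRES_ETHICS_REVIEW:
        return "escalate to ethics board"
    return "approve"

for case in ["hiring", "mass surveillance", "customer support"]:
    print(f"{case}: {triage_use_case(case)}")
```

The point is not the dozen lines of code; it's that a rejected use case leaves a trace, and an escalated one reliably reaches the ethics board before revenue pressure can decide the question.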

Red Flags: When We Walk Away From AI Companies

Several patterns cause us to pass on AI companies, regardless of how promising they otherwise look. The first red flag is when founders say they're not thinking about responsibility or safety because their customers don't care. This tells us that the founders treat responsibility as something imposed from outside rather than a value they hold themselves; the moment external pressure eases, they'll cut corners. We want founders who care about responsibility because they genuinely think it matters, not just because it's currently fashionable. The second red flag is when founders seem genuinely ignorant of potential failure modes and harms of their system. We don't expect them to have thought of everything, but we expect them to have thought seriously. If they're building a system that makes decisions affecting people's lives and they haven't considered fairness, that's concerning.

The third red flag is when companies are explicitly trying to evade regulation or operate in grey areas to avoid oversight. There's a difference between regulatory strategy that's sophisticated and regulatory strategy that's evasive. We want companies that engage constructively with regulators, that are transparent about what they're doing, and that work within the spirit of regulations even where the letter of regulations might allow evasion. The fourth red flag is when we investigate and find examples of cutting corners on safety for speed. If there are examples of the company deploying systems that hadn't been adequately tested, or that had known issues that weren't fixed because fixing them would delay launch, we generally pass. That pattern suggests the company will continue to prioritise speed over safety, and that we'll eventually find ourselves caught up in a failure or crisis.

The fifth red flag is when diversity and inclusion are absent from the team and appear unimportant to leadership. Diverse teams catch biases in AI systems that homogeneous teams miss. If a team building AI is entirely male, or entirely of a single ethnicity, or drawn entirely from elite technical backgrounds, there will be blind spots. These blind spots lead to systems that work poorly for underrepresented populations and that replicate or amplify social inequities. We won't invest in companies that appear unwilling to take diversity seriously. The sixth red flag is when the business model seems dependent on exploiting regulatory uncertainty or information asymmetries. If a company's core value proposition depends on doing something regulators don't want but can't yet prevent, we're wary. Those companies tend to face sudden crises when regulation catches up.

Why Responsibility Actually Creates Business Value

The deeper argument we make to fellow investors is that responsibility isn’t just ethically sound—it’s good business. Consider customer trust. AI systems are being deployed in domains where customers need to trust that the system is working properly and fairly. A customer using an AI recruiting system needs to believe it’s not biased against women or minorities. A customer using an AI medical diagnostic system needs to believe it’s been properly validated. A customer using an AI system for content moderation needs to believe it’s not making arbitrary decisions. Companies that demonstrate genuine commitment to responsible development build customer trust more effectively than companies that just claim responsibility. This trust translates into competitive advantage, into customer loyalty, into willingness to pay premium prices for products from trusted providers.

Consider regulatory risk. Regulators around the world are increasingly focused on AI. The EU AI Act, the AI Bill of Rights discussion in the US, the UK's approach to AI regulation—all of these are creating a landscape where regulatory scrutiny will increase. Companies that have already built responsibility into their processes will navigate this landscape more easily than companies trying to retrofit responsibility after the fact. A company that's been doing fairness testing from the beginning won't be shocked to find it's required by regulation. A company that's documented its decision-making processes won't struggle to demonstrate accountability. Companies that are proactively ahead of regulation tend to do better when regulation arrives than companies that are constantly one step behind.
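
As a sketch of what "documented decision-making" can look like in practice, the record below follows the spirit of published model-card proposals; the schema and the example values are our own illustration, not a regulatory requirement or any company's actual documentation.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A lightweight record of what a model is for, how it was evaluated,
    and what is known to be wrong with it."""
    name: str
    intended_use: str
    out_of_scope_uses: list[str]
    evaluation_datasets: list[str]
    known_limitations: list[str]
    fairness_results: dict[str, float] = field(default_factory=dict)

# Example values are invented for illustration.
card = ModelCard(
    name="cv-screening-v3",
    intended_use="Rank CVs for recruiter review; never auto-reject.",
    out_of_scope_uses=["fully automated rejection", "compensation decisions"],
    evaluation_datasets=["internal-holdout-2025", "external-audit-set"],
    known_limitations=["lower recall on non-UK qualification formats"],
    fairness_results={"demographic_parity_gap": 0.04},
)
print(card.intended_use)
```

A company that maintains records like this for every deployed model can answer a regulator's questions from its version history rather than from memory.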

Consider talent recruitment and retention. The best AI researchers and engineers in the world have options. They can work at many companies. Many of them care about the work being done responsibly. Companies that take responsibility seriously tend to attract and retain better talent than companies where engineers feel they’re being pushed to cut corners. Talented engineers will leave companies where they feel the work is irresponsible. This creates churn, loss of institutional knowledge, reduced ability to innovate. Companies with stable, talented teams build better products than companies with high turnover. Responsibility contributes to talent retention, which contributes to better products and better business outcomes.

Due Diligence on Use Cases and Market Fit

Beyond examining how companies are developing AI, we also examine what they’re actually building and whether the use case is one we believe is responsible. Some application domains are inherently more fraught than others. We’re cautious about AI applications in criminal justice—these systems have clear potential to amplify existing biases and injustice. We’re cautious about AI-based surveillance systems. We’re cautious about high-consequence healthcare applications that haven’t been properly validated. We’re cautious about systems that would affect credit decisions or other determinations that significantly impact people’s lives, unless the company can demonstrate genuine commitment to fairness testing. This doesn’t mean we never invest in companies working on difficult applications. But we examine whether the company is approaching these difficulties seriously or just hoping they won’t come up.

We also examine whether the problem the company is solving is genuinely important. Not all AI applications create value. Some use AI because it’s trendy and because venture capital is available, not because AI is the right solution. Some applications of AI create value only by shifting costs onto others—by making certain decisions faster but less fairly, by automating things that shouldn’t be automated, by replacing human judgement with algorithmic judgement in ways that harm people. We want to invest in companies solving problems where AI actually creates value, not just disruption. This requires looking beyond the pitch and understanding what the actual impact of the company’s work would be if they succeeded.

The European AI Landscape and Responsible Innovation

I’ve spent significant time in recent years examining the AI startup landscape across Europe, and I’m struck by a difference in approach compared to some other regions. European founders tend to be more deliberate about responsible development, more attentive to regulatory compliance, and more concerned with fairness and ethics. This is partly cultural—European attitudes toward business regulation are generally less libertarian than American attitudes. It’s partly regulatory—the EU AI Act and GDPR have created an environment where responsibility is required, not optional. And it’s partly strategic—European companies trying to compete globally often position themselves as trustworthy and responsible, in contrast to American companies that sometimes position themselves around disruption and a move-fast-and-break-things mentality.

This difference doesn’t mean European AI companies are better than American or Asian ones. But it does suggest that there are multiple viable approaches to building AI companies, and that the move-fast-break-things approach isn’t the only path to success. We actively look for opportunities in European AI startups because we believe responsible innovation is both viable business and a competitive advantage in an increasingly regulated environment. Some of our best investments have been in European companies thinking carefully about how to build AI responsibly whilst remaining competitive. These companies tend to be slightly slower to market, slightly more expensive to build, but ultimately more sustainable and valuable.

Evaluating AI Infrastructure Companies

Beyond application-specific AI companies, we also invest in AI infrastructure—companies building tools, platforms, and services that enable other AI companies to build more responsibly. This is important because responsibility needs to be baked into infrastructure, not just added on top. We look for infrastructure companies building tools for fairness testing, for model evaluation, for interpretability, for governance and compliance. These are companies enabling responsible development rather than companies applying AI to specific problems. The due diligence on infrastructure companies is different because their responsibility matters at a different level—they’re responsible for building tools that enable responsibility in other companies.

For infrastructure companies, we look for whether they’re actually talking to the companies that would use their tools, whether they understand the real problems practitioners face, whether their tools are genuinely useful rather than theoretically elegant but practically cumbersome. We look for whether the company has thought about how their tools might be misused or ignored. We look for whether the company has realistic expectations about adoption—will companies actually use fairness testing tools if they slow down deployment, or do they need to be integrated into standard workflows to become adopted? Good infrastructure companies understand adoption problems and build tools that are easy to use, because otherwise responsibility remains an optional extra rather than becoming standard practice.
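
To illustrate that adoption point: a fairness gate written as an ordinary test runs automatically on every model change, rather than as a manual step someone can skip under deadline pressure. This hypothetical sketch is in pytest style and reuses the demographic_parity_gap function from the earlier example; the threshold and the run_holdout_eval stand-in are assumptions for illustration only.

```python
MAX_PARITY_GAP = 0.10  # domain-dependent; this value is purely illustrative

def run_holdout_eval():
    """Stand-in for a company's real harness: score the candidate model on a
    held-out set annotated with a protected attribute."""
    preds  = [1, 0, 1, 0, 1, 0, 1, 0]
    groups = ["a", "a", "b", "b", "a", "a", "b", "b"]
    return preds, groups

def test_candidate_model_parity_gap():
    preds, groups = run_holdout_eval()
    gap, _ = demographic_parity_gap(preds, groups)  # from the earlier sketch
    assert gap <= MAX_PARITY_GAP, (
        f"parity gap {gap:.2f} exceeds {MAX_PARITY_GAP}; deployment blocked"
    )
```

When the check lives in the same test suite as everything else, responsibility stops being an optional extra and becomes part of the workflow by default, which is exactly the property we look for in infrastructure tools.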

The Trade-off Between Speed and Responsibility

One of the central tensions in AI investment is the trade-off between speed and responsibility. The venture capital industry typically rewards speed. Companies that move fast, deploy at scale, capture markets quickly, tend to get the highest valuations and the most capital. Responsibility sometimes slows things down. More testing takes time. Fairness evaluation takes time. Governance processes take time. Stakeholder engagement takes time. Companies that insist on these activities will deploy more slowly than competitors who skip them. From a pure speed perspective, responsible companies will lose. But we believe this analysis is too short-term. Responsible companies that deploy properly will ultimately build better products, maintain customer trust, navigate regulation more easily, and create more sustainable businesses. A company that moves fast but cuts corners will move faster initially but will eventually face crises—regulatory action, public backlash, customer defection—that slow them down more than responsibility would have.

What we’ve found is that responsibility-focused companies often reach profitability faster than speed-focused companies, even if they grow more slowly initially. This is because they build customer trust, avoid regulatory crises, maintain talent stability, and don’t waste resources on fixes for problems that should have been caught in development. The venture capital industry tends to focus on growth rate rather than profitability, which is why speed-focused companies can attract capital even if their underlying business model is suboptimal. We consciously look past growth rate and evaluate the sustainability and profitability of the business model. This leads us to companies that might grow more slowly but are more likely to actually be successful long-term.

Red Flags in Market Dynamics: When Hype Gets Dangerous

We’ve all witnessed AI hype cycles multiple times. Large language models are heralded as a step to AGI that will solve every problem; then their significant limitations become apparent and sentiment swings to the opposite extreme. In this environment, there’s enormous pressure on AI companies to meet inflated expectations. Investors expect exponential growth. Customers expect capabilities that don’t yet exist. Regulators are confused about what’s actually possible versus what’s hype. Companies operating in this environment face pressure to overpromise capabilities that aren’t actually achievable, or to cut corners in order to appear to deliver what was promised. When hype is extreme, we become more cautious about all investments in that space, not less. High hype suggests a high probability of disappointment and crash, which will take down even good companies that got caught up in the wave.

We’ve also learned to be wary of companies whose entire pitch depends on hype or on future capabilities that don’t yet exist. A company pitching transformative capabilities that will arrive in two years, funded on the premise that these future capabilities will materialise, seems riskier than a company delivering real value today. This doesn’t mean we never invest in companies working on ambitious long-term research. But we look for these companies to have near-term value propositions as well, not to be entirely dependent on future breakthroughs. We want teams that can deliver value incrementally, demonstrating progress along the way, rather than teams dependent on a single big breakthrough that might or might not happen.

International Considerations: Responsible AI in a Global Context

Responsible AI investment also requires thinking internationally. The UK and Europe are increasingly regulated around AI, but those rules don’t govern what companies do in less regulated jurisdictions. Companies operating across borders need to think about which standards to apply. Do you apply different standards to users in different countries? Do you have different versions of products for regulated and unregulated markets? Do you only operate in jurisdictions with strong regulation? Different companies answer these questions differently. We prefer companies that apply consistent standards globally rather than having different versions for different markets. A company that’s responsible in the EU but irresponsible in less regulated markets is still irresponsible—it’s just cynically exploiting regulatory differences.

We also think about how geopolitical tensions affect AI investment. As AI becomes central to national competition and security, governments are paying increasing attention to who owns and controls AI companies. There are real questions about whether AI companies can remain neutral participants in global markets, or whether they’ll inevitably be pulled into geopolitical competition. We want to invest in companies that think seriously about these dynamics, that understand potential for their work to become weaponised or politicised, and that have thought through how they’ll respond. Naïveté about geopolitics is a form of irresponsibility in the AI space, because it fails to account for real risks and real harms that can result from companies being caught in geopolitical conflicts.

The Role of Impact and Purpose Beyond Profit

One of the questions we ask when evaluating AI companies is whether the founders care about impact beyond profit. This doesn’t mean we expect founders not to want to be successful financially. Of course they do. But we look for founders who have some genuine commitment to whether their work is helping people, whether it’s making the world better or worse. This can sound naive in the context of venture capital, which is explicitly about generating financial returns. But our experience is that founders with genuine commitment to impact beyond profit are more likely to make good decisions when they face trade-offs between speed and responsibility, between growth and ethics, between short-term revenue and long-term sustainability.

Purpose-driven founders also tend to be more resilient when facing adversity. Building an AI company is hard. You’ll face setbacks, criticism, regulatory challenges, technical problems, talent difficulties. Founders who are motivated primarily by making money will get discouraged and make poor decisions when the going gets tough. Founders who are motivated by believing their work matters are more likely to persist and to maintain their values under pressure. We actively look for this kind of purpose in founders, and we’re willing to work with teams where purpose is central to why they’re building what they’re building. This doesn’t mean we only invest in non-profit companies or companies with explicit social missions. It means we look for commercial companies where the founders genuinely care about impact beyond just returning money to investors.

How We’re Thinking About Competition in Responsible AI

One of the questions we ask ourselves is whether responsible AI investment puts our portfolio at a competitive disadvantage if competitors are making less responsible choices. This is a legitimate concern. If one company is cutting corners on fairness testing and deploying faster, they might capture market share and grow faster than a responsible competitor who takes time for proper evaluation. If one company is willing to sell to any customer whilst another refuses certain use cases, the less principled company might make more revenue. These are real dynamics that could hurt our returns.

However, we believe this competitive pressure is overstated in the long term. History suggests that companies cutting corners eventually face crises that destroy value. We’ve seen this with social media companies that grew rapidly through irresponsible practices and are now spending billions on reputation management and trying to retrofit responsibility. We’ve seen it with financial companies that cut corners on responsibility and faced regulatory action and customer defection. We’ve seen it with healthcare companies that prioritised growth over patient safety and faced catastrophic failures. The companies that thrive long-term are generally those that took responsibility seriously early, even if they grew more slowly initially. We’re willing to accept slower growth if it leads to more sustainable, defensible businesses that are less likely to face catastrophic crises.

Our Track Record and Learning

Looking back over the life of Nexatech Ventures so far, I can say that our commitment to responsible AI has shaped both our portfolio composition and our approach to working with companies after investment. Some of the companies we’ve invested in have faced unexpected challenges that required us to push them on responsibility. We’ve had conversations with portfolio companies about slowing down to do proper testing, about refusing certain applications, about improving diversity on their teams. These conversations aren’t always welcome—entrepreneurs are usually eager to move fast. But we’ve seen cases where these conversations prevented larger problems down the road. We’ve had regulatory agencies ask us how we vet our investments for responsibility, and being able to explain our process gives us credibility and standing.

We’ve also learned that responsibility isn’t a binary state. It’s not like a company is either responsible or irresponsible. It’s more like a direction you’re moving. Companies are always on a spectrum, and they’re constantly making choices about whether to invest more in responsibility or to cut corners. Our role as investors is to push companies toward greater responsibility, to make it clear that we value this, and to provide support and resources for doing it well. Some of our best returns have come from companies we pushed toward greater responsibility, because taking responsibility seriously helped them build better products and maintain customer trust.

Looking Forward: The Future of Responsible AI Investment

As AI becomes more central to the economy and to society, I expect investment in responsible AI to become the norm rather than the exception. Regulators will increasingly require companies to demonstrate responsibility. Customers will increasingly demand it. Talent will increasingly seek out companies taking it seriously. The question isn’t whether responsibility will matter in AI investment—it clearly will. The question is whether companies are ahead of these curves or behind them. We’re betting that being ahead is better business, and so far the evidence supports that bet.

For other investors and founders considering this approach, I’d say that responsible AI investment isn’t some nice-to-have ethical overlay on venture capital. It’s about building businesses that are genuinely defensible, that create real value, that maintain customer and employee trust, and that avoid catastrophic risks. It’s about understanding that in a world where AI is increasingly regulated and increasingly scrutinised, the companies that will win are those that have built responsibility into their DNA from day one. That’s not naive idealism about venture capital. That’s realistic business thinking based on how industries evolve and which companies ultimately create value. And that’s why we’re committed to it.


Written by
Scott Dylan