
Building Ethical AI Startups: What Investors Should Demand


The Investor Reckoning

Three years ago, pitching an AI startup meant demonstrating technical capability and market opportunity. The pitch was straightforward: we’ve built something powerful, here’s why people need it, here’s how we’ll scale it. Ethics was mentioned, if at all, as an afterthought. An awkward slide about responsible AI, perhaps. A vague commitment to fairness. Then quickly back to the metrics that mattered: user growth, processing speed, accuracy rates.

That world has shifted dramatically. Today, when startups pitch to Nexatech Ventures, as to serious investors generally, we spend as much time on governance questions as on technical capabilities. How are you addressing bias in your training data? What’s your transparency framework? Who’s on your ethics advisory board? How do you handle edge cases where your system produces harmful outputs? These aren’t peripheral concerns anymore—they’re central to whether we’ll invest.

This shift hasn’t happened because investors suddenly became virtuous. It’s happened because the business case has become clear. AI systems without proper governance frameworks face regulatory action, legal liability, reputation damage, and ultimately, market rejection. The companies that will win the next decade are those that build ethics into their DNA from day one, not those that treat it as a compliance problem to solve after they’ve built something impressive. I’ve watched enough technology cycles to know that the winners are never the ones trying to cut corners or hoping regulators won’t notice. The winners are those building on solid foundations.

ESG Frameworks and AI: A New Frontier

Environmental, Social, and Governance frameworks have been central to institutional investment for years. Large funds have incorporated ESG criteria into their analysis of traditional companies. But applying ESG to AI is relatively new territory. What does environmental responsibility mean for a company training massive neural networks? What does social responsibility mean when your system affects millions of people? What does governance mean when your product is essentially a black box that even your engineers don’t fully understand?

The environmental question is straightforward in some ways, complex in others. Training large language models requires enormous computational resources, which translates directly into energy consumption. Published estimates vary widely, but training a single large model can emit many times the carbon of a typical household’s annual footprint. This isn’t trivial. For AI companies, demonstrating commitment to renewable energy, to efficient training methods, to carbon offsets becomes part of the investment case. We ask startups: what’s your energy consumption? Are you training on renewable power? Have you considered your carbon footprint? These questions matter, and the answers differentiate responsible companies from those that haven’t thought beyond the technical challenge.
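
To make that question concrete, a back-of-the-envelope estimate is often all we’re looking for in a first conversation. The sketch below shows the arithmetic; every figure in it (cluster size, power draw, PUE, grid intensity) is an illustrative assumption, not a measurement of any real system.

```python
# Rough training-run carbon estimate. All figures below are illustrative
# assumptions for the sketch, not measurements of any real system.

gpu_count = 64             # assumed accelerators in the training cluster
gpu_power_kw = 0.4         # assumed average draw per GPU, in kilowatts
training_hours = 720       # assumed one month of continuous training
pue = 1.2                  # assumed datacentre Power Usage Effectiveness
grid_kg_co2_per_kwh = 0.4  # assumed grid carbon intensity; varies widely by region

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Energy: {energy_kwh:,.0f} kWh, emissions: {emissions_tonnes:.1f} t CO2e")
# On a low-carbon grid (say 0.05 kg/kWh) the same run emits roughly 8x less,
# which is why the renewable-power question matters.
```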

The social dimension is more interesting and probably more important. An AI system’s decision about whether to approve a loan, whether to recommend someone for a job, whether to diagnose a medical condition—these affect real people’s lives. If the system has bias baked in, it perpetuates inequality. If it’s opaque, people can’t understand or challenge its decisions. The ESG framework asks: what’s your impact on society? Are you actively checking for bias? Are you thinking about fairness and accuracy across different demographic groups? Are you transparent about your system’s limitations? Companies that can demonstrate thoughtful answers to these questions are far more investable than those that haven’t considered them.

The EU AI Act Changed Everything

When the EU AI Act came into force in August 2024, with its requirements phasing in through 2026 and beyond, it fundamentally changed the landscape for AI startups. This wasn’t just another privacy regulation like GDPR—though it incorporated those lessons. This was a comprehensive regulatory framework that categorised AI systems by risk level and imposed different requirements on each category.

High-risk AI systems—those used in hiring, lending decisions, law enforcement, safety-critical applications—face substantial requirements. They need documented training data. They need bias impact assessments. They need human oversight mechanisms. They need transparency documentation. They need to be able to explain their decisions to users. They need continuous monitoring for performance degradation. These requirements are expensive to implement. They slow down product development. They create compliance overhead. But they’re also becoming the de facto global standard.

What’s remarkable is how quickly the EU AI Act has influenced investment globally. Even investors focused on US or Asian markets are asking about EU compliance because compliance with the strictest framework creates optionality. A company that meets EU requirements can sell to Europe. A company that only meets US requirements might struggle if it eventually wants to expand internationally. We’ve started telling startups: design for EU compliance from the start. It’s harder initially, but it positions you to operate globally and gives you a competitive advantage in a world where regulatory scrutiny of AI is increasing.

The practical impact is that startups can no longer treat ethics and compliance as something they’ll bolt on after they’ve proven the technical concept. The best startups are building governance into their architecture. They’re documenting their training data sources. They’re running bias testing as part of their development pipeline. They’re designing systems with explainability from the start rather than trying to understand black boxes after the fact. This is more expensive, but it’s also more defensible and ultimately more sustainable.
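
What does bias testing in the development pipeline look like in practice? One minimal version is a test that fails the build when a disparity metric drifts past an agreed threshold. The sketch below assumes a hypothetical evaluation set with a group label per record; the metric and threshold are placeholders a real team would choose deliberately.

```python
# Minimal sketch of a bias gate in a development pipeline: the build fails
# if the selection-rate gap between groups exceeds a threshold. The model
# outputs, group labels, and threshold are all hypothetical placeholders.

def selection_rates(predictions, groups):
    """Fraction of positive decisions observed for each group."""
    rates = {}
    for group in set(groups):
        decisions = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

def test_selection_rate_gap(predictions, groups, max_gap=0.1):
    """Fail the build if the best- and worst-treated groups are too far apart."""
    rates = selection_rates(predictions, groups)
    gap = max(rates.values()) - min(rates.values())
    assert gap <= max_gap, f"selection-rate gap {gap:.2f} exceeds {max_gap}: {rates}"

# Toy run: both groups receive positive decisions at the same rate, so it passes.
preds  = [1, 0, 1, 0, 1, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
test_selection_rate_gap(preds, groups)
```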

What We Actually Look for in Ethical AI Companies

At Nexatech Ventures, we’ve developed specific criteria for evaluating AI startups through an ethics and governance lens. It starts with team. Does the founding team include someone genuinely focused on responsible AI? Not someone who carries a “Chief Ethics Officer” title but is treated as a secondary concern. Someone who has decision-making authority, whose voice shapes product development, who can tell the CEO no. We’ve learned that ethical AI happens when it’s central to the company’s identity, not when it’s delegated to the compliance department.

Data quality matters enormously. Where did your training data come from? Have you verified that it was properly licensed or that you have consent to use it? Have you done work to understand what biases might be present? Have you audited whether the data fairly represents the populations your system will affect? Many startups are careless about data sourcing, viewing it as an implementation detail rather than as foundational to whether their system will be fair and accurate. We ask hard questions here because bad data creates bad AI, and no amount of ethical intent can fix that later.
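
One lightweight way to enforce this is to require a provenance record for every dataset before it can touch a training run. A sketch of what such a record might hold, with illustrative field names and values:

```python
# Sketch of a per-dataset provenance record, kept alongside the data itself.
# Field names and values are illustrative; the point is that licensing,
# consent, and known bias risks are documented before training, not
# reconstructed after something goes wrong.

from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    source: str                # where the data came from
    license: str               # licence or consent basis for use
    collected: str             # collection period
    populations_covered: list  # groups the data does and does not represent
    known_bias_risks: list     # issues found during review
    reviewed_by: str           # who signed off, and is accountable

loans = DatasetRecord(
    name="loan_applications_v3",
    source="internal underwriting system, 2019-2023",
    license="first-party data; consent covered by customer agreement",
    collected="2019-01 to 2023-12",
    populations_covered=["existing customers only; thin-file applicants absent"],
    known_bias_risks=["historical approvals skew toward urban postcodes"],
    reviewed_by="data governance lead",
)
```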

Testing and monitoring matter. Have you benchmarked your system’s performance across different demographic groups? Do you have processes for detecting when the system starts performing poorly or behaving unexpectedly? Do you have mechanisms for users to report problems? Do you have a real plan for what you’ll do if you discover bias after deployment? The companies that think seriously about these questions are building robustness. The ones that don’t are building liability.
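
Degradation detection doesn’t have to start sophisticated. A rolling comparison of live accuracy against the accuracy measured at launch, as sketched below, is enough to trigger a human review; the window size and tolerance are illustrative assumptions.

```python
# Sketch of a post-deployment degradation check: compare recent accuracy
# against the accuracy measured at launch and flag a meaningful drop.

from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # rolling record of correct/incorrect outcomes

    def record(self, prediction, actual):
        self.recent.append(prediction == actual)

    def degraded(self):
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough labelled outcomes yet to judge
        rolling = sum(self.recent) / len(self.recent)
        return rolling < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.92)
# In production each labelled outcome feeds the monitor, and a True from
# degraded() should trigger an alert, review, or rollback process.
```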

Transparency and explainability are critical. Some AI systems are inherently more explainable than others. A decision tree or a linear regression model is straightforward to explain. A deep neural network is harder. But companies can design for explainability—they can build monitoring systems that help them understand what the model is using to make decisions, they can provide users with explanations for decisions that affect them, they can document their system’s limitations. The companies that can’t or won’t do this are betting that their systems will never fail in ways that matter. That’s not a bet we’re interested in making.
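
Designing for explainability can start with simple, well-understood techniques. The sketch below uses permutation importance, one standard approach, assuming scikit-learn is available and standing in a toy model for a real one; it shows how a team might check what its model actually leans on, even when the model itself is not intrinsically interpretable.

```python
# One lightweight explainability technique: permutation importance, which
# measures how much shuffling each feature degrades performance. A sketch
# on synthetic data; deeper models need richer tooling, but the principle
# of knowing what the model leans on is the same.

from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```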

Responsible AI Frameworks: From Theory to Practice

Over the last few years, various organisations have developed responsible AI frameworks. The OECD has AI Principles. UNESCO has guidelines. Industry groups have proposed standards. These frameworks are genuinely useful because they translate abstract ideas about fairness and transparency into concrete practices. When we’re evaluating a startup, we look at whether they’ve adopted a framework and whether they’re implementing it seriously.

A good framework should address several key areas. Fairness: is the system treating different groups equitably? Transparency: can users and regulators understand how decisions are made? Accountability: if something goes wrong, who’s responsible and what’s the redress mechanism? Safety: has the system been tested for failure modes? Privacy: is user data being protected appropriately? A startup doesn’t need to implement every framework perfectly, but it should have a coherent answer to each of these questions.
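
One way to make those five areas operational rather than aspirational is to turn them into a sign-off artefact attached to every release. A minimal sketch, with illustrative questions:

```python
# A release-gating checklist built from the five framework areas.
# The questions are illustrative, not a complete standard.

GOVERNANCE_CHECKLIST = {
    "fairness":       "Per-group metrics reviewed? Gaps within agreed bounds?",
    "transparency":   "Model card updated? User-facing explanations in place?",
    "accountability": "Named owner? Appeal and redress path documented?",
    "safety":         "Failure modes tested? Rollback procedure rehearsed?",
    "privacy":        "Data minimised? Consent basis and retention confirmed?",
}

def unanswered(answers: dict) -> list:
    """Return the framework areas a release has not yet signed off."""
    return [area for area in GOVERNANCE_CHECKLIST if not answers.get(area)]

print(unanswered({"fairness": True, "privacy": True}))
# ['transparency', 'accountability', 'safety']
```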

What we’ve learned is that the frameworks aren’t actually that hard to implement if you’re thinking about them from the start. It’s when companies build something without any governance framework and then try to retrofit ethics that things get expensive and often impossible. The startups we’re most excited about are those that say: we’re building a system that will affect people’s lives, so we need to think about fairness and safety and transparency from day one. Here’s our framework, here’s how we’re implementing it, here’s who’s overseeing it. Those companies move faster ultimately because they’re not spending time later dealing with failures they could have prevented.

Bias Auditing: Moving Beyond Benchmarks

Bias auditing sounds technical, but it’s fundamentally about fairness. An AI system that makes better decisions for some groups than others isn’t just ethically problematic—it’s a business problem. It creates liability. It creates reputational risk. It likely violates regulations. Yet many startups are still treating bias auditing as optional. We’re demanding that it become mandatory.

Effective bias auditing requires several elements. First, understanding what outcomes matter. If you’re building a hiring system, what constitutes a fair outcome? Equal hiring rates across demographic groups? Or equal likelihood of being called for an interview if you’re equally qualified? These aren’t technical questions—they’re values questions that need to be answered before you can audit. Second, getting data that lets you measure across groups. If your training data doesn’t include information about protected characteristics, you can’t even see whether your system has bias. Third, testing performance across different groups. Is the accuracy equal? Are the false positive rates equal? Are the outcomes equitable? Fourth, having a process for addressing problems when they’re identified.
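
That values question becomes measurable once it’s answered. The toy sketch below computes both candidate definitions from the hiring example: raw selection rate per group (demographic parity) and selection rate among qualified applicants only (equal opportunity). The data is invented purely to show that the two can diverge.

```python
# Two fairness definitions computed on the same invented hiring data:
# per-group selection rate versus selection rate among the qualified.

applicants = [
    # (group, qualified, selected)
    ("a", True, True), ("a", True, True), ("a", False, False), ("a", False, True),
    ("b", True, True), ("b", True, False), ("b", False, False), ("b", False, False),
]

for group in ("a", "b"):
    rows = [r for r in applicants if r[0] == group]
    selection_rate = sum(r[2] for r in rows) / len(rows)
    qualified = [r for r in rows if r[1]]
    qualified_rate = sum(r[2] for r in qualified) / len(qualified)
    print(f"group {group}: selected {selection_rate:.0%}, "
          f"selected-if-qualified {qualified_rate:.0%}")

# group a: selected 75%, selected-if-qualified 100%
# group b: selected 25%, selected-if-qualified 50%
# This system fails both tests; which one it must pass is a values call.
```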

We’re also starting to see more sophisticated auditing that looks beyond just accuracy to outcome distributions. A system might be equally accurate for different groups but still produce discriminatory outcomes because of how decisions are made. A lending model might approve loans at similar rates for different demographic groups but charge higher interest rates to some, which perpetuates inequality in a different way. Good auditing catches these second-order effects.
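
The lending example is easy to check once you look past the approval decision itself. A toy illustration with invented numbers:

```python
# Second-order check from the lending example: approval rates can match
# while pricing does not. Numbers invented purely for illustration.

approved = {
    "a": [0.049, 0.051, 0.050],  # interest rates on approved loans, group a
    "b": [0.071, 0.069, 0.070],  # same approval *rate*, higher *price*
}

for group, rates in approved.items():
    print(f"group {group}: mean rate {sum(rates) / len(rates):.1%}")
# group a: mean rate 5.0%; group b: mean rate 7.0%. An audit that stops
# at approval rates would miss this entirely.
```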

The companies that are doing this well are building auditing into their development process. They run audits regularly, not just once before launch. They have processes for responding to findings. They’re transparent about what they’ve found. This doesn’t mean their systems are perfect—bias is often impossible to eliminate entirely. But it means they’re identifying and addressing problems rather than hoping nobody notices.

Transparency: The Tool That Builds Trust

If I had to name one thing that separates the AI companies we’re confident in from the ones we’re wary of, it’s transparency. Companies that are honest about what their systems can and can’t do, what data they’re trained on, and how they handle problems build trust. Companies that are secretive or evasive create doubt.

Transparency takes several forms. Technical documentation: can someone understand how your system works, what data it was trained on, how it makes decisions? This doesn’t mean publishing all your proprietary code, but it means providing enough detail that stakeholders can understand whether the system is trustworthy. Model cards have become standard—documents that describe a model’s performance characteristics, known limitations, and use cases where it’s appropriate. We ask every startup to have them.
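
A model card doesn’t need to be elaborate to be useful. The sketch below captures one as structured data, loosely following the spirit of the original Model Cards proposal (Mitchell et al., 2019); every value shown is an illustrative placeholder.

```python
# A minimal model card as structured data. Fields are trimmed for the
# sketch; values are illustrative placeholders, not real results.

from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: str
    training_data: str
    evaluation: str         # datasets used and per-group results
    known_limitations: str
    contact: str

card = ModelCard(
    model_name="credit_screen_v2",
    intended_use="pre-screening of consumer credit applications",
    out_of_scope_uses="employment decisions; any fully automated rejection",
    training_data="loan_applications_v3 (see dataset provenance record)",
    evaluation="AUC 0.81 overall; per-group FPR gap 2.3pp, reviewed quarterly",
    known_limitations="thin-file applicants underrepresented in training data",
    contact="governance@example.com",
)
```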

Transparency with users also matters. If your system is making decisions that affect someone’s life, they should understand why. If you’re using data about them to train a model, they should know it and be able to opt out if they choose. This isn’t just ethical—it’s increasingly required by regulation and expected by sophisticated users. The EU’s AI Act explicitly requires transparency documentation. GDPR requires disclosure of automated decision-making. These requirements are becoming global norms.

Internal transparency matters too. Your board should understand what your AI systems are doing. Your employees should understand your governance framework. There’s a tendency in tech companies to have the technical experts speak an incomprehensible language that prevents oversight. We push back on that. A well-governed company can explain its AI systems to its board in a way that a non-technical person can understand. If you can’t do that, you don’t understand your system well enough to be deploying it.

The Accountability Question

Transparency and fairness matter, but they’re incomplete without accountability. Someone needs to be responsible when things go wrong, and there needs to be a mechanism for affected people to get redress. This is one of the harder problems in AI governance because it’s not always clear who should be responsible. The company that built the system? The company that deployed it? The individual who used the system to make a decision? The person whose data was used in training?

Good governance requires clarity here. A startup should have documented decision-making processes for how accountability works in different failure modes. If the system makes a discriminatory decision that affects someone, how does that person find out? How do they challenge it? Is there a human in the loop who can review the decision? Can the company provide an explanation? Can the decision be appealed? These processes need to exist before deployment, not after problems emerge.
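
These processes are easier to build when every automated decision leaves a record designed for redress. A sketch of what such a record could contain; field names and statuses are illustrative.

```python
# Sketch of a per-decision record that makes redress possible: every
# automated decision is logged with enough context to explain, review,
# and appeal it. Field names and status values are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str
    decision: str
    model_version: str
    explanation: str                     # user-facing reason for the outcome
    appeal_status: str = "not_appealed"  # or: under_review / overturned / upheld
    reviewer: str | None = None          # human who handled the appeal, if any
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    subject_id="applicant-1042",
    decision="declined",
    model_version="credit_screen_v2",
    explanation="debt-to-income ratio above policy threshold",
)

def appeal(record: DecisionRecord, reviewer: str) -> None:
    record.appeal_status = "under_review"
    record.reviewer = reviewer  # a named human now owns the outcome
```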

We also care about how companies handle transparency when things go wrong. If a system exhibits bias or starts making poor decisions, do they acknowledge it or try to hide it? Do they work with affected communities or dismiss concerns? Do they iterate and improve or do they insist the system is working as designed? The companies we trust are those that treat problems as learning opportunities and are willing to change course if evidence suggests they should.

Why This Matters for Returns

It would be reasonable to ask whether this focus on ethics is just virtue signalling or whether it actually matters for investment returns. The honest answer is that it matters enormously. Companies that have strong governance frameworks face lower regulatory and legal risk. They can expand into new markets without rewriting their systems. They build trust with customers and partners. They attract better talent—the best engineers and researchers want to work on systems they believe in. They weather crises better because they’ve built credibility.

Conversely, companies that cut corners on ethics often face expensive problems. They face regulatory action that requires remediation. They face lawsuits from affected parties. They face public backlash that damages their reputation. They have difficulty hiring and retaining talent. Their systems fail in predictable ways that could have been prevented. Over a five-to-ten-year horizon, the ethically built companies outperform those that prioritised speed over responsibility.

This isn’t theoretical. We’ve watched it play out with social media companies, with fintech, with healthcare AI. The winners are those that built properly. The losers are those that hoped governance problems wouldn’t catch up to them. We’ve gotten better at learning from these cycles, and that learning is built into how we evaluate startups now.

Building the Next Generation of Ethical AI

When we launched the Nexatech Ventures fund, we committed to investing in AI companies that are building ethically from day one. This means we’re passing on some companies that have impressive technical capabilities but weak governance. It means we’re investing in companies that might grow more slowly because they’re building robustness alongside capability. It means we’re helping develop frameworks and processes that other investors can adopt.

The exciting thing is that this is becoming mainstream. The companies attracting the best talent, the most partnerships, the most customer interest are increasingly those with strong governance. The tide is turning toward responsible AI not because of regulation or legal pressure alone, but because it’s good business. Companies that build systems they can explain and defend are fundamentally more trustworthy than those that can’t. Customers want that. Partners want that. Investors want that.

For founders building AI startups, the lesson is clear: treat ethics and governance as core, not peripheral. Get good people focused on responsible AI. Implement frameworks that let you test for fairness and understand your systems. Document everything. Be transparent about what you know and what you don’t know. Build mechanisms for accountability and redress. This will slow you down initially, but it will position you to build something that lasts, that scales globally, that you can be genuinely proud of.

The AI industry is maturing. The days when you could build something powerful and worry about consequences later are ending. The future belongs to companies building ethically from the start—companies that understand that trust is more valuable than hype, that responsibility is good business, and that the most powerful AI systems will be those that people actually want to use.

Related reading: How Emotional AI Claims to Read Your Feelings — and Why It Probably Can’t; Developing Ethical Frameworks for AI Implementation; and What is information communication technology ict: A concise guide to ICT basics.



Written by Scott Dylan