
Why AI Ethics Cannot Be an Afterthought: Lessons from 2025


The Year Ethics Became Unavoidable

2025 was the year the technology industry could no longer pretend that AI ethics was optional. Throughout 2024 and into 2025, AI systems built without careful ethical consideration created problems that captured public attention. Biased hiring algorithms drew discrimination lawsuits. Generative AI trained on copyrighted material drew legal challenges from creators. Deepfakes caused real confusion and harm. Automated content moderation censored legitimate speech. Facial recognition systems built with insufficient attention to accuracy disparities led to wrongful arrests. The pattern was clear: when companies cut corners on ethics, people were harmed. The regulatory response followed. By the end of 2025, it was obvious: ethical development is not optional. It’s foundational. Companies that haven’t made it central to their processes are going to face consequences.

The problem is that ethics-as-afterthought is deeply baked into technology industry culture. The motto ‘move fast and break things’ is famous for a reason: it’s how the industry has operated. Ethics gets added later, if at all, under pressure from regulators or the public. This works until the thing you break is people’s lives. Then it’s no longer just a business problem. It’s a human problem, and regulators get involved. Companies that operated with ethics as an afterthought are now facing fines, legal liability, reputational damage, and loss of customer trust. The lesson is costly but clear: ethics needs to be built in from the start, not bolted on later.

The Bias Problem Reaching Critical Mass

2025 saw increased documentation of, and enforcement action against, bias in AI systems. Hiring algorithms that discriminated against women or minorities, many of them trained on biased historical data, continued to cause harm despite being known problems for years. Bail and sentencing algorithms reflected and amplified racial disparities in criminal justice, producing higher bail recommendations for defendants of colour. Healthcare algorithms underestimated the health needs of Black patients, leading to inferior care. Credit scoring algorithms discriminated on the basis of protected characteristics. All of these problems existed previously. But in 2025 there was less tolerance for them. Regulators took action. Civil rights organisations launched litigation. News coverage increased. The industry could no longer claim ignorance.

The technical solutions are known: testing for bias before deployment; examining training data for representation and balance; designing systems that can explain their decisions so bias can be detected; involving affected communities in development; and having diverse teams design and review systems. These aren’t mysterious solutions. They’re well-established practices. The problem is that they cost time and money. They slow development. They require expertise and process. Companies optimising for speed and cost minimisation don’t do them, and they are now facing the consequences. The principle that bias in AI is unacceptable is becoming the baseline. The question now is whether it will translate into actual practice across the industry or remain rhetoric while companies continue shipping biased systems.
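To make the first of these practices concrete, here is a minimal sketch of a pre-deployment bias check in Python, using the common ‘four-fifths’ selection-rate heuristic. The groups, the data, and the threshold are illustrative assumptions; a real audit would examine several fairness metrics on held-out evaluation data, not a single ratio.

```python
# Minimal pre-deployment bias check: compare selection rates across groups
# and flag violations of the "four-fifths" rule of thumb.
# The outcome data below is fabricated for illustration.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs, where selected is 0 or 1."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, flag in records:
        totals[group] += 1
        selected[group] += flag
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical model decisions on an evaluation set.
outcomes = ([("group_a", 1)] * 60 + [("group_a", 0)] * 40
            + [("group_b", 1)] * 35 + [("group_b", 0)] * 65)

rates = selection_rates(outcomes)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:  # the four-fifths threshold
    print("FLAG: selection-rate disparity is large enough to warrant review")
```

Run on these fabricated numbers, the ratio comes out at roughly 0.58, well under the 0.8 threshold: exactly the kind of result that should block a release until someone investigates.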

Data Privacy and Training Data Issues

Generative AI systems are trained on massive datasets scraped from the internet. These datasets often include copyrighted material, personal information, and data collected without consent. Lawsuits from artists and writers against generative AI companies claimed their work was used without permission to train systems that could displace them. These lawsuits were advancing through the courts in 2025. Meanwhile, regulators were examining whether training data practices complied with data protection laws. EU regulators in particular were aggressive in investigating whether training systems on EU data without consent violated the GDPR. The issue was clear: if you train an AI system on people’s personal information, health data, or other sensitive information without their consent, you’re violating privacy expectations.

The technical and legal landscape is still being settled. Companies argue that using data for training is fair use because the trained model doesn’t reproduce the original data. Rights holders argue that any use without consent violates their rights. Regulators argue that processing personal data without consent violates privacy law. There’s no consensus yet on how these competing interests should be balanced. But the trend is toward requiring more consent, more transparency about what data is being used, and more restrictions on using certain categories of data. Companies that trained systems without carefully considering data provenance and consent are facing pressure and potential liability. The lesson is that data matters: you can’t build ethical AI systems without ethical data practices.
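In engineering terms, ‘ethical data practices’ can mean recording provenance and consent alongside every document and filtering on that metadata before training. The sketch below is a minimal illustration of the idea; the field names, licence labels, and eligibility rule are assumptions made for the example, not a description of any company’s actual pipeline.

```python
# Sketch of consent-aware dataset filtering. Fields and licence names are
# illustrative assumptions about how provenance might be recorded.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source_url: str
    licence: str              # e.g. "cc-by", "proprietary", "unknown"
    consent_obtained: bool
    contains_personal_data: bool

ALLOWED_LICENCES = {"cc-by", "cc0", "licensed-for-training"}

def eligible_for_training(doc: Document) -> bool:
    """Require a permissive licence or explicit consent, and exclude
    personal data unless consent covers it."""
    if doc.contains_personal_data and not doc.consent_obtained:
        return False
    return doc.licence in ALLOWED_LICENCES or doc.consent_obtained

corpus = [
    Document("...", "https://example.org/a", "cc-by",
             consent_obtained=False, contains_personal_data=False),
    Document("...", "https://example.org/b", "unknown",
             consent_obtained=False, contains_personal_data=True),
]
training_set = [d for d in corpus if eligible_for_training(d)]
print(len(training_set))  # 1: the unlicensed document with personal data is dropped
```

The point is not this particular rule but that eligibility is decided from recorded metadata, which also makes the decision auditable after the fact.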

Deepfakes and Synthetic Media Harms

Deepfake technology advanced substantially in 2024 and 2025. Creating convincing synthetic media became accessible to far more people. This created obvious harms: non-consensual deepfake pornography of real people, particularly women; fraudulent deepfakes used to manipulate markets or incite violence; and misinformation campaigns built on synthetic media. The harms were real and affected real people. The response from AI companies was mixed. Some worked on detection technology to identify deepfakes. Some imposed terms of service restricting the creation of harmful deepfakes. But the fundamental problem remained: powerful synthetic media generation technology existed and was accessible to anyone willing to download it. The technology doesn’t distinguish between ethical and unethical uses. It’s a tool, and bad actors use it harmfully.

The policy response in 2025 was to begin restricting access to deepfake-capable systems and imposing legal liability for harmful uses. Some jurisdictions made it illegal to create or distribute non-consensual deepfakes, particularly sexual deepfakes. Some companies restricted access to their most powerful models, requiring authentication and imposing use restrictions. The approach recognises that powerful technology can cause harm and that some responsibility lies with its makers. The lesson is that when you build technology that can easily be misused to harm people, you have a responsibility to consider that. You can’t just build, release, and claim you’re not responsible for misuse. Technology with high potential for specific harms requires careful thought about how to minimise those harms.

Environmental Costs Becoming Visible

A less discussed but important ethical issue around AI is its environmental impact. Training large language models requires enormous computational resources, which means electricity, often generated from fossil fuels, and therefore carbon emissions. The environmental cost of generative AI has become more visible as models have become more powerful. Estimates of the carbon footprint of training large models vary, but they’re substantial. If your AI system contributes to climate change, that’s a real cost that needs to be accounted for. In 2025, companies were starting to report on the environmental impacts of their AI systems. Some were beginning to explore more efficient training methods, to offset carbon impacts, and to transition to renewable energy for data centres. But this remained a secondary consideration for most companies.
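To see why the footprint is hard to dismiss, here is a back-of-envelope sketch of training-run emissions. Every figure in it (accelerator count, power draw, run length, data-centre overhead, grid intensity) is an assumed placeholder; real numbers vary widely between runs and are rarely disclosed.

```python
# Back-of-envelope estimate of training emissions. Every number here is an
# illustrative assumption, not a measurement of any real model or data centre.
gpus = 1_000              # accelerators used for the run
power_kw = 0.7            # average draw per accelerator, in kW
hours = 30 * 24           # a 30-day training run
pue = 1.2                 # data-centre overhead (power usage effectiveness)
grid_kg_per_kwh = 0.4     # grid carbon intensity, kg CO2 per kWh

energy_kwh = gpus * power_kw * hours * pue
emissions_tonnes = energy_kwh * grid_kg_per_kwh / 1_000

print(f"{energy_kwh:,.0f} kWh -> {emissions_tonnes:,.0f} tonnes CO2")
# 604,800 kWh -> 242 tonnes CO2. On a low-carbon grid (0.05 kg/kWh),
# the same run drops to roughly 30 tonnes.
```

Even on these modest assumptions the run emits hundreds of tonnes of CO2, and the same arithmetic shows the leverage of the mitigations mentioned above: changing the grid intensity changes the result by nearly an order of magnitude.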

The lesson is that AI ethics includes environmental ethics. If your AI system is contributing to climate change, that’s an ethical problem. Addressing it means building more efficient systems, using renewable energy, offsetting carbon, or some combination of the three. It requires thinking about the environmental cost of the technology, not just the benefits. Most AI companies haven’t yet made this a central consideration. But as environmental awareness increases and climate impacts become more obvious, it will become harder to ignore. Companies that build efficiency and environmental responsibility into their systems from the start will have an advantage as environmental costs become factors in purchasing decisions.

The Regulatory Response Hardening

By the end of 2025, the regulatory response to AI ethics failures was hardening. The EU’s AI Act was being implemented with real enforcement mechanisms. US regulators were taking action against companies for discriminatory AI. UK regulators were issuing guidance and taking enforcement action against bad practices. Other countries were developing their own frameworks. The pattern was clear: governments were not waiting for industry self-regulation to address ethics problems. They were imposing requirements. Companies were being required to assess AI system risks, to test for bias, to maintain documentation, and to be able to explain their systems’ decisions. Non-compliance resulted in significant fines. The age of light-touch self-regulation was ending.

This is actually good news for companies that take ethics seriously. If ethics becomes a regulated requirement, it creates a level playing field: companies that were already doing the right thing are no longer disadvantaged by competitors cutting corners. The barrier to entry increases, since you need ethics expertise, testing infrastructure, and compliance processes, but the cost of non-compliance increases too. This should, in theory, drive industry-wide adoption of ethical practices. The question is whether companies will implement these requirements in spirit or just tick boxes. Implementation matters. A company that performs bias testing but doesn’t actually fix the bias it finds hasn’t solved anything. Enforcement will be necessary to ensure compliance is meaningful.

Copyright and Creator Rights

The issue of generative AI trained on creators’ work without compensation grew increasingly contentious in 2025. Artists, writers, musicians, and other creators objected to their work being used to train systems that could displace their work or their income. Major lawsuits were filed. Legislation was proposed to require consent and compensation for the use of creators’ work in training. The ethical issue is straightforward: if your AI system’s capabilities come from millions of pieces of other people’s creative work, those people deserve compensation. Using their work without permission or compensation is theft, even if the legal boundaries are still being defined.

Some companies responded by licensing data from creators. Some by building systems trained exclusively on data they created. Some by developing mechanisms to identify and compensate creators whose work was used. Others continued using data without compensation while the legal landscape remained unsettled. But the trajectory was clear: creators will be compensated for their work used in training. This is not optional. The question is whether it happens through litigation and regulation, forcing expensive retrofits, or whether companies get ahead of it and build consent and compensation into their systems from the start. The companies that do the latter will avoid liability and maintain better relationships with creator communities.

AI Ethics as Competitive Advantage

One of the most interesting shifts in 2025 was the recognition that AI ethics can be a competitive advantage rather than a cost. Companies with a demonstrated commitment to ethical AI development attract talent, customers, and investment. Government agencies and enterprises increasingly want to work with AI vendors that have strong ethical practices. Consumers increasingly prefer products from ethically responsible companies. The companies that positioned themselves as leaders in responsible AI development found that it differentiated them positively. This matters because it changes the economics. If ethical development is a cost with no benefit, companies minimise it. If it is a source of competitive advantage, companies invest in it.

At Nexatech, we explicitly invest in companies with strong ethical practices and governance. We believe that as regulation tightens and stakeholders increasingly care about responsible AI, companies with strong ethics built in will outcompete companies that cut corners. We’ve seen evidence supporting this thesis. Ethical practices often correlate with better engineering practices, better team dynamics, better long-term thinking. Companies that care about responsible AI often care about building durable, high-quality systems. The companies that cut ethical corners often cut other corners too. The ethical frontier is often the quality frontier.

Ethics-by-Design as Standard Practice

The concept of ethics-by-design, building ethical considerations into systems from the start rather than bolting them on later, is becoming the standard expectation heading into 2026. Leading companies are integrating ethics into their development processes. They’re building bias testing into their testing infrastructure. They’re involving ethicists in product design. They’re documenting system impacts and limitations. They’re training developers in responsible AI practices. They’re building governance structures that consider ethical implications. This is becoming table stakes for serious AI companies. Companies not doing this are increasingly seen as behind.
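As one concrete reading of ‘building bias testing into testing infrastructure’, here is a minimal sketch of a fairness check written as an ordinary pytest test, so a regression in parity fails the build like any other bug. The loader function, the groups, and the 0.8 threshold are illustrative assumptions, not a recommended standard.

```python
# Sketch of a fairness gate in a standard test suite (run with pytest).
# `load_eval_predictions` stands in for pulling model decisions on a
# held-out audit set; a real pipeline would track several metrics.

def load_eval_predictions():
    # Fabricated decisions (1 = selected) for two groups of applicants.
    return {"group_a": [1] * 55 + [0] * 45,
            "group_b": [1] * 48 + [0] * 52}

def test_selection_rate_parity():
    preds = load_eval_predictions()
    rates = {g: sum(v) / len(v) for g, v in preds.items()}
    ratio = min(rates.values()) / max(rates.values())
    # Fail the build if disparity exceeds the configured tolerance.
    assert ratio >= 0.8, f"disparate impact ratio {ratio:.2f} below threshold"
```

Because it runs with every build, a check like this catches drift introduced by retraining, not just problems visible at the initial launch.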

Ethics-by-design requires investment: time, money, expertise, process. It slows development slightly. It requires people trained in AI ethics—a speciality that didn’t exist ten years ago and is still in short supply. It requires interdisciplinary teams that include not just engineers but people with ethical expertise, social science expertise, legal expertise. It requires building ethical considerations into requirements and design rather than trying to retrofit them later. But the cost of not doing it is higher. Regulatory fines, lawsuits, reputational damage, loss of customer trust—these are expensive. Ethics-by-design is the economically rational choice.

Nexatech’s Ethical Investment Framework

At Nexatech, we’ve developed an explicit framework for evaluating the ethical practices of AI companies and funds we consider investing in. We assess companies on: governance structure for ethical decision-making; diversity of teams building systems; transparency about what systems do and limitations; processes for identifying and mitigating bias and other ethical problems; impact on affected communities; data practices and consent; environmental impact; legal compliance with emerging regulations; and commitment to ongoing improvement rather than claiming to have solved ethics. We don’t invest in companies that treat ethics as rhetorical rather than operational. We do invest in companies that are genuinely working to build responsible AI systems.

This framework serves multiple purposes. It disciplines our investment process—we’re not funding companies that will later face regulatory problems or reputational damage due to ethical failures. It signals to the market that ethical practices are important to sophisticated investors. It provides benchmarks that other investors can use. It creates pressure for companies to demonstrate ethical practices. We believe that companies with strong ethical practices will outperform over long timeframes. That’s the bet we’re making. And the evidence from 2025 suggests that bet is increasingly justified.

The Path Forward

The lessons from 2025 are clear. Ethical development of AI is not optional. It’s foundational. It’s increasingly regulated and enforced. It’s becoming a competitive advantage. It requires integration into development processes from the start. Companies that continue treating ethics as an afterthought will face consequences. Companies that embrace ethics-by-design will be competitive. The technology industry is in the process of transitioning from ‘ethics is optional’ to ‘ethics is required.’ That transition is painful but necessary. The alternative—powerful technology being built without careful consideration of ethical implications—leads to harm. Better to build responsibly from the start.

For founders, investors, and employees in AI companies: push for ethical practices. Make it a condition of your employment or investment that ethics is taken seriously. For regulators: clarify requirements so companies know what’s expected, and enforce against violations. For customers and users: demand ethical practices from the companies you work with, and use your purchasing power to reward responsible ones. For the industry: make ethics a cultural value, not just a compliance checkbox. Build it into incentive structures. Celebrate companies doing it right. The stakes are high. The technology is powerful. It affects people’s lives. We need to get this right.



Written by
Scott Dylan