AI Arms Race: Why Anthropic’s Stand Against Military AI Use Matters

The technology industry operates under a particular mythology: that innovation is inherently good, that faster development is better than slower development, that more applications and use cases are always preferable to fewer. This mythology has become so embedded in how we think about technology that questioning it feels almost heretical. Yet some of the most important technology companies are starting to challenge this mythology, and perhaps none more clearly than Anthropic. The company’s explicit policy against military applications of its AI systems, and its broader commitment to developing AI safely and responsibly, represents a significant departure from industry norms. In a sector where venture capital pushes for rapid growth and expansion of use cases, Anthropic has chosen to explicitly constrain its market by refusing military contracts. This decision deserves serious examination, not just for what it says about Anthropic specifically, but for what it reveals about responsible innovation in an era where AI is becoming central to how we organise society, including how we conduct warfare.

Understanding the Military AI Market

Before discussing why Anthropic’s stance matters, it’s useful to understand what’s actually at stake. Military organisations around the world are investing heavily in artificial intelligence applications. These applications range from relatively benign uses—using AI to analyse satellite imagery, to process logistical information, to optimise supply chains—to potentially catastrophic uses like autonomous weapons systems that can make targeting decisions without human input. The global military AI market is projected to reach between $19 billion and $29 billion by 2030, depending on the research methodology. That’s not a niche market. That’s a massive financial opportunity, which is precisely why so many technology companies are pursuing military contracts, and why companies that refuse them are making a notable choice.

The military has always been an early adopter of technology. The internet, GPS, touchscreen technology, and countless other innovations now central to civilian life originated in military research and development programmes. This historical pattern creates an argument that military funding drives innovation that eventually benefits civilians. In some cases, this is certainly true. Yet this argument conveniently ignores that military applications are also where technology first causes large-scale harm. If AI is going to be used in ways that cause suffering, militaries will be among the first to deploy it at scale. The question isn’t whether military AI will be developed—it will be, by every major military power. The question is whether private technology companies should participate in this development, and whether they should do so without constraints.

The Autonomous Weapons Problem

The most concerning military application of AI is autonomous weapons systems—systems capable of selecting and engaging targets without human decision-making in the moment of engagement. Currently, international conventions require ‘meaningful human control’ over weapons systems, but the definition of ‘meaningful’ is contested and weakening. Many military organisations are actively developing systems where humans set parameters and targets but AI systems make the actual decision to fire. The argument in favour of such systems is that they’re faster and potentially more precise than human decision-making. The arguments against are more compelling: they remove human moral agency from life-and-death decisions, they can make errors that humans would catch, they create new types of warfare that may violate international humanitarian law, and they make escalation dynamics unpredictable.

Consider what happens when multiple military powers deploy autonomous weapons. The incentive for humans to maintain control over targeting decisions decreases if your adversary is deploying fully autonomous systems that can act faster than human-controlled systems. This creates a classic arms race dynamic where the pressure to automate and remove human decision-making accelerates regardless of whether anyone actually thinks this is a good idea. The result could be warfare where decisions about whether to kill are made by algorithms, where escalation happens too fast for human diplomacy to intervene, where civilian casualties occur and no one is clearly responsible because the decision was made by an algorithm designed by engineers thousands of miles away. This isn’t science fiction—it’s a realistic extrapolation of current development trends if the technology companies developing these systems don’t impose constraints on their own work.

Why Technology Companies Matter in This Context

Some argue that if technology companies refuse military contracts, governments will just develop the technology themselves or fund smaller companies willing to do the work. This is true to some extent—governments have access to significant resources and can fund research directly. Yet this misses the point of why technology company involvement matters. The most capable AI systems are developed by private companies like Anthropic, OpenAI, Google DeepMind, Meta, and others. These companies have the best researchers, the largest training datasets, the most sophisticated computing infrastructure, and the most advanced capabilities. If these companies refuse military contracts, military AI development slows and its capabilities are constrained. Refusal doesn’t stop that development, but it does limit it in ways that matter.

Moreover, the choices made by leading companies shape industry norms. When Anthropic explicitly refuses military contracts, it’s not just about Anthropic’s contribution to military capabilities. It’s a statement to the broader industry: you can be successful and profitable without selling to militaries. It provides cover for other companies to make similar decisions. It changes what seems normal and expected. If every major AI company refused military contracts, the entire landscape would shift. Governments would have to do more of their own development, and private-sector talent would face different career choices. The fact that this seems unlikely to happen is precisely why the companies that do make this choice deserve attention—they’re swimming against powerful currents of financial incentive and industry norms.

Anthropic’s Specific Approach to Responsible AI Development

Anthropic’s stance against military use is part of a broader commitment to responsible AI development. The company was founded explicitly with the goal of developing AI systems that are more interpretable, safer, and aligned with human values. This isn’t marketing language—it’s reflected in the company’s research priorities, which focus heavily on AI safety, interpretability, and understanding how to make AI systems more transparent and controllable. The company’s constitution-based approach to training AI systems, where models are trained to follow a set of principles and values, represents a genuine attempt to build AI systems that can explain their reasoning and can be better understood by humans.
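
To make the constitution-based approach more concrete, here is a minimal sketch of the general critique-and-revise pattern described in Anthropic’s published Constitutional AI research: a draft response is critiqued against each written principle, then rewritten to comply, and the resulting pairs can later feed fine-tuning. The `generate` function, the example principles, and the output format below are placeholders of my own devising, not Anthropic’s actual API or training pipeline.

```python
# Minimal sketch of a constitution-guided critique-and-revise loop.
# 'generate' is a hypothetical stand-in for any language-model call;
# it is NOT Anthropic's API, and this is not the company's real pipeline.

CONSTITUTION = [
    "Choose the response that is most helpful and honest.",
    "Avoid responses that could facilitate violence or weapons development.",
]

def generate(prompt: str) -> str:
    """Placeholder model call; returns a canned string so the sketch runs."""
    return f"[model output for: {prompt[:60]}...]"

def critique_and_revise(user_prompt: str) -> dict:
    """Draft a response, then critique and rewrite it against each principle."""
    draft = generate(user_prompt)
    revision = draft
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\nResponse: {revision}\n"
            "Identify any way the response violates the principle."
        )
        revision = generate(
            f"Principle: {principle}\nCritique: {critique}\nResponse: {revision}\n"
            "Rewrite the response so it complies with the principle."
        )
    # In training, (prompt, revision) pairs like this would become
    # fine-tuning or preference data; here we simply return them.
    return {"prompt": user_prompt, "draft": draft, "revision": revision}

if __name__ == "__main__":
    print(critique_and_revise("Summarise the constraints on autonomous weapons."))
```

The value of writing the principles down is that the constraints become inspectable: anyone can read the constitution and ask whether a given output honours it.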

What’s notable is that this commitment to responsible development has clear costs for Anthropic. The company could be making more money with fewer constraints. They could be selling AI systems to any customer willing to pay, including military organisations. They could be developing more dangerous capabilities that would be marketable to more customers. Instead, they’ve chosen to prioritise responsible development over maximum profit. This isn’t because it’s profitable—in the short term, it clearly isn’t. It’s because the company’s founders and leadership genuinely believe that how AI is developed matters for humanity’s future. You can disagree with their specific approach, but you should acknowledge that they’re making real sacrifices based on their values.

Comparing Approaches: Who Else Is Taking Stands?

Anthropic is not alone in taking explicit stances on military AI, but most technology companies are much less clear about their commitments. Google has a stated policy against military applications of AI, though the company remains involved in other government contracts. OpenAI has policies against military use, though enforcement and clarity remain questions. Meta has been less clear. Most smaller AI companies have no explicit policies, and many are actively pursuing military contracts. What makes Anthropic somewhat unusual is the clarity and the willingness to sacrifice revenue for principle. Many companies want to appear ethical without actually constraining their revenue streams.

Comparing across countries and companies reveals how much variation there is in approaches to military AI. China and Russia are investing heavily in military AI development with minimal participation from private companies, because state control over research is stronger in those countries. The US has more private sector involvement, which creates both benefits (more innovation) and risks (more profit motive driving development). The UK and EU are trying to chart middle paths, with regulations attempting to constrain the most dangerous uses whilst allowing other applications. These different approaches will likely lead to different outcomes in terms of how military AI develops, which is precisely why individual companies’ choices matter. They’re not just about that company; they’re about what becomes normalised in the sector.

The International Treaty Question

There are ongoing discussions at the United Nations and other international forums about creating treaties to regulate autonomous weapons. The Campaign to Stop Killer Robots, a coalition of NGOs and some governments, has been advocating for a comprehensive ban on fully autonomous weapons. Discussions at the UN have been slow and contentious, with military powers reluctant to constrain their own potential advantages in developing autonomous systems. However, there is growing international consensus that some constraints are necessary. The question is whether those constraints will be meaningful and enforceable, or whether they’ll be honoured more in the breach than in the observance.

This is where responsible technology companies matter. If private sector AI companies decide they won’t develop certain capabilities, they effectively constrain what’s technically possible, which makes it harder for any military to deploy those capabilities. If treaties try to regulate autonomous weapons whilst the underlying technology continues to improve at the same pace, the treaties become unenforceable—militaries will find ways to redefine their systems as compliant even if they’re effectively autonomous. But if the technology companies developing AI refuse to build the most dangerous capabilities, they create technical constraints that make evasion harder. Anthropic’s stance is a bet that responsibility from the private sector can be more effective at constraining dangerous military AI than treaties and regulation alone.

The Market Pressure Problem

One of the challenges with relying on corporate responsibility to constrain military AI development is that market pressures point in the opposite direction. Military contracts are lucrative, available, and don’t require the same kind of consumer marketing that other AI applications do. There’s no public scrutiny the way there is with social media algorithms or hiring systems. Military budgets are large and willing to pay premium prices for advanced technology. From a purely financial perspective, refusing military contracts makes no sense. This is precisely why so few companies do it.

For Anthropic to maintain this stance, the company needs to remain profitable through other revenue streams. This means succeeding in the commercial AI market, in government advisory roles that don’t involve military applications, and in other areas. It means being willing to accept slower growth and smaller profit margins than competitors who take military contracts. It means potentially losing talent to competitors who offer higher compensation based on military contracts. It means tolerating criticism from those who argue that refusing military contracts is irresponsible because it doesn’t stop militaries from developing AI—it just means they’ll develop it with less sophisticated private sector involvement. These are real costs, which is why Anthropic’s commitment to this position is worth taking seriously.

What This Means for AI Safety as a Field

Anthropic’s approach also has implications for how we think about AI safety as a field. AI safety is often discussed abstractly—how do we ensure AI systems are aligned with human values, how do we make them interpretable, how do we prevent unintended consequences. But AI safety has an unavoidably political dimension. The values that AI systems are aligned with, whose values they’re aligned with, who gets to decide—these are political questions. A military AI system that’s perfectly safe from a technical perspective could still be deeply harmful if it enables warfare that violates international norms or causes civilian casualties. So when Anthropic talks about responsible AI development, part of what that means is taking positions on how AI should and shouldn’t be used, not just how to make AI systems more technically safe.

This is uncomfortable for many people in the AI safety field, who would prefer to maintain neutrality on political questions and focus purely on technical safety. But neutrality in the face of major decisions about how AI is deployed is actually a political position—it’s the position of accepting the status quo and whatever direction profit motives push the technology. Anthropic’s position implicitly argues that AI safety requires taking positions on use cases and policies, not just technical improvements. Whether you agree with their specific positions, this is a more honest approach to AI safety than pretending technical safety can be completely separated from political questions about how technology is deployed.

The Problem of Measurement and Enforcement

A legitimate question about Anthropic’s military AI policy is how it’s enforced. If a customer claims they’re using the system for a non-military purpose, how does Anthropic verify that? Military applications of AI often involve significant civilian applications as well—satellite image analysis can serve both military and civilian purposes. Autonomous systems can be used for civilian transportation or military logistics. There’s always a question about whether explicit policies actually constrain behaviour or just create plausible deniability. Companies that refuse to sell to militaries directly might still sell to companies that sell to militaries, through multiple degrees of separation.

Anthropic addresses this partly through contractual terms that prohibit military use and partly through having some visibility into how their systems are used. But this is genuinely difficult to enforce perfectly. This doesn’t make the policy pointless—even imperfect constraints are better than no constraints. But it’s worth acknowledging that any company’s policy against military use is limited by the difficulty of verifying compliance across complex supply chains and uses. What matters is whether the company is genuinely trying to enforce the policy, which appears to be the case with Anthropic, rather than using it as a rhetorical cover whilst actually enabling military applications.
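
As a rough illustration of why enforcement is hard, here is a toy sketch, assuming a provider screens request traffic against a list of prohibited-use categories. The category names and trigger phrases are entirely hypothetical and far cruder than any real system. The point is that automated screening can only surface candidates for human review; it cannot verify what a customer, or a customer’s customer, ultimately does with an output.

```python
# Toy sketch of usage-policy screening. Categories, phrases, and the data
# model are illustrative assumptions, not Anthropic's actual enforcement.

from dataclasses import dataclass

PROHIBITED_CATEGORIES = {
    "weapons_targeting": ["target acquisition", "fire control", "engagement authorisation"],
    "battlefield_planning": ["strike package", "kill chain"],
}

@dataclass
class Request:
    account_id: str
    declared_use_case: str   # what the customer says they are doing
    prompt_text: str         # what the traffic actually contains

def screen(request: Request) -> list[str]:
    """Return the prohibited-use categories a request appears to match."""
    text = request.prompt_text.lower()
    return [
        category
        for category, phrases in PROHIBITED_CATEGORIES.items()
        if any(phrase in text for phrase in phrases)
    ]

def review_queue(requests: list[Request]) -> dict[str, list[str]]:
    """Group flagged accounts for human review; automation only surfaces candidates."""
    flagged: dict[str, list[str]] = {}
    for req in requests:
        hits = screen(req)
        if hits:
            flagged.setdefault(req.account_id, []).extend(hits)
    return flagged

if __name__ == "__main__":
    sample = [Request("acct-1", "logistics optimisation", "Optimise the fire control loop timing.")]
    print(review_queue(sample))  # {'acct-1': ['weapons_targeting']}
```

Note how the declared use case and the actual traffic can diverge; that gap is exactly where contractual terms and human review have to carry the weight.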

Lessons for Nexatech Ventures’ Investment Approach

From my perspective as someone investing in AI companies, Anthropic’s approach is instructive. As an investor, I’m increasingly convinced that the companies that will be most valuable long-term are those that develop AI responsibly and earn genuine trust, rather than those that maximise short-term revenue at the expense of systemic risks. Investors like us at Nexatech have a choice about which companies we support, which kind of development we enable. We can invest in companies pushing the most aggressive capabilities regardless of consequences, or we can invest in companies that take responsible development seriously. We can use our capital to signal which approach we think matters for the future.

This doesn’t mean only investing in companies with explicit military refusal policies—many responsible AI companies might not take such strong positions. But it does mean asking questions about how companies are thinking through the consequences of their work, what constraints they’re building into their systems, how they’re thinking about societal impact. It means being willing to accept lower short-term returns for companies taking principled stands on responsible development. It means recognising that the technology companies that win long-term will be those that earn trust, and trust is built through acting according to your stated values, not just stating values as marketing.

The Broader Implication: Can Tech Companies Be Trusted with Power?

Anthropic’s stance raises a bigger question about whether we should trust technology companies to self-regulate on critical issues like military AI. The fact that Anthropic can refuse military contracts depends on the company’s structure and funding model. As a company backed by venture capital from investors like us, they’re not ultimately beholden to shareholders demanding maximum profit at all costs (at least not yet—this might change if the company becomes public). This makes it possible for the company to take principled stands. But most technology companies are either publicly traded or venture-backed with significant return expectations, which creates pressure to maximise revenue.

The broader answer, I think, is that we shouldn’t rely entirely on corporate responsibility to address systemic problems like military AI development. We also need regulation. We need international treaties that constrain autonomous weapons. We need governments taking stronger positions on how military AI should be developed and used. We need transparency requirements that let us see how AI systems are being deployed. Corporate responsibility is important and worth supporting, but it’s not sufficient. Companies like Anthropic are making responsible choices partly because their founders and leadership genuinely care about AI safety, but also because the broader ecosystem hasn’t yet created overwhelming pressure to abandon those principles. As AI becomes more central to military and national security, that pressure will likely increase, and we’ll need more than corporate responsibility to resist it.

Looking at the Competitive Dynamics

One concern about Anthropic’s military refusal policy is that it might put them at a competitive disadvantage relative to companies willing to take military contracts. In some sense, this is certainly true—they’re leaving money on the table. However, there’s also a case that refusing military contracts is actually a competitive advantage in other markets. Governments increasingly care about AI safety and responsible development. Companies demonstrating genuine commitment to these principles might be preferred suppliers for non-military government applications. Similarly, consumers and companies increasingly care about ethics and responsibility in technology. Anthropic’s stance provides a genuine differentiator in the market.

Moreover, there’s a reputational dimension. Anthropic can recruit and retain the best AI safety researchers precisely because the company is willing to take principled stands on how AI should be developed. Many of the most talented people in AI want to work on problems they believe matter and on teams they believe are doing this work responsibly. A company known for cutting corners and willing to build dangerous capabilities might face more difficulty attracting top talent over time. This isn’t deterministic—plenty of excellent people work at companies taking less principled stands. But there’s a genuine competitive advantage to being the company known for thinking seriously about responsibility.

Historical Parallels and Lessons

History offers some useful parallels and lessons about technology companies and military applications. After the development of nuclear weapons, many of the scientists involved recognised the dangers and advocated for international controls. Some of the founding figures in computing, like Alan Turing, were deeply concerned about the implications of artificial intelligence. More recently, we’ve seen Google researchers quit over military contracts, and employees at various tech companies organise against military applications. These historical examples suggest that technology professionals increasingly recognise that how their work is deployed matters, and they’re willing to push back against applications they find ethically problematic.

This creates a potential ally in the fight against irresponsible military AI: the technology professionals themselves. If enough people working in AI refuse to work on military applications, or only accept such work under strict constraints, it becomes harder for any company or government to pursue the most aggressive military AI development. Anthropic’s refusal of military applications, combined with its commitment to recruiting and retaining people who care about responsible development, creates a virtuous cycle: the company attracts people with principled objections to military AI and holds little appeal for those who want to build such applications.

What Needs to Happen Beyond Individual Company Responsibility

Whilst celebrating Anthropic’s stance, it’s important to be realistic about what it can achieve. No single company’s military refusal policy can constrain international military competition around AI. If Anthropic refuses military contracts, China, Russia, and the United States will still develop military AI, using whatever talent and resources they can access. European AI companies might also pursue military applications. Individual corporate responsibility, however principled, can’t solve problems that require international coordination. This is why regulation and international treaties matter.

What we need is a multi-layered approach: corporate responsibility from technology companies, regulation constraining the most dangerous applications, international treaties that all major military powers agree to respect, transparency requirements that let us see how AI is being deployed, and support for researchers working on AI safety and governance. No single approach is sufficient. Anthropic’s stance is valuable as one part of this broader ecosystem, but it can’t be the solution on its own.

My Own Perspective on Technology and Responsibility

Having spent my career in technology, I’m deeply aware of how powerful technology can be for good, and how easily it can be misused if we’re not intentional about how we develop and deploy it. I’ve also spent significant time working on issues like prison reform and mental health, where I’ve seen how technology affects society in ways that technology professionals don’t always consider. This has reinforced my conviction that technology development requires moral seriousness and intentional thinking about consequences, not just technical brilliance and innovation for its own sake. Anthropic’s approach resonates with me because it takes this moral dimension seriously.

I want to see more technology companies, more investors, more technology professionals take similarly serious positions on responsible development. Not necessarily identical to Anthropic’s—different companies might take different stands based on their analysis of risks and opportunities. But similar in spirit: genuine commitment to thinking through implications of technology, willingness to constrain profit for principle, recognition that how technology is developed and deployed matters for human flourishing. This is the future I’m investing in, and it’s why Anthropic’s stance, whatever its limitations, represents the kind of serious engagement with responsibility that I think the AI industry needs.

The Call Forward

As artificial intelligence becomes increasingly central to military capabilities, economic competition, and human welfare, the choices made by technology companies about how to develop this technology become increasingly consequential. Anthropic’s explicit refusal of military applications, combined with its broader commitment to responsible AI development, represents an important signal about what’s possible in the industry. The question now is whether other companies will follow this lead, whether investors will support companies taking principled stands on responsibility, whether governments will create regulatory environments that reward responsible development rather than punishing it. The stakes are high enough to warrant serious engagement with these questions, not just at the level of corporate policy but at the level of how we collectively decide AI should be developed and deployed.

Related reading: How Emotional AI Claims to Read Your Feelings — and Why It Probably Can’t, Developing Ethical Frameworks for AI Implementation and What is information communication technology ict: A concise guide to ICT basics.



Written by
Scott Dylan