
Will the Pentagon’s fight with Anthropic scare startups away from defense work?

In just over a week, negotiations over the Pentagon’s use of Anthropic’s Claude have come to a head: the Trump administration designated Anthropic a procurement risk, and the AI company said it will fight that designation in court.

OpenAI, on the other hand, quickly announced its own deal, which created a backlash that saw users ditch ChatGPT and push Anthropic’s Claude to the top of the App Store charts. And at least one OpenAI executive has resigned over concerns that the announcement was made too quickly, without due diligence.

In a recent episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I discussed what this means for other startups looking to work with the federal government, particularly the Pentagon, as Kirsten wondered, “Are we going to see a bit of a change of tune?”

Sean pointed out that this is an unusual situation in many ways, in part because OpenAI and Anthropic make products that “no one can ignore.” And most importantly, this is an argument about “how their technology is or isn’t used to kill people,” so naturally it will attract more scrutiny.

Still, Kirsten argued, this is a situation that should “give any startup pause.”

Read a preview of our interview, edited for length and clarity, below.

Kirsten: I wonder if other startups are starting to look at what happened between the federal government — especially the Pentagon — and Anthropic, that debate and that wrestling match, and [take] pause about whether they want to go after federal dollars. Will we see a bit of a change of tune?


Sean: I wonder about that, too. I think no, to some extent, in the near term, if only because when you really try to think about all the different companies, whether they’re startups or more established Fortune 500s that work with the government and especially with the Department of Defense or the Pentagon, [for] most of them, that work flies under the radar.

General Motors makes defense vehicles for the Army and has done so for a very long time, and it has worked on all-electric and autonomous versions of those vehicles. There are things like that that go on all the time and never hit the zeitgeist. I think the problem that OpenAI and Anthropic ran into over the last week is that these are companies that make products that people use — and more importantly, [that] no one can ignore.

So there’s such a spotlight on them, which naturally highlights their involvement to a degree that I think most other companies that contract with the federal government — and, in particular, with any of the federal government’s warfighting elements — don’t have to deal with.

The only caveat I would add is that a lot of the heat surrounding this discussion between Anthropic, OpenAI, and the Pentagon is focused on how their technology is or isn’t used to kill people, or to build parts of machines that kill people. It’s not just the attention and the familiarity that we have with their products; there’s something more there that I feel is missing when you think of General Motors as a defense contractor, or whatever.

I don’t think we’re going to see, like, Applied Intuition or one of these companies that have been doing dual-use work pull back too much, because I don’t see the same focus on them, and there’s just not really a shared understanding of what that impact would be.

Anthony: This story is very different and specific to these companies and personalities in many ways. I mean, there’s been a lot of really interesting thinking about: What is the role of technology in government? [Of] AI in government? And I think those are all good and important questions to ask and explore.

I also think, though, that this is a curious lens through which to explore some of those things, because Anthropic and OpenAI aren’t really that different in many of the positions they take. It’s not as if one company says, “No, I don’t want to work with the government” and the other says, “Yes, I do.” Or one says, “You can do whatever you want,” while [the other is] saying, “No, I want to have boundaries.” Both, at least publicly, have said, “We want limits on how our AI is used.” It seems Anthropic is digging its heels in about: You can’t change the terms this way.

And then on top of that, there seems to be a human layer here: the CEO of Anthropic and Emil Michael — who many TechCrunch readers may remember from his Uber days, and who is now [chief technology officer for the Department of Defense]. Reportedly, they really don’t like each other.

Sean: Yes, there is a huge personal-feud element here that we should not ignore.

Kirsten: Yes, a little bit. There is, but the effects are a little bigger than that. Again, backtracking a bit, what we’re talking about here is the Pentagon and Anthropic getting into a conflict that Anthropic seems to have lost — although I have to say its tools are still being used a lot by the military. They’re considered important technologies. But OpenAI has stepped in, and this is evolving and will change by the time this episode comes out.

The blowback against OpenAI was significant: we saw a lot of ChatGPT deletions — up 295%, I believe — after OpenAI locked in a deal with the Department of Defense.

To me, all of this signals something really serious and dangerous, namely that the Pentagon wanted to change the terms of an existing contract. And that’s very important, and it should give any startup pause, because the political machine that’s operating right now, especially within the DoD, seems different. This is unusual. Contracts take forever to get through at the government level, and the fact that they want to change those terms is a problem.
