OpenAI’s Sam Altman announces Pentagon deal with ‘technical safeguards’

OpenAI CEO Sam Altman announced late Friday that his company has reached an agreement that allows the Department of Defense to use its AI models on the department’s classified network.
This follows a standoff between the department, renamed the Department of War under the Trump administration, and OpenAI rival Anthropic. The Pentagon has pushed AI companies, including Anthropic, to allow their models to be used for “all legitimate purposes,” while Anthropic wants to draw a red line around mass domestic surveillance and fully autonomous weapons.
In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company has “never opposed any military use or attempted to uniquely limit the use of our technology,” but argued that “in several cases, we believe that AI can undermine, rather than protect, democratic values.”
More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position.
After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized Anthropic, calling the company a “nut job,” in a social media post that also ordered federal agencies to stop using the company’s products after a six-month period.
In another post, Defense Secretary Pete Hegseth said Anthropic is trying to “take veto power over the operational decisions of the US military.” Hegseth also said he was designating Anthropic a supply-chain risk: “Effective immediately, no contractor, supplier, or partner that does business with the United States military can do any commercial work with Anthropic.”
On Friday, Anthropic said it “has not received direct communication from the Department of Defense or the White House regarding the status of our negotiations,” but stressed that it will “challenge any designation of procurement risk in court.”
Notably, Altman said in his X post announcing the deal that OpenAI’s new agreement includes protections addressing the same issues that became Anthropic’s flashpoint.
“Two of our most important safety principles are the prohibition of mass domestic surveillance and human responsibility for the use of force, including autonomous weapons systems,” Altman said. “The DoW agrees with these principles; they are expressed in law and policy, and we include them in our agreement.”
Altman said OpenAI will “build technical safeguards to ensure that our models behave properly, which the DoW also wants,” and will embed engineers with the Pentagon “to help with our models and ensure their safety.”
“We have asked the DoW to offer these same terms to all AI companies; in our view, everyone should be willing to accept them,” said Altman. “We have expressed our strong preference that this be resolved through legal and governmental channels and that appropriate agreements be reached.”
Fortune’s Sharon Goldman reports that Altman told OpenAI employees at an all-hands meeting that the government would allow the company to create its own “security stack” to prevent misuse, and that “if the model refuses to do a job, the government won’t force OpenAI to do that job.”
Altman’s post came shortly before news broke that the US and Israeli governments had begun bombing Iran, with Trump calling for the overthrow of the Iranian government.



