OpenAI is sharing more details about its agreement with the Pentagon

By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense was “definitely rushed,” and “the optics don’t look good.”
After talks between Anthropic and the Pentagon ended on Friday, President Donald Trump ordered federal agencies to stop using Anthropic’s technology after a six-month transition period, and Defense Secretary Pete Hegseth said he was designating the AI company as a procurement risk.
Soon after, OpenAI announced that it had reached an agreement for its models to be used in classified environments. With Anthropic saying it had drawn red lines around the use of its technology for fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI had similar red lines, some obvious questions arose: Was OpenAI being honest about its red lines? And why was it able to reach a deal when Anthropic wasn’t?
So as OpenAI executives defended the deal on social media, the company also published a blog post explaining its approach.
Notably, the post identified three areas where OpenAI models cannot be used – mass domestic surveillance, autonomous weapons systems, and “highly automated decision-making (e.g., systems like ‘public debt’).”
The company said that, unlike other AI companies that have “reduced or eliminated their security measures and relied heavily on usage policies as their primary safeguards in national security deployments,” OpenAI’s agreement protects its red lines in an “extended, multi-layered way.”
“We maintain full control of our security stack, deploy via the cloud, have a dedicated OpenAI workforce, and have strong contractual safeguards,” the blog said. “All of this adds up to the strongest protections available under US law.”
The company added, “We don’t know why Anthropic didn’t reach this agreement, and we hope they and other labs will look into it.”
After the post was published, Techdirt’s Mike Masnick argued that the agreement “entirely allows for domestic surveillance,” because it says the collection of private data will be subject to Executive Order 12333 (among a number of other laws). Masnick described the order as “how the NSA hides its domestic surveillance by intercepting communications *outside the US* even if they contain information from/on US people.”
In a LinkedIn post, OpenAI’s head of national security relations Katrina Mulligan said much of the discussion about contract language assumes “the only thing standing between the American people and the use of AI for mass domestic surveillance and autonomous weapons is a single usage-policy provision in a single contract with the Department of Defense.”
“That’s not how any of this works,” Mulligan said, adding, “The architecture matters more than the contract language. […] By limiting our deployment to a cloud API, we can ensure that our models cannot be directly integrated into weapons systems, sensors, or other operational computing hardware.”
Altman also answered questions about the deal on X, where he acknowledged that it was rushed and had triggered a massive backlash against OpenAI (even as Anthropic’s Claude passed OpenAI’s ChatGPT on Apple’s App Store on Saturday). So why do it?
“We really wanted to de-escalate things, and we thought the deal was good,” Altman said. “If we’re right and this leads to a détente between the DoW and the industry, we will be seen as prescient, and as a company that stuck its neck out to help the industry; if not, we will continue to be seen as […] hasty and reckless.”