A new court filing reveals that the Pentagon told Anthropic the two sides had almost reached an agreement — a week after Trump announced the supposed breakup.

Anthropic filed two affidavits in a California federal court late Friday afternoon, disputing the Pentagon’s claim that the AI company poses an “unacceptable risk to national security” and saying the government’s case rests on technical misunderstandings and claims that were never raised during months of negotiations that preceded the dispute.
The affidavits accompany Anthropic’s latest filing in its lawsuit against the Department of Defense and come ahead of a hearing next Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute dates back to late February, when President Trump and Defense Secretary Pete Hegseth publicly announced they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.
The two declarants are Sarah Heck, Head of Policy at Anthropic, and Thiyagu Ramasamy, Head of Public Affairs at the company.
Heck is a former National Security Council official who worked in the White House during the Obama administration before moving to Stripe and then Anthropic, where she manages the company’s government relations and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and Pentagon Undersecretary Emil Michael.
In her affidavit, Heck calls out what she describes as a central falsehood in the government’s filings: that Anthropic has sought some sort of mandated role in military operations. She says that is not true. “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee say that the company wanted that type of role,” she wrote.
She also points out that the Pentagon’s concern that Anthropic could disable or alter its technology mid-operation was never raised during the negotiations. Instead, she says, it surfaced for the first time in the government’s court filings, without Anthropic ever having a chance to address it at the bargaining table.
Another detail in Heck’s affidavit that is sure to draw attention: on March 4 — the day after the Pentagon officially finalized its supply chain designation against Anthropic — Under Secretary Michael sent an email saying the two sides were “very close” on the two issues the government cited as evidence that Anthropic is a national security threat: its positions on autonomous weapons and the surveillance of Americans.
The email, which Heck attached as an exhibit to her affidavit, is worth reading alongside the public statements that followed. On March 5, Amodei published a statement saying the company had had “productive discussions” with the Pentagon. The next day, Michael wrote on X that “there are no active negotiations between the Department of War and Anthropic.” A week later, he told CNBC there was “no chance” of renewed negotiations.
Heck’s point seems to be: if Anthropic’s stance on those two issues is what makes it a national security threat, why did a Pentagon official say the two sides were nearly in agreement on those very issues just after the designation was finalized?
Ramasamy brings a different kind of expertise to the case. Prior to joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government clients, including in classified environments. At Anthropic, he is credited with building the team that brought Claude models into national security and defense work, including a $200 million contract with the Pentagon announced last summer.
His affidavit takes aim at the government’s claim that Anthropic could disrupt military operations by disabling its technology or changing its behavior, which Ramasamy says is technically impossible. According to him, once Claude is deployed inside a government-controlled, “air-gapped” system operated by a third-party contractor, Anthropic has no access to it: there is no remote kill switch, no backdoor, and no way to push unauthorized updates. Any kind of “operational veto” is a myth, he suggests, explaining that changing the model would require the Pentagon’s express approval and action to carry out.
Anthropic, he says, can’t even see what government users are typing into the system, let alone extract that data.
Ramasamy also disputes the government’s suggestion that Anthropic’s hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have obtained US government security clearances, the same vetting required to access classified information, and adds that, “to my knowledge,” Anthropic is the only AI company where cleared employees are building AI models designed to operate in classified environments.
Anthropic’s lawsuit alleges that the procurement risk designation — the first ever applied to an American company — amounts to government retaliation for the company’s publicly expressed views on AI safety, in violation of the First Amendment.
The government, in a 40-page filing earlier this week, flatly denied that, arguing that Anthropic’s refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security judgment, not a punishment for the company’s views.



