Tech workers urge DOD, Congress to withdraw Anthropic label as supply chain risk

Hundreds of tech workers signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a “supply chain risk.” The letter also asks Congress to step in and “assess whether the use of these extraordinary powers against America’s tech companies is appropriate.”

The letter includes signatories from major technology and venture firms including OpenAI, Slack, IBM, Cursor, Salesforce Ventures, and more. It follows a dispute between the DOD and Anthropic after the AI lab last week refused to give the military unrestricted access to its AI systems.

Two of Anthropic’s red lines in its negotiations with the Pentagon were that its technology not be used for mass surveillance of Americans or to power autonomous weapons that make targeting decisions and fire without a human in the loop. The DOD said it has no plans to do either of those things, but argued it should not be bound by vendor-imposed restrictions.

In response to Anthropic CEO Dario Amodei’s refusal to bend to Hegseth’s threats, President Donald Trump on Friday ordered federal agencies to stop using Anthropic’s technology after a six-month transition period. Hegseth said he would make good on his threats and designate Anthropic a supply chain threat, a label usually reserved for foreign adversaries that would effectively blacklist the AI company from working with any agency or company that does business with the Pentagon.

In Friday’s post, Hegseth wrote: “Beginning immediately, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic.”

But posting to X doesn’t automatically make Anthropic a supply chain risk. The government must complete a risk assessment and notify Congress before the military’s partners are required to cut ties with Anthropic or its products. Anthropic said in a blog post that the designation is “legally absurd” and that it will “challenge any supply chain risk designation in court.”

Many in the industry see the administration’s handling of Anthropic as blatant retaliation.

“When two parties cannot agree on terms, the usual course is to split up and work with a competitor,” reads the open letter. “This situation sets a dangerous precedent. Punishing an American company by refusing to accept contract changes sends a clear message to every technology company in America: accept whatever terms the government wants, or face retaliation.”

Even as they object to the government’s harsh treatment of Anthropic, many in the industry remain concerned about the potential for government misuse of AI for nefarious purposes.

Boaz Barak, an OpenAI researcher, wrote in a social media post on Monday that preventing governments from using AI for mass surveillance is also his “red line” and “should be ours.”

Shortly after Trump publicly attacked Anthropic, OpenAI announced that it had reached an agreement for its models to be used in classified areas of the DOD. OpenAI CEO Sam Altman said last week that the company has similar red lines to Anthropic.

“If there is any good to come from the events of the past week, it would be if we in the AI industry start treating the use of AI for government abuse and surveillance as a catastrophic risk in itself,” Barak wrote. “We’ve done a good job of building screening, mitigations, and procedures for threats like bioweapons and cybersecurity. Let’s use the same procedures here.”
