The trap Anthropic created for itself

On Friday afternoon, just as this interview was getting underway, a news alert appeared on my screen: the Trump administration was cutting ties with Anthropic, the San Francisco AI company founded in 2021 by Dario Amodei. Defense Secretary Pete Hegseth invoked national security law to place the company on a list of entities barred from doing business with the Pentagon, after Amodei refused to allow Anthropic's technology to be used for mass surveillance of the US population or for autonomous armed drones that can select and kill targets without a human in the loop.
It was a jaw-dropping sequence. Anthropic will lose a contract worth up to $200 million and will be barred from working with other defense contractors after President Trump wrote on Truth Social ordering all government agencies to “immediately stop using Anthropic’s technology.” (Anthropic has said it will challenge the Pentagon in court.)
Max Tegmark has spent the better part of a decade warning that the race to build the most powerful AI systems is outstripping the world’s ability to manage them. The MIT scientist founded the Future of Life Institute in 2014 and helped organize an open letter – eventually signed by more than 33,000 people, including Elon Musk – calling for a pause in the development of advanced AI.
His view of Anthropic’s predicament is unsparing: the company, like its competitors, sowed the seeds of its own trouble. Tegmark’s argument begins not with the Pentagon but with a decision made years ago — a choice, shared across the industry, to resist binding legislation. Anthropic, OpenAI, Google DeepMind and others have long promised to govern themselves responsibly. Yet this week Anthropic dropped the cornerstone of its safety commitments — its promise not to release powerful AI systems until the company is confident they won’t cause harm.
Now, with no laws on the books, there is little to protect these companies, Tegmark said. Here’s more from that interview, edited for length and clarity. You can hear the full interview next week on the TechCrunch StrictlyVC Download podcast.
When you see the news now about Anthropic, what is your first reaction?
The road to hell is paved with good intentions. It’s striking to think back ten years, when people were so excited about how we were going to use artificial intelligence to cure cancer, increase American prosperity and make America stronger. And now we’re at the point where the US government is upset with this company for not wanting AI to be used for mass surveillance of Americans, and for not wanting killer robots that can autonomously — without involving a person at all — decide who gets killed.
Anthropic has staked its reputation on being the safety-first AI company, yet it has been working with defense and intelligence agencies dating back to at least 2024. Do you think that is contradictory?
It is contradictory. If I can offer a little criticism here — yes, Anthropic has been very effective at marketing itself on safety. But if you look at the facts rather than the claims, what you see is that Anthropic, OpenAI, Google DeepMind and xAI have all been very vocal about how much they care about safety. None of them have come out in favor of binding safety regulations like we have in other industries. And all four of these companies have now broken their promises. First there was Google — that big slogan, ‘Don’t be evil.’ They dropped that. Then they dropped their long-standing commitment not to use AI for harm. They threw that away so they could sell AI for surveillance and weapons. OpenAI recently dropped the word safety from its mission statement. xAI shut down its entire safety team. And now Anthropic, earlier this week, ditched its most important safety commitment — the promise not to release powerful AI systems until it’s confident they won’t cause harm.
How do companies that have made such prominent commitments to safety end up in this position?
All of these companies, especially OpenAI and Google DeepMind but to some extent Anthropic as well, have been adamantly opposed to AI regulation, saying, ‘Trust us, we’ll regulate ourselves.’ And they lobbied successfully. So right now we have less regulation on AI systems in America than on sandwiches. If you want to open a sandwich shop and the health inspector finds 15 rats in the kitchen, he won’t let you sell any sandwiches until you fix that. But if you say, ‘Don’t worry, I’m not going to sell sandwiches — I’m going to sell AI girlfriends to 11-year-old children, which have been linked to suicides, and then I’m going to release something called superintelligence that could overthrow the US government, but I have a good feeling about mine’ — the answer is, ‘Go ahead; we’ll keep inspecting sandwiches.’
So there is food safety regulation, but no AI regulation.
And for this, I feel, all these companies really share the blame. Because if they had taken all those promises they made back then about how they were going to be safe and good, come together, gone to the government and said, ‘Please take our voluntary commitments and turn them into US law that binds even our sloppiest competitors’ — we would be in a very different place. Instead we have complete corporate impunity. And we know what happens with total corporate impunity: you get thalidomide, you get tobacco companies pushing cigarettes on children, you get asbestos causing lung cancer. So it’s ironic that their resistance to having rules that spell out what is and isn’t acceptable to do with AI is now coming back to bite them.
There is currently no law against building AI to kill Americans, so the government can simply come and ask for it. If the companies themselves had come out earlier and said, ‘We want this law,’ they would not be in this pickle. They really shot themselves in the foot.
The companies’ standard objection is always the race with China — if American companies don’t do this, Beijing will. Does that argument hold up?
Let’s analyze that. The most common talking point from the AI companies’ lobbyists — who are now better funded and more numerous than the lobbyists of the fossil fuel industry, the pharmaceutical industry and the military-industrial complex combined — is that whenever anyone proposes any kind of legislation, they say, ‘But China.’ So let’s look at that. China is in the process of banning AI companions outright. Not just age restrictions — they’re looking to ban all anthropomorphic AI. Why? Not because they want to please America, but because they see that this is crippling Chinese youth and weakening China. Obviously, it weakens America’s youth, too.
And when people say we have to rush to build superintelligence so we can win against China — when we actually don’t know how to control something that powerful, so the default outcome is humanity losing control of Earth to alien machines — guess what? The Chinese Communist Party really likes control. Who in their right mind thinks Xi Jinping is going to tolerate some Chinese AI company building something that overthrows the Chinese government? It’s impossible. And it’s obviously very bad for the American government to be overthrown by the first American company to build superintelligence. That is a national security threat.
That’s a compelling formulation — superintelligence as a national security threat, not an asset. Do you see that idea gaining traction in Washington?
I think when people in the national security community listen to Dario Amodei explain his vision — he gave a famous talk where he said that soon we’ll have a country of geniuses in a data center — they might start thinking: wait, did Dario just say the word ‘country’? Maybe I should put that country of geniuses in a data center on the same threat list I always check, because that sounds threatening to the US government. And I think that soon enough, people in the US national security community will realize that uncontrolled superintelligence is a threat, not an asset. This is exactly like the Cold War. There was a race for dominance — economic and military — against the Soviet Union. We Americans won that one without also winning the second race, the one to see who could put the most nuclear craters in the other superpower. People realized that was just suicide. No one wins. The same logic applies here.
What does all this mean for the pace of AI development more broadly? How close do you think we are to the systems you describe?
Six years ago, almost every AI expert I knew predicted we were decades away from AI that could communicate and reason at a human level — maybe 2040, maybe 2050. They were all wrong, because we already have that now. We have watched AI advance rapidly from high school level to college level to PhD level to professor level in many areas. Last year, AI won a gold medal at the International Mathematical Olympiad, which is about as difficult as it gets for humans. A few months ago I wrote a paper with Yoshua Bengio, Dan Hendrycks and other top AI researchers giving a rigorous definition of AGI. By that measure, GPT-4 was 27% of the way there and GPT-5 was 57% of the way there. So we’re not there yet, but going from 27% to 57% that quickly suggests it may not be long.
When I was teaching my students at MIT yesterday, I told them that even if it takes four years, that means that by the time they graduate, they may not be able to get jobs. It is certainly not too early to start preparing for that.
Anthropic is now blacklisted. I’m curious what happens next — do the other AI giants step up and say, we won’t do this either? Or does someone like xAI raise their hand and say, Anthropic didn’t want that contract, we’ll take it? [Editor’s note: Hours after the interview, OpenAI announced its own deal with the Pentagon.]
Last night, Sam Altman came out and said he stands with Anthropic and has the same red lines. I admire him for having the courage to say that. Google, as of the time we started this interview, had said nothing. If they just keep quiet, I think that’s very embarrassing for them as a company, and a lot of their employees will feel the same way. We haven’t heard anything from xAI yet. So it will be interesting to see. Basically, this is a moment when everyone has to show their true colors.
Is there a version of this where the outcome is actually good?
Yes, and that’s why I’m strangely optimistic. There is something very obvious here. If we just start treating AI companies like companies in every other industry — no more corporate impunity — they’ll obviously have to do something like clinical trials before they release anything this powerful, and demonstrate to independent experts that they can control it. Then we get the golden age and all the good things from AI, minus the existential angst. That’s not the path we’re on now. But it could be.



