An AI roadmap, if anyone will listen

While Washington’s standoff with Anthropic has exposed the absence of any coherent rules governing artificial intelligence, a coalition of experts has assembled something the government has so far failed to produce: a framework for how AI development should actually proceed.
The Pro-Human Declaration was finalized before the Pentagon-Anthropic conflict erupted last week, but the timing of the two events was not lost on anyone involved.
“Something amazing has happened in America in the last four months,” said Max Tegmark, an MIT scientist and AI researcher who helped organize the effort, in an interview. “Recent polling [shows] that 95% of all Americans oppose an unregulated race to superintelligence.”
The newly published document, signed by hundreds of experts, former officials, and public figures, opens by framing humanity as standing at a fork in the road. One path, which it calls the “race to replace,” leads to the displacement of people first as workers, then as decision makers, as power concentrates in unaccountable institutions and their machines. The other leads to AI that greatly augments human capabilities.
The declaration rests on five key pillars: keeping humans in charge, avoiding concentration of power, protecting human information, preserving individual freedom, and holding AI companies legally accountable. Among its most muscular provisions are a ban on superintelligence development until there is scientific consensus that it can be built safely and with genuine democratic buy-in; mandatory shutdown capabilities for powerful systems; and a prohibition on self-replicating, self-improving, or shutdown-resistant systems.
The declaration's release coincides with a moment that makes its urgency easy to see. Last Friday, Secretary of Defense Pete Hegseth designated Anthropic, whose AI is already in use in military settings, a “supply chain risk” after the company refused to give the Pentagon unrestricted use of its technology, a label typically reserved for firms with ties to China. Hours later, OpenAI cut a deal with the Department of Defense that legal experts said would be difficult to enforce in any meaningful way. What the episode has revealed is how costly Congressional inaction on AI has been.
As Dean Ball, a senior fellow at the Foundation for American Innovation, told the New York Times afterward, “This is not just a dispute over a contract. This is the first conversation we’ve had as a country about regulating AI systems.”
When we spoke, Tegmark offered an analogy most people can relate to. “You don’t have to worry that some drug company is going to release another drug that causes more harm before people figure out how to make it safer,” he said, “because the FDA won’t let them release anything until it’s safe enough.”
Washington turf wars rarely generate the kind of public pressure that changes the rules. Instead, Tegmark sees child safety as a pressure point that could break the current impasse. Indeed, the declaration calls for mandatory testing before the deployment of AI products, especially chatbots and related applications aimed at young users, covering risks including suicidal ideation, the worsening of mental health conditions, and emotional manipulation.
“If a creepy old man messages an 11-year-old child while pretending to be a little girl and tries to persuade the child to kill himself, that man could go to jail for it,” said Tegmark. “We already have laws. It’s illegal. So why is it different if a machine does it?”
He believes that once a pre-release testing regime is established for children’s products, its scope will almost inevitably expand. “People will come and be like, let’s add a few more requirements. Maybe we should also check that this won’t help terrorists make bioweapons. Maybe we should check to make sure that superintelligence doesn’t have the power to overthrow the US government.”
It is no small thing that former Trump adviser Steve Bannon and Susan Rice, President Obama’s national security advisor, signed the same document, along with former chairman of the Joint Chiefs Mike Mullen and progressive religious leaders.
“What they agree on, fundamentally, is that they are all human,” said Tegmark. “If it comes down to whether we want a future for people or a future for machines, of course they will be on the same side.”



