Technology & AI

Elon Musk’s only AI expert witness in the OpenAI lawsuit fears an AGI arms race

When do we take AI doomers seriously?

That question is at the heart of Elon Musk’s effort to shut down OpenAI’s for-profit business. His lawyers argued that the organization was founded as a charity focused on AI safety and lost its way in the pursuit of profit. To prove it, they cited old emails and statements from the organization’s founders about the need for a public-minded counterweight to Google DeepMind.

Today, they called the only expert witness to speak directly to the AI technology itself: Stuart Russell, a University of California, Berkeley computer science professor who has studied AI for decades. His job was to provide background on AI and to establish that the technology is dangerous enough to warrant concern.

Russell co-signed an open letter in March 2023 calling for a six-month moratorium on frontier AI research. In a sign of the tensions running through this case, Musk signed a similar letter even as he was launching xAI, his own for-profit AI lab.

Russell told the jury and Judge Yvonne Gonzalez Rogers that the development of AI carries a variety of risks, from cybersecurity threats to problems of misunderstanding to the winner-take-all nature of the race toward Artificial General Intelligence (AGI). Ultimately, he said, there is a tension between the pursuit of AGI and safety.

Russell’s deeper concerns about the existential threats of unfettered AI went unexpressed in open court, after objections from OpenAI’s lawyers led the judge to limit his testimony. But Russell has long criticized the arms race among frontier labs around the world competing to reach AGI first, and has called on governments to regulate the field more tightly.

OpenAI’s attorneys used their questioning to establish that Russell had not directly examined the organization’s corporate structure or its specific safety policies.


But this reporter (and the judge and jury) must weigh how much stock to place in the relationship between corporate ambition and AI safety concerns. Nearly all of OpenAI’s founders have warned loudly about the risks of AI while touting its benefits, racing to build AI as quickly as possible, and laying plans for AI-focused businesses along the way.

Indeed, the central tension here is the realization that grew within OpenAI soon after its founding: the organization simply needed far more computing capital if it was to succeed, and that money could only come from profit-seeking investors. The founders’ fear of AGI in the hands of a single organization drove them toward capital that eventually split the group, creating the arms race we know today and bringing us to this case.

A similar dynamic is already playing out at the national level: Senator Bernie Sanders’ push for legislation to halt data center construction cites AI fears expressed by Musk, Sam Altman, Geoffrey Hinton, and others. Hodan Omaar, who works at the trade organization Center for Data Innovation, argued that Sanders is amplifying their fears without their hopes, telling TechCrunch that “it is not clear why the public should discount everything said by tech billionaires, except when their voices can be mobilized to fill the gaps in a fraught debate.”

Now, both sides of the lawsuit are asking the court to do just that: take the parts of Altman’s and Musk’s statements that serve them seriously, while discounting the parts that don’t.

Correction: This article was updated to correct the name of Stuart Russell, a University of California, Berkeley computer science professor.
