On the stand, Elon Musk can’t escape his tweets

Elon Musk took the stand in a California federal court on Wednesday to argue that Sam Altman and his co-founders were “robbing a non-profit.” He also admitted, under oath, that Tesla is not currently pursuing artificial general intelligence, a direct contradiction of a post he made a few weeks ago.
It was that kind of day for Musk.
The lawsuit, which challenges OpenAI’s restructuring, alleges that Sam Altman and his co-founders tricked Musk into backing the nonprofit, then launched a for-profit arm of the frontier lab and let it take control of the organization.
After Musk testified for hours on Wednesday, it seems the case may come down to how much of a distinction Judge Yvonne Gonzalez Rogers draws between OpenAI investors whose potential returns are capped and those whose returns are not.
According to Musk, when he co-founded the lab with Sam Altman, Ilya Sutskever, Greg Brockman, and others, he hoped they would build AI for the benefit of humanity. He later grew suspicious of their motives, and ultimately concluded that they were “robbing a non-profit organization.”
OpenAI’s attorney William Savitt sought to complicate that story during cross-examination, aiming to show that Musk had supported various efforts to put OpenAI on a for-profit footing so it could raise the capital needed to compete with firms like Google, including merging the AI lab into Tesla.
Musk acknowledged that he discussed turning the company into a for-profit as early as 2016, and that in 2017 he explored creating a for-profit arm of OpenAI in which he would hold a larger equity stake and control of the company. When those plans fell apart, he stopped donating regularly to OpenAI, though he continued to pay for its office space until 2020.
Musk emphasized that there is a big difference between investors whose returns are capped and those whose returns are unlimited. Microsoft’s large early investment in OpenAI came with capped returns, but those caps have been rolled back over the years. Musk says those changes are ultimately what led him to bring this lawsuit.
Savitt tried to establish that Musk had been kept informed by Altman and Shivon Zilis, his longtime adviser and the mother of four of his children, about subsequent fundraising efforts, and that he didn’t object. Zilis was also an OpenAI board member when some of those efforts took place.
The cross-examination eventually reached Tesla’s AI ambitions. Musk was asked about Tesla’s efforts to develop competing AI technologies and found himself, not for the first time, on the wrong side of one of his own posts on X. After Musk said that Tesla’s AI work was focused only on self-driving and not artificial general intelligence, or AGI (the term for hypothetical AI systems that can match or surpass human capabilities), Savitt surfaced a post in which Musk had said that “Tesla will be one of the companies doing AGI.” “We’re not pursuing AGI right now,” Musk told the court. (Tesla shareholders may want to take note.)
Musk was also asked about a post in which he said he had invested $100 million in OpenAI, rather than the roughly $38 million that actually changed hands. He claimed that his reputation and network accounted for the difference.
Savitt also surfaced emails in which Musk supported efforts by Tesla and his brain-computer interface company, Neuralink, to poach OpenAI employees while he sat on OpenAI’s board. One exchange focused on Andrej Karpathy, who left OpenAI to lead Tesla’s self-driving effort. Another focused on Sutskever, whom Zilis suggested Musk hire at Tesla.
The most consequential thread of the day, however, concerned safety. Part of Musk’s case rests on the idea that OpenAI’s conversion into a more conventional for-profit company is dangerous to society because it reduces the company’s focus on safety. Savitt, in turn, got Musk to acknowledge that all AI companies, including his own, carry that risk.
Judge Gonzalez Rogers cut that line of questioning short, but from her remarks to the lawyers after testimony concluded, it was clear it will resume, within limits. Referring to questions Musk’s lawyers had raised about ChatGPT’s role in the Tumbler Ridge shooting (a 2024 incident in Canada in which a man killed his family after long conversations with a chatbot), she made clear that she didn’t want to hear about scandals created by AI models, but that xAI’s and OpenAI’s safety measures were fair game.
Musk returns Thursday for another round of cross-examination. Others expected to testify include his family office manager, Jared Birchall; AI safety expert Stuart Russell; and OpenAI president Greg Brockman.



