Musk Accuses OpenAI of Safety Lapses as Legal Battle Intensifies
Elon Musk has leveled accusations against OpenAI, claiming its chatbot ChatGPT has been linked to user deaths, while asserting that his own AI chatbot, Grok, has a clean safety record. The claims surfaced in a recently released deposition from Musk’s lawsuit against the AI company.
Musk’s Allegations and the 2023 AI Safety Letter
During the deposition, Musk directly attacked OpenAI’s safety track record, stating, “Nobody has committed suicide due to the fact that of Grok, but apparently they have because of ChatGPT.” This statement arose during questioning about a public letter Musk signed in March 2023, urging AI labs to pause the development of AI systems more powerful than GPT-4 for at least six months. TechCrunch reports that the letter, signed by over 1,100 individuals including AI experts, expressed concerns about the lack of planning and control in the rapid development of increasingly powerful AI.
Lawsuit Centers on OpenAI’s For-Profit Shift
Musk’s lawsuit against OpenAI centers on the company’s transition from a non-profit AI research lab to a for-profit entity, which he alleges violated its founding agreements. He argues that OpenAI’s commercial relationships could compromise AI safety by prioritizing speed, scale, and revenue over safety concerns. The New York Post notes that a judge allowed the case to proceed to a jury trial in January 2026.
xAI’s Own Safety Concerns
Despite Musk’s defense of Grok, xAI has faced its own safety issues. Last month, Musk’s social media platform X was flooded with non-consensual nude images generated by Grok, some allegedly depicting minors. The incident triggered investigations by the California Attorney General and regulators in the European Union, and prompted some governments to impose bans, according to The Times of India.
Musk’s Founding Concerns and OpenAI’s Origins
Musk reiterated that OpenAI was initially founded as a counterweight to Google, which he believed wasn’t adequately prioritizing AI safety. He also clarified that his contribution to OpenAI’s early funding was approximately $38 million, less than the $100 million he had previously claimed. He acknowledged the risks that artificial general intelligence (AGI) could pose if not properly managed.
Trial Expected in March
The jury trial is expected to take place next month, following the filing of the deposition transcript in late February. The case will likely focus on whether OpenAI’s shift to a for-profit model compromised its original commitment to AI safety and public benefit.