OpenAI Inks Deal With Pentagon Amid Anthropic Clash

OpenAI has reached an agreement with the U.S. Department of War (DoW) to deploy its artificial intelligence (AI) models on the department’s classified network, according to CEO Sam Altman [1]. This deal comes shortly after President Donald Trump ordered federal agencies to cease using technology from rival AI startup Anthropic [2].

Key Terms of the OpenAI-Pentagon Agreement

Altman emphasized that OpenAI’s agreement with the DoW is built on key safety principles, including prohibitions on domestic mass surveillance and a requirement of human responsibility for the use of force, particularly in autonomous weapon systems [1]. He stated that the DoW agrees with these principles and has incorporated them into the agreement, along with technical safeguards to ensure proper model behavior [1]. Altman has called on the DoW to extend these terms to all AI companies [1].

Trump Administration’s Actions Against Anthropic

The White House has directed all federal agencies to halt the use of Anthropic’s technology, with a six-month phase-out period [1]. President Trump expressed his disapproval of Anthropic in a post on Truth Social [1]. The conflict arose from Anthropic’s objections to the DoW’s potential use of its Claude model for autonomous weapons and domestic surveillance. The Pentagon has subsequently designated Anthropic as a supply-chain risk, a designation the company contests as illegal [2].

OpenAI’s Recent Funding

Concurrent with the Pentagon agreement, OpenAI announced a $110 billion funding round led by Amazon, Nvidia, and SoftBank [1]. Nvidia and SoftBank each contributed $30 billion, while Amazon committed an initial $15 billion, with the potential for an additional $35 billion upon meeting certain conditions [1]. These conditions may include OpenAI going public or achieving artificial general intelligence (AGI) [1].

Addressing Concerns and Safeguards

Altman acknowledged that the deal with the DoW was “definitely rushed” and that “the optics don’t look good” [2]. OpenAI has outlined its safeguards, stating its models cannot be used for mass domestic surveillance, autonomous weapon systems, or “high-stakes automated decisions” [2]. The company differentiates itself from other AI firms by maintaining “full discretion over its safety stack,” deploying via cloud, utilizing cleared personnel, and implementing strong contractual protections [2].
