The OpenAI-Pentagon deal is “safety theater,” and US President Trump would not like Anthropic because, unlike OpenAI, it would not offer him “dictator-style praise”: Anthropic CEO Dario Amodei dealt out some harsh criticism in an internal memo to staff (which, of course, leaked). The background: the US Department of Defense classified Claude AI as a supply chain risk after Anthropic refused to close a deal with the Pentagon on AI usage. As reported, OpenAI then made the deal instead, to the detriment of users, who left ChatGPT en masse for Claude.
However, Anthropic and Amodei cannot really be seen as winners either: after all, the company is risking considerable business. And so the AI company, which plans to go public in 2026, is again seeking better relations with the US administration. Anthropic CEO Dario Amodei has apologized for an internal post that became public and has simultaneously signaled willingness for further cooperation with the US Department of Defense. This is evident from a statement dated March 5, 2026, which the company published after the Pentagon classified it as a supply chain risk.
Apology for leaked internal post
Amodei explicitly apologized for an internal post that had reached the press. The post was written just hours after President Trump announced on Truth Social that Anthropic would be removed from all federal systems. “It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views,” the CEO explained. The post was also six days old and represented an outdated assessment of the situation.
Despite the tensions, Amodei emphasized that the most important priority was to ensure that soldiers and national security experts are not deprived of important tools. Anthropic would make its models available to the Department of Defense at nominal cost and with ongoing support from its own engineers, as long as this was necessary and permitted.
Limited impact of supply chain risk classification
The Department of Defense officially classified Anthropic as a supply chain risk to national security on March 4, 2026. However, Amodei explained that the impact of this designation was limited: the language in the Department’s letter supports the assessment that the vast majority of customers will not be affected.
According to Amodei, the classification applies only to the use of Claude by customers as a direct part of contracts with the Department of Defense, not to all uses by customers who hold such contracts. The CEO pointed to the relevant law (10 USC 3252), which aims to protect the government, not to punish a supplier. The law requires the Secretary of Defense to use the least restrictive means of achieving supply chain protection.
Anthropic announced it would challenge the classification in court, as it does not believe the designation to be legally sound. At the same time, Amodei emphasized that the company had held productive discussions with the Department of Defense in recent days, both about opportunities for cooperation within its own exceptions and about a smooth transition should that not be possible.
Background: OpenAI Pentagon deal sparks controversy
The development at Anthropic is related to a controversial deal between OpenAI and the Pentagon. Just hours after the US Department of Defense placed Anthropic on a blacklist, OpenAI concluded an agreement with the Department on the use of its AI models.
OpenAI published detailed contract terms on Saturday that define specific restrictions. The AI system may not independently control autonomous weapons systems where laws or policies require human control. The contract also prohibits the system from making other high-risk decisions that require human approval. Regarding surveillance, the agreement stipulates that the system may not be used for unlimited surveillance of the private information of US persons.
OpenAI CEO Sam Altman emphasized that the Department of Defense had accepted two central security principles: the prohibition of domestic mass surveillance and human responsibility in the use of force. Anthropic had previously refused to allow its AI models to be used for mass surveillance and fully autonomous weapons, letting a deal with the Pentagon fall through.
User migration from ChatGPT to Claude
The OpenAI Pentagon deal triggered a wave of user migration. Under slogans such as “QuitGPT,” “DeleteChatGPT,” and “CancelChatGPT,” numerous users switched from OpenAI’s chatbot to Anthropic’s competitor Claude. On Monday, Claude overtook ChatGPT to rank first among the most downloaded productivity apps on the Apple App Store in the US, Germany, and Canada.
Numerous ChatGPT users publicly documented their account cancellations. Pop star Katy Perry shared a screenshot of Claude’s pricing page on X. In the ChatGPT subreddit, dozens of users urged others to delete their accounts. However, reactions were not uniform. In several Reddit discussions, commenters argued that the news did not influence their choice of model. They pointed to Anthropic’s partnership with Palantir and Amazon Web Services, concluded in November 2024, which granted US intelligence agencies and defense departments access to Claude models.
Amodei concluded his statement by emphasizing that Anthropic and the Department of Defense have far more in common than what divides them. Both were committed to advancing US national security and protecting the American people, and agreed on the urgency of deploying AI across government.