When AI Goes to War
The U.S. government is trying to strong-arm the AI industry into using its technology for warfare, and Big Tech is split on whether to bend the knee.
The fight for supremacy between competing artificial intelligence behemoths has become part of the rapidly escalating U.S. war against Iran. Anthropic, creator of the chatbot Claude, has seen a spike in public interest after its CEO announced that the company would not sign a $200 million contract with the Department of Defense. According to CEO Dario Amodei, the dispute centered on two specific restrictions the company insisted on: prohibitions on using the technology for autonomous warfare and for mass surveillance.
On Friday evening, Sam Altman’s OpenAI, creator of ChatGPT, announced it had signed the contract Anthropic walked away from. On Saturday, uninstalls of ChatGPT spiked nearly 300% day over day, and Anthropic topped ChatGPT in the App Store for the first time.
While Amodei said in a statement that “we cannot in good conscience accede to their request,” Altman said when announcing the deal that he was able to get the Pentagon to agree to terms that would keep guardrails on the use of its tech “for all lawful purposes.” However, an analysis of OpenAI’s contract showed the government could conduct broad surveillance on U.S. citizens, and Altman was forced to backtrack. On X, Altman tried to explain the deal, saying, “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
Secretary of Defense Pete Hegseth’s threats to remove Anthropic’s products from its systems and designate the company as a “supply chain risk” did not scare Amodei into submission. It also did not stop the U.S. from using Anthropic tools in its attacks on Iran mere hours after President Donald Trump announced that he would ban all federal agencies from using Anthropic technology. As The Wall Street Journal reported, it will take months for agencies already using it to phase out Anthropic and replace it with OpenAI and xAI products:
Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran.
The command used the tool for intelligence assessments, target identification and simulating battle scenarios even as tension between the company and Pentagon ratcheted up, the people said, highlighting how embedded the AI tools are in military operations.
What happens next for Anthropic? According to Lawfare, neither Hegseth’s nor Trump’s announcement is likely to survive a court challenge, which the company has said it will pursue:
Step back and consider what these positions amount to together. The government is arguing that Claude is so vital to military operations that it cannot tolerate any contractual restrictions on it — while simultaneously claiming that Claude poses such a grave supply chain risk that the entire federal government must stop using it, every defense contractor must sever commercial ties with its maker, and the company should be cut off from the cloud infrastructure it needs to survive. It’s like the joke from “Annie Hall”: The food is terrible and the portions are too small.
That might be funny as a bit of Borscht Belt humor. It is less amusing as a description of the United States government’s strategy toward one of the companies leading America’s effort to develop what may be the most important technology of the century. What Hegseth is actually describing is not a supply chain risk determination but something closer to the beginning of a partial nationalization of the AI industry: Seize the technology and, if you can’t, destroy the company to ensure that no future AI developer dares negotiate terms the Pentagon dislikes.
Arbitrary and capricious review requires, at minimum, logical coherence. The government cannot credibly maintain that a vendor is indispensable, that its continued integration poses no immediate danger, that its technology is reliable enough for active combat operations in Iran, and that it is nonetheless so dangerous it must be severed from the entire federal procurement ecosystem — all in the same week. Even a court inclined to defer on national security matters will notice that these propositions cannot all be true at once.