Senior Pentagon official breaks silence on Anthropic-Department of War dispute, says ‘I had a holy, holy….’



A senior Pentagon official has offered more details on what triggered the dramatic falling-out between the US military and AI company Anthropic. According to a report by news agency Reuters, Emil Michael, Under Secretary of Defense for Research and Engineering, said that when he reviewed the terms of AI contracts signed under the Biden administration, he was alarmed by what he found. Michael's remarks, delivered to an audience at the American Dynamism Summit in Washington on Tuesday (March 3), paint a picture of a military that felt its hands were being tied in the middle of active operations.

"I had a 'holy, holy cow' moment. There were things … you couldn't plan an operation … if it would potentially lead to kinetics (or explosions)," Michael said at the summit, using military terminology for explosions and combat.

He explained that the contracts contained sweeping restrictions on how AI models could be used, restrictions so broad that they threatened to shut down military planning in real time. He described dozens of such limits baked into agreements covering commands responsible for air operations over Iran, China, and South America.

Most critically, Michael said the contracts were structured so that if a military operator violated the AI provider's terms of service, the model could theoretically "just stop in the middle of an operation." He did not name the AI company whose contracts he was reviewing.
However, at the time of his review, Anthropic's Claude was the only AI model available to the Defense Department on its classified systems.

A Flashpoint Over a High-Profile Military Operation

Michael's concerns escalated further after a senior executive at an unnamed AI company began asking questions about whether its software had been used in what he described as "one of the most successful military operations in recent memory."

That operation is widely understood to be the U.S. government raid in January that captured former Venezuelan President Nicolás Maduro. Reports have indicated that Anthropic's Claude was used to help plan that mission.

The suggestion that a private AI company might seek to scrutinize or challenge how its technology was used in a classified military operation appeared to be a turning point for Pentagon leadership.

"What we're not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed," Michael said firmly.

How It All Came to a Head

Michael's remarks offer the clearest explanation yet for why the dispute between Anthropic and the Pentagon escalated so rapidly. The conflict came to a head over Anthropic's refusal to remove restrictions related to autonomous weapons and mass surveillance from its government contracts, limits the company said were essential ethical guardrails.

Defense Secretary Pete Hegseth responded by declaring Anthropic a "supply-chain risk" to national security, a designation that would bar U.S. defense contractors from using the company's tools. President Trump followed with an order banning Anthropic from government business entirely.

OpenAI Steps In

Within hours of the fallout, rival OpenAI announced its own agreement with the Pentagon for deployment of its models on Defense Department networks.
OpenAI CEO Sam Altman suggested in a statement that the Department had agreed to certain restrictions with OpenAI as well, though the precise terms of that deal have not been made public. The contrast was stark: one AI company lost its government contracts in a matter of days, while another moved quickly to fill the gap.

The Bigger Question

Michael's disclosures raise a question that goes beyond Anthropic and OpenAI: who gets to set the rules for AI used in war? Should private AI companies be allowed to place ethical limits on how their technology is used by the military? Or does national security demand that the government, not a tech startup, have the final word?

For now, the Trump administration has made its position clear. But the debate is far from over.


