Defense Secretary Pete Hegseth is urging Anthropic to ease rules that limit how the U.S. military can use its artificial intelligence system, Claude. The request spotlights a growing clash between national security goals and tech company safety policies. It also raises fresh questions about how far private firms should go in shaping the rules of war in the age of AI.
At issue is whether Anthropic’s safety guardrails should block certain defense applications. Hegseth’s push signals a desire for wider access to commercially built AI tools. That move could reshape how the Pentagon experiments with large language models across planning, logistics, training, and analysis.
What Is Claude and Why It Matters
Claude is a large language model built by Anthropic. The company has set usage policies to reduce harm. Those policies bar the model from aiding weapons development, targeting, and other high-risk activities.
The Pentagon has moved quickly to test AI for safer operations, faster decision support, and better maintenance planning. Officials argue commercial models can help with translation, document sorting, wargaming, and cybersecurity analysis. Hegseth’s position suggests frustration with hurdles that block rapid field use.
Safety Policies Meet Security Demands
Anthropic and other AI firms argue that strict use policies reduce risks of misuse. Civil society groups support limits that prevent autonomous targeting or escalation during conflict. They warn that chatbots can generate plausible but wrong instructions at speed.
Defense leaders point to safeguards already in place. The Pentagon adopted Responsible AI principles and testing frameworks focused on safety, reliability, and human oversight. Supporters of wider access say these controls, paired with monitoring, can manage risks while unlocking value.
Competing Risks: Speed vs. Control
The debate turns on trade-offs. Moving fast could give U.S. forces an edge, but it may expose troops and civilians to new hazards if systems fail under stress. Slower, policy-bound adoption could reduce harm but leave the military behind rivals that adopt AI with fewer limits.
- Supporters say AI can speed analysis, cut red tape, and improve readiness.
- Critics fear model errors, bias, data leaks, and blurred lines of accountability.
- Industry faces pressure from both regulators and defense customers.
Legal and Ethical Questions
International humanitarian law demands distinction, proportionality, and accountability in war. Military AI must fit those rules. Private firms now act as gatekeepers through product policies. That shift puts corporate boards and trust-and-safety teams at the center of public defense choices.
If Anthropic loosens restrictions, it could face scrutiny from lawmakers and watchdogs. If it holds firm, the Pentagon may seek other vendors or push for tailored models hosted on secure systems. Either path carries legal and reputational risk.
Market and Industry Impact
AI vendors are splitting on military access. Some offer special models tailored for defense use, with strict controls and on-premises deployment. Others refuse lethal applications outright. A change by Anthropic could nudge peers to revisit their rules or double down on bans.
For defense contractors, the opening is clear: package AI tools with testing, auditing, and human-in-the-loop features that meet military standards. For startups, the signal is mixed. Pentagon demand is real, but sales cycles are long and compliance burdens are heavy.
What to Watch Next
Observers will track any talks between the Pentagon and Anthropic on carve-outs or new oversight layers. They will also watch whether agencies propose certifications for defense-ready language models. Clearer evaluation methods for accuracy, failure modes, and red-teaming could shape any policy shift.
Congress may hold hearings on AI use in targeting, command support, and nuclear command-and-control safeguards. Allies could coordinate standards to avoid gaps that adversaries might exploit. Insurance markets and procurement rules may adapt to require safety audits and incident reporting.
Hegseth’s call sets up a test for how the United States balances safety with deterrence in AI adoption. The outcome will influence defense planning, tech company policies, and global norms. Expect more pressure for transparency, stronger oversight, and clear lines on what AI should and should not do in war.