Federal District Judge Rita Lin on Tuesday questioned the Department of War's decision to label AI firm Anthropic a national security risk, a move that bars all government contractors from using the company's technology. The legal battle follows a contract dispute in which Anthropic refused to allow its AI tools to be used for lethal warfare or mass surveillance. The ruling will decide whether the government can effectively shut down a domestic business by labeling it a threat to the country.
Judge Rita Lin questions Pentagon over "attempted corporate murder" claims
Lawyers for the Department of War and Anthropic argued in a California federal court on Tuesday over the government's decision to ban the AI firm. Anthropic is asking for an immediate order to stop the government from enforcing a "supply-chain risk" label against it. This label prevents any company doing business with the military from using Anthropic’s Claude AI software.
Judge Rita Lin expressed doubt about the broad power the Pentagon used to punish the company. She noted that an outside legal brief described the government's actions as "attempted corporate murder" because it forces partners to cut ties with the firm. This means the government is not just choosing a different vendor but is actively trying to damage Anthropic’s ability to exist in the private market.
Deputy Assistant Attorney General Eric Hamilton argued that the government has the right to choose its own partners. He claimed that Anthropic’s refusal to follow certain contract terms raised fears that the company could use a "kill switch" to stop its software during military missions. This argument suggests the government views a company's safety guardrails as a potential weapon against the state.
Contract dispute over lethal AI use led to national security ban
The conflict began in February during contract talks between Anthropic and the Department of War, which was recently renamed from the Department of Defense. The military wanted an "all lawful use" clause that would allow it to use the Claude AI tool for any legal purpose. Anthropic co-founder and CEO Dario Amodei refused this term, specifically wanting to block the AI from being used for lethal autonomous weapons.
Anthropic argued that it has not tested its AI for combat and does not believe the software is safe for those purposes. The Department of War rejected these safety limits, stating that military leaders need full control over how they use technology in the field. This disagreement moved from a private negotiation to a public ban within weeks.
On February 27, President Donald Trump ordered all federal agencies to stop using Anthropic’s tools immediately. On the same day, Secretary of War Pete Hegseth labeled the company a supply-chain risk on social media. This label is usually used for foreign enemies or hostile nations, making this the first time a major U.S. tech firm has been targeted this way by its own government.
Major tech firms and federal unions warn of industry-wide damage
The ban affects more than just Anthropic because it forces other large companies to change how they work. Microsoft, Nvidia, Amazon, and Google all have business ties to Anthropic that are now at risk. If the ban stays in place, these companies may have to stop working with Anthropic to keep their own government contracts.
The American Federation of Government Employees, which represents 800,000 workers, filed a brief supporting Anthropic. The union claimed the administration is using national security as an excuse to punish a company for its political or ethical views. This suggests that any company that disagrees with the government could face similar financial penalties.
Microsoft warned the court that the ban could chill future investment in defense technology. If companies fear the government will destroy their business over a contract disagreement, they may stop building tools for the military. The government's attempt to secure the supply chain could therefore end up cutting off its own access to new technology.
Immediate changes for government contractors and AI developers
The current order from Secretary Pete Hegseth creates several immediate problems for the tech industry:
- Government contractors must stop all commercial activity with Anthropic or lose their federal funding.
- Federal agencies like the National Endowment for the Arts are blocked from using Anthropic tools for basic tasks like website design.
- Investors are being warned that American AI companies may no longer be safe investments if they face government retaliation.
These changes mean that Anthropic is losing revenue from both the government and the private sector at the same time. Because the military is a massive buyer of technology, many private firms will choose to drop Anthropic rather than risk their relationship with the Pentagon. This creates a "blackball" effect that removes Anthropic from the wider market.
Uncertainty remains over AI safety and government control
The government has not yet proven that Anthropic’s software actually contains a "kill switch" or any technical threat. Instead, the Department of War is arguing that the risk comes from the company's refusal to be fully obedient to military demands. This leaves it unclear what specific technical flaws, if any, led to the security risk label.
Anthropic maintains that its safety guardrails are meant to prevent accidents, not to interfere with the government. However, the government argues that any limit on military use is a threat to national security. This creates a deadlock where neither side agrees on who should control the safety settings of powerful AI models.
There is also no clear process for how a company can remove a "supply-chain risk" label once it is applied. Anthropic claims the government skipped the usual legal steps required by the Administrative Procedure Act. Without a clear path to appeal, the company remains in a legal limbo that threatens its long-term survival.
Judge Lin is expected to issue a ruling this week
Judge Rita Lin confirmed on Tuesday that she will release her decision in the next few days. She must decide if the government’s reaction was a fair response to a security concern or an illegal act of retaliation. If she grants the injunction, the government will have to pause the ban while the full lawsuit moves forward.
The Department of War has not indicated if it will change its position if the judge rules against it. Secretary Pete Hegseth has maintained that the military will not work with companies that place limits on how their tools are used. A ruling is expected before the end of the week, which will provide the first legal test of the Trump administration's power to label domestic firms as security risks.
Key Numbers and Facts
The confirmed figures behind this story at a glance.
| Key Fact | Detail |
| --- | --- |
| Main person or organization | Anthropic and the Department of War |
| Main action or decision | Designation of Anthropic as a "supply-chain risk" |
| Date or period | Hearing held March 24, 2026; ban started Feb 27, 2026 |
| Location | California federal district court |
| Amount, figure, or scale | Affects 800,000 federal workers and all military contractors |
| Previous status | Anthropic was a standard government vendor |
| Current status | Banned from all federal and military-related business |
| Primary effect | Forced divestment by partners like Nvidia and Google |
| Next confirmed step | Judge Rita Lin to issue a ruling this week |
The court's decision will set a precedent for corporate free speech
This case is about more than a single contract; it is about whether the government can use its buying power to force companies to abandon their ethical standards. If the court allows the "supply-chain risk" label to stand, it gives the executive branch a tool to destroy any company that refuses a government demand. That would permanently change the relationship between Silicon Valley and Washington, D.C.
The outcome will show if the First Amendment protects a company's right to set safety rules for its own products. If the judge finds that the government acted out of spite rather than security, it will limit the Pentagon's power to blackball American businesses. The final ruling will determine if the government can use security labels to silence companies that disagree with military policy.
Frequently Asked Questions
Why did the government ban Anthropic?
The government banned Anthropic after the company refused to allow its AI to be used for lethal warfare and mass surveillance. Secretary of War Pete Hegseth labeled the firm a "supply-chain risk" on February 27, 2026. This label prevents any government contractor from doing business with the company.
What is the Anthropic lawsuit about?
Anthropic is suing the government for allegedly violating its First Amendment rights and due process. The company claims the government is retaliating against it for expressing views on AI safety. Anthropic wants the court to stop the government from enforcing the ban and the security risk label.
When will the judge rule on the Anthropic case?
Judge Rita Lin stated on Tuesday that she will issue a ruling within the next few days. This decision will determine if the ban on Anthropic stays in place during the trial. The ruling is expected to be released before the end of the current week.