Washington Examiner

Judge blocks Pentagon’s punitive measures against Anthropic

U.S. District Judge Rita Lin, a Biden appointee, granted Anthropic a preliminary injunction blocking the Pentagon’s punitive measures that labeled the company a “supply chain risk,” calling the move Orwellian and baseless. The 43-page ruling rejected the claim that the designation was needed for national security, finding that the punishment stemmed from Anthropic’s public criticisms of the Pentagon rather than any actual security threat. Lin suggested the measures amounted to improper retaliation against speech protected by the First Amendment and gave the government a week to appeal, while indicating Anthropic is likely to succeed on the merits. The Pentagon had argued that Anthropic’s refusal to grant unfettered access to its AI poses a national security risk and could endanger servicemembers, but the judge found the justification unconvincing. Despite the dispute, the military has continued to rely on Anthropic’s Claude for operations against Iran (Operation Epic Fury), with reports that Claude helped target over 1,000 sites in the first 24 hours. Anthropic welcomed the ruling as essential to protecting its customers and its ability to work with the government.

A Biden-appointed judge granted Anthropic a preliminary injunction against the Pentagon’s punitive measures, which stemmed from a feud over the use of the company’s artificial intelligence model, Claude.

U.S. District Judge Rita Lin didn’t mince words in a 43-page ruling, condemning the Pentagon’s designation as “Orwellian” and finding its actions against Anthropic entirely without basis. She gave the government one week to appeal, though she indicated the AI firm was likely to succeed on the merits of its lawsuit.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” she wrote on Thursday.

Lin rejected the Pentagon’s argument that designating Anthropic a supply-chain risk was necessary on national security grounds, finding instead that the punitive measures were driven primarily by the company’s hostile public posture toward the Pentagon.

“These broad measures do not appear to be directed at the government’s stated national security interests,” she wrote. “The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press.’”

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Lin added.

Her ruling is hardly surprising, as she expressed heavy skepticism toward the Pentagon at a Tuesday hearing.

Anthropic celebrated the ruling, saying it was essential in protecting the company and its services.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” an Anthropic spokesperson told the Washington Examiner. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

The Pentagon has argued that Anthropic’s unwillingness to give it unfettered access constitutes a national security threat, contending that the software could prove faulty in combat and put servicemembers’ lives in danger.

“The worry is that Anthropic, instead of merely raising concerns and pushing back, will say we have a problem with what DoW is doing and will manipulate the software … so it doesn’t operate in the way DoW expects and wants it to,” Trump administration attorney Eric Hamilton said at the Tuesday hearing.


The U.S. military continues to utilize Claude, primarily through Palantir’s Maven Smart System. Despite the feud, the United States heavily relied on Anthropic for targeting Iran in Operation Epic Fury.

Claude allowed the U.S. to strike over 1,000 targets in the first 24 hours of operations, the Washington Post reported, identifying and prioritizing key command and control and military installations.
