Anthropic Just Told the Pentagon 'No' and Now the Government Is Trying to Destroy Them
The AI company behind Claude refused to let the military use its tech for weapons and surveillance. The Pentagon's response? Blacklist them entirely.
In what might be the biggest showdown between Silicon Valley and the U.S. military in years, Anthropic just sued the Trump administration after being labeled a "supply chain risk" by the Pentagon.
Here's what happened: The Department of Defense had a $200 million contract with Anthropic to use Claude, one of the most powerful AI systems in the world. But when the Pentagon wanted to remove all restrictions and use Claude for things like mass surveillance and autonomous weapons, Anthropic said no.
The Pentagon's response was swift and brutal. Defense Secretary Pete Hegseth declared Anthropic a national security risk, effectively blocking every federal agency and military contractor from doing business with them. That's not just losing one contract. That's being cut off from the entire government.
So Anthropic filed a lawsuit in U.S. District Court, arguing the blacklisting was retaliation for refusing to hand over unrestricted access to its AI. The company says it was willing to work with the military on many things, but drew the line at domestic surveillance and lethal autonomous weapons.
This story raises a question that's going to define the next decade: When AI companies build something powerful enough for war, do they get to decide how it's used? Or does the government just take what it wants?
Reporting: The Washington Post, CNN, The New York Times, and CNBC.