Anthropic Just Sued the Pentagon and Things Are Getting Wild
The AI company behind Claude told the government 'no' on mass surveillance and autonomous weapons. The government's response? Blacklisting them. Now Anthropic is fighting back in court.
If you thought AI drama was limited to chatbot fails and deepfakes, think again. Anthropic, the company behind the popular Claude AI, just filed a lawsuit against multiple federal agencies including the Department of Defense.
Here's the backstory: The Pentagon wanted to use Anthropic's AI technology for domestic mass surveillance and autonomous weapons. Anthropic said no. The company had built restrictions into its technology specifically to prevent that kind of use.
The government's response was swift and brutal. Defense Secretary Pete Hegseth designated Anthropic as a "supply chain risk," essentially blacklisting the company. Their government contracts were canceled overnight.
A video covering the story racked up over 80,000 views on YouTube in a single day, and for good reason. It raises a question most of us haven't thought about: what happens when an AI company actually tries to do the right thing, and the government punishes it for it?
In its lawsuit, Anthropic calls the government's actions "unprecedented and unlawful." Meanwhile, a new poll shows 95% of Americans oppose an unregulated race to superintelligence, suggesting the public might be on Anthropic's side.
As reported by Fortune and TechCrunch.