People Are Rage-Downloading Claude to Stick It to the Pentagon, and It Actually Worked
Anthropic's AI chatbot Claude just hit #1 on the App Store, beating ChatGPT for the first time ever. The reason? The U.S. government tried to punish the company for refusing to let the military use its AI for autonomous weapons.
In what might be the most unexpected protest movement of 2026, thousands of Americans downloaded an AI chatbot app this weekend just to make a political statement. And it actually worked.
Anthropic's Claude, one of the main competitors to OpenAI's ChatGPT, rocketed to the #1 spot on Apple's App Store over the weekend, overtaking ChatGPT for the first time ever. But this wasn't because of some fancy new feature or viral marketing campaign. It happened because the Pentagon tried to bully Anthropic into giving the military unrestricted access to its AI, and when the company said no, the Trump administration banned all federal agencies from using its technology.
Here's the backstory: Defense Secretary Pete Hegseth demanded that Anthropic remove its safety restrictions so the military could use Claude for things like fully autonomous weapons (think: AI-controlled systems that can kill without a human making the final call) and mass surveillance programs. Anthropic's CEO essentially said "absolutely not" and refused to budge, even when threatened with the Defense Production Act, a law normally used during wartime emergencies.
The public response was immediate and dramatic. People started downloading Claude in droves, many of them posting on social media about canceling their ChatGPT subscriptions after learning that OpenAI had eagerly stepped in to take the Pentagon deal Anthropic refused.
Anthropic has since announced plans to challenge the federal ban in court. A CNBC video covering the story racked up nearly 80,000 views in a single day: https://www.youtube.com/watch?v=Ecmlh607KeA
As reported by TechCrunch, Axios, and Business Insider.