The US Government Just Picked a Fight With One of the Biggest AI Companies on Earth
Washington is drawing up strict new rules for AI companies after a very public falling out with Anthropic, the makers of Claude.
Things just got very awkward between the US government and Anthropic, the company behind the popular AI assistant Claude.
According to the Financial Times, the government is drafting strict new guidelines that would ban AI contractors from building "partisan or ideological judgments" into their systems. Companies would also have to disclose whether their AI models have been tweaked to comply with any non-US rules or regulations.
In plain English: the government wants to make sure the AI tools it buys aren't secretly biased or following someone else's rulebook.
This comes after what insiders are calling a "breakup" between Washington and Anthropic. While the exact details of what went wrong are still murky, a bipartisan group of experts has stepped in to create what the government hasn't been able to produce on its own: an actual framework for responsible AI development.
TechCrunch called it "a roadmap for AI, if anyone will listen." The big question now is whether these new rules will actually protect the public or just slow down American AI companies while competitors in China and elsewhere race ahead.
Source: TechCrunch