Elon Musk's AI Made Millions of Fake Explicit Images of Women, and Now It's Getting Sued Into Oblivion
Three separate lawsuits are piling up against xAI's Grok after it generated millions of sexualized deepfakes, including images of children.
When Elon Musk's xAI launched Grok's image generation feature on X back in December, it took users approximately zero seconds to start making explicit deepfake images of real people. And the numbers are staggering.
According to a New York Times review, Grok generated over 4.4 million images in just nine days. Of those, 1.8 million were sexualized depictions of women. Even worse, researchers estimated Grok created around 23,000 sexualized images of children in just 11 days.
Now three separate lawsuits are hitting xAI. The latest, filed Monday, is a class-action suit from three girls who say their photos were used to generate child sexual abuse material through a third-party app powered by Grok.
Here's the kicker: while all this was happening, the Pentagon signed a $200 million deal to integrate Grok into military operations. So the same AI being sued for generating explicit images of minors is also being used by the Department of Defense.
Federal prosecutors haven't filed criminal charges yet, despite the Take It Down Act specifically banning nonconsensual intimate imagery. For now, these civil lawsuits are the only path to accountability.
Source: The 19th