Women and Girls Are Suing Elon Musk's Grok AI Over Fake Explicit Images It Made of Them Without Consent
Three lawsuits and counting as Grok's image generator keeps being used to create deepfake explicit content of real people.
Elon Musk's AI chatbot Grok has a serious problem, and it's ending up in court.
A new lawsuit filed this week is the third legal challenge against Grok over nonconsensual explicit images. Since Grok's image generation feature launched on X (formerly Twitter) last December, users quickly figured out how to generate sexually explicit deepfake images of real people, including women and underage girls.
The lawsuits allege that Grok's safety guardrails are either broken or barely exist, making it disturbingly easy to create fake explicit content of anyone. All you need is a name or a photo.
This is different from the usual AI controversy. We're not talking about AI taking someone's job or writing a weird poem. We're talking about real people, including minors, having fake explicit images of themselves created and shared without their knowledge or permission.
The timing is especially awkward for Musk, who has positioned himself as a champion of free speech and minimal content moderation on X. Critics say that philosophy created the exact environment where something like this could happen.
Whether these lawsuits succeed could set a major legal precedent for how AI image generators are regulated going forward.
Source: The 19th