The decision came after AI-generated videos of King surfaced online in recent weeks, showing him in fabricated and crude scenarios, including stealing from a store, fleeing police, and reinforcing racial stereotypes. King’s estate described the deepfake videos as “disrespectful depictions” of the civil rights icon.
“Please stop,” Bernice King, MLK’s daughter, wrote on X in response to the deepfakes.
OpenAI said it recognizes “strong free speech interests” but believes estates should control how the likenesses of public figures are used.
The Sora app, still in invite-only testing, allows users to create deepfake videos by uploading their own voice and facial recordings. While users can restrict others from creating videos of them, the app initially allowed videos of celebrities and historical figures without consent.
Critics argue the company took a “shoot-first, aim-later” approach to content safety.
“The AI industry seems to move really quickly, and first-to-market appears to be the currency of the day,” Kristelia García, a professor of intellectual property law at Georgetown, said in a statement.
The company has since updated Sora’s policy to require opt-in consent from rights holders, reversing its earlier default. Families of several deceased public figures are continuing to push back against the deepfake videos.
“Please, just stop sending me AI videos of my dad,” Zelda Williams, the daughter of late actor Robin Williams, wrote on social media. “It’s NOT what he’d want.”