Not even a month has passed since OpenAI launched its latest AI video generation model, Sora 2, and the issues stemming from its hyper-realistic depictions of both copyrighted characters and real-life public figures have emerged faster than people can swat them down.
This morning, Friday, October 17, OpenAI issued a joint statement with Martin Luther King Jr.’s estate saying it has “paused” Sora’s ability to generate Dr. King after users made “disrespectful depictions of Dr. King’s image,” and that it would be strengthening “guardrails” for historical figures. We can only imagine what sort of nasty stuff had to be out there before OpenAI took action.
OpenAI maintained in its statement that it believes there are still “strong free speech interests” around depicting historical figures. We saw one video made by Sora that features the likeness of Dr. King saying he had a dream that one day he could get “that fucking light” to stop blinking above his bed, so yeah, call the First Amendment lawyers. OpenAI is also sticking to the idea that representatives or estate owners “can request that their likeness not be used in Sora cameos.”
That opt-out approach, as several attorneys previously told us, is not how any sort of legal right works. The idea that every public figure who doesn’t want their image depicted — especially the dead ones — needs to tell OpenAI to stop before it will comply is mind-boggling. Nor does opting out mean that OpenAI hasn’t already trained Sora on that public figure’s likeness; it just means you theoretically won’t be able to generate that person anymore.
But the AI issues aren’t going away. Someone else just generated a Sora video in the time it took you to read these last few paragraphs. So estates and rights owners are starting to fight fire with fire.
Earlier this week, talent management firm CMG Worldwide partnered with Loti AI, an artificial intelligence company that uses AI to scan the web for misappropriated content and issue takedowns; it works with both public figures and everyday individuals. The company’s boilerplate says it’s 95 percent effective at finding AI-generated videos, images, or audio and getting them taken down within a day.
Only a handful of clients have agreed to receive the protections so far, though they will be available to any of CMG’s other clients that ask. The short list of deceased public figures whose rights CMG manages and who now get these protections includes: Burt Reynolds, Christopher Reeve, Ginger Rogers, Harry Belafonte, Jimmy Stewart, John Wayne, Judy Garland, Mickey Rooney, Raquel Welch, Andre the Giant, Joe Louis, Macho Man Randy Savage, Rocky Marciano, Sugar Ray Robinson, David Ruffin, Albert Einstein, Gen. George Patton, Mark Twain, Neil Armstrong, and Rosa Parks.
Luke Arrigoni, the CEO of Loti AI, told IndieWire that it’s not lost on him that they’re using AI to fight AI. But Loti’s approach doesn’t require building entire models on all of an individual’s personal assets. The company uses voice and facial recognition tools to identify explicit content, deepfakes, impersonators, false endorsements, and even things that aren’t AI generated, and it automates the discovery and takedown process.
Arrigoni says that, while other companies specializing in deepfake detection are having a harder time now that Sora videos have become so lifelike, the facial recognition tools Loti uses make offending content easier to find, even as new videos arrive rapidly.
“It is hard to play whack-a-mole at the scale in which Sora is creating the problem unless you’ve basically built a system that automates the takedowns, automates the discovery process, like we have,” Arrigoni said. “We kind of thought the world was going this way, and so about a year or so ago, we started building tools that would make this moment easy to manage for public figures and posthumous estates and intellectual property holders too. [It’s] really practical. What we do, it’s very easy for us to find, and it’s very easy for us to remove.”
That might be of interest to Zelda Williams, who recently pleaded with people to stop sending her AI videos of her late father Robin Williams, writing, “it’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want.”
“To watch the legacies of real people be condensed down to ‘this vaguely looks and sounds like them so that’s enough’, just so other people can churn out horrible TikTok slop puppeteering them is maddening,” she wrote in an Instagram statement. “You’re not making art, you’re making disgusting, over-processed hotdogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else’s throat hoping they’ll give you a little thumbs up and like it. Gross.”
Arrigoni says Hollywood has been jumping on AI protection services like his, and he’s been taking calls from major agencies and rights holders. He has approached things cautiously yet optimistically, looking for ways rights holders can still collaborate with tech giants like OpenAI and Google. And while he has a bullish view that the tech industry will establish the rules it needs, OpenAI will need to rethink its opt-out strategy if it wants to play ball.
“Opt-in should have been the only thing. It shouldn’t have even been called a thing. If you want to participate, there’s a safe mechanism to make sure everyone can come on board and it is actually you and not someone also scamming the Sora system,” Arrigoni said. “Opt-out isn’t a good strategy. Opt-out is something that is going to, out of no fault of Sora, people are going to abuse those systems unless people can participate and say, these are the rules that I have for my likeness.”