OpenAI’s new text-to-video app, Sora, was supposed to be a social AI playground, allowing users to create imaginative AI videos of themselves, friends, and celebrities while building on the ideas of others.
The app’s social structure, which allows users to adjust the availability of their image in others’ videos, appeared to address the most pressing consent issues around AI-generated videos when it launched last week.
But with Sora sitting atop the iOS App Store with more than 1 million downloads, experts worry about its potential to flood the internet with historical misinformation and deepfakes of deceased historical figures, who can neither consent to nor opt out of being depicted by Sora’s AI models.
In less than a minute, the app can generate short videos of deceased celebrities in situations they’ve never been in: Aretha Franklin making soy candles, Carrie Fisher trying to balance on a slackline, Nat King Cole ice skating in Havana, and Marilyn Monroe teaching Vietnamese to schoolchildren, for example.
This is a nightmare for people like Adam Streisand, a lawyer who has represented several celebrity estates, including Monroe’s at one point.
“The challenge with AI is not the law,” Streisand said in an email, noting that California courts have long protected celebrities “from AI-like reproductions of their images or voices.”
“The question is whether a judicial process that relies on human beings, not AI, will one day be able to play an almost fifth-dimensional game of whack-a-mole.”
The videos on Sora range from the absurd to the charming to the confusing. Beyond celebrities, many Sora videos feature convincing deepfakes of manipulated historical moments.
For example, NBC News was able to generate realistic videos of President Dwight Eisenhower confessing to accepting millions of dollars in bribes, British Prime Minister Margaret Thatcher arguing that the “so-called D-Day landings” were exaggerated, and President John F. Kennedy announcing that the moon landing “was not a triumph of science but an invention.”
The ability to generate such deepfakes of deceased people without their consent has already sparked complaints from relatives.
In an Instagram Story posted Monday about Sora’s videos featuring Robin Williams, who died in 2014, Williams’ daughter Zelda wrote: “If you’ve got any decency, just stop doing this to him and to me, to everyone even, full stop. It’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want.”
Bernice King, daughter of Martin Luther King Jr., wrote on X: “I concur concerning my father. Please stop.” King’s famous “I Have a Dream” speech has been repeatedly manipulated and remixed on the app.
George Carlin’s daughter said in a Bluesky post that her family was “doing everything we can to combat” deepfakes of the late comedian.
Sora-generated videos depicting “horrific violence” involving renowned physicist Stephen Hawking have also gained popularity this week, with many examples circulating on X.
An OpenAI spokesperson told NBC News: “While there are strong free speech interests in depicting historical figures, we believe public figures and their families should ultimately have control over how their image is used. For public figures who have recently passed away, authorized representatives or owners of their estate may request that their image not be used in Sora cameos.”
In a blog post last Friday, OpenAI CEO Sam Altman wrote that the company would soon “give rights holders more granular control over character generation,” referring to broader types of content. “We’re hearing from a lot of rights holders who are very excited about this new type of ‘interactive fan fiction’ and believe that this new type of interaction will bring them a lot of value, but they want to be able to specify how their characters can be used (including not used at all).”
OpenAI’s rapidly evolving policies for Sora have led some commentators to argue that the company’s approach of moving fast and breaking things served a purpose, showing users and intellectual property holders the power and scope of the application.
Liam Mayes, a professor in Rice University’s media studies program, believes increasingly realistic deepfakes could have two key social effects. First, he said, “we will find unsuspecting people who will be victims of all kinds of scams, large and powerful companies who will exert coercive pressures, and nefarious actors who will undermine democratic processes.”
At the same time, not being able to distinguish deepfakes from real videos could reduce trust in authentic media. “We could see trust erode in all types of media establishments and institutions,” Mayes said.
As founder and president of CMG Worldwide, Mark Roesler has managed the intellectual property and licensing rights of more than 3,000 deceased personalities from entertainment, sports, history and music, including James Dean, Neil Armstrong and Albert Einstein. Roesler said Sora is just the latest technology to raise concerns about protecting those figures’ legacies.
“There are and will be abuses as there always have been with celebrities and their valuable intellectual property,” he wrote in an email. “When we started representing deceased personalities in 1981, the Internet didn’t even exist.”
“New technologies and innovation help keep alive the legacy of many historic and iconic personalities who shaped and influenced our history,” Roesler added. He said CMG will continue to represent its clients’ interests within AI applications such as Sora.
To help users and digital platforms distinguish a real video from one generated by Sora, OpenAI implemented several identification tools.
Each video includes invisible cues, a visible watermark, and metadata – behind-the-scenes technical information that describes the content as AI-generated.
However, several of these layers can be easily removed, said Sid Srinivasan, a computer scientist at Harvard University. “Visible watermarks and metadata will deter casual misuse through some friction, but they are fairly easy to remove and won’t stop the most determined actors.”
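Container-level metadata is the easiest of those layers to examine, and also the easiest to lose. As a rough, hypothetical sketch (not OpenAI’s actual tooling), the Python snippet below shells out to ffprobe, the inspection utility bundled with ffmpeg, and prints whatever metadata tags a video file carries. The file name “sora_clip.mp4” and the idea that provenance data would surface as simple format tags are assumptions for illustration; OpenAI has said Sora videos carry C2PA provenance metadata, which lives in its own structures and is read by dedicated verification tools.

```python
# Hypothetical sketch: list a video's container-level metadata tags.
# Assumes ffprobe (bundled with ffmpeg) is on PATH; "sora_clip.mp4"
# is a placeholder file name, not a real sample.
import json
import subprocess

def format_tags(path: str) -> dict:
    """Return the metadata tags ffprobe reports for the file's container."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("format", {}).get("tags", {})

print(format_tags("sora_clip.mp4"))

# The fragility Srinivasan describes: one re-mux, with no re-encoding,
# silently drops container metadata:
#   ffmpeg -i sora_clip.mp4 -map_metadata -1 -c copy stripped.mp4
```

This is why researchers tend to treat metadata as a labeling convenience rather than a security measure: it travels with the file only as long as nobody, and no intermediary platform, re-encodes or re-muxes it.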
Srinivasan said an invisible watermark and associated detection tool would likely be the most reliable approach. “Ultimately, video hosting platforms will likely need access to detection tools like this, and there is no clear timeline for broader access to such internal tools.”
Wenting Zheng, assistant professor of computer science at Carnegie Mellon University, echoed that sentiment, saying, “To automatically detect AI-generated materials in social media posts, it would be beneficial for OpenAI to share its tool for tracking images, audio, and videos with platforms to help people identify AI-generated content.”
When asked for details about whether OpenAI had shared these detection tools with other platforms like Meta or X, an OpenAI spokesperson referred NBC News to a general technical report. The report does not provide such detailed information.
To tell genuine media from AI output, some companies are turning to AI itself, according to Ben Colman, CEO and co-founder of Reality Defender, a deepfake detection startup.
“Human beings – even those trained on the problem, like some of our competitors – are flawed and misguided, overlooking the invisible or the inaudible,” Colman said.
At Reality Defender, “AI is used to detect AI,” Colman told NBC News. “AI-generated videos can become more realistic to you and me, but AI can see and hear things we can’t.”
Similarly, McAfee’s Scam Detector software “listens to video audio for AI fingerprints and analyzes it to determine whether the content is authentic or AI-generated,” according to Steve Grobman, McAfee’s chief technology officer.
However, Grobman added, “new tools are making fake videos and audio seem increasingly real, and 1 in 5 people told us that they or someone they know has already been a victim of a deepfake scam.”
Deepfake quality also varies across languages: current AI tools are far more capable in widely spoken languages such as English, Spanish and Mandarin than in less common ones.
“We are regularly evolving the technology as new AI tools appear and expanding beyond English to cover more languages and contexts,” Grobman said.
Concerns about deepfakes have made headlines before. Less than a year ago, many observers predicted that the 2024 elections would be overrun by deepfakes; that prediction largely did not come to pass.
However, until this year, AI-generated media such as images, audio and video were largely distinguishable from real content. Many commentators have found models released in 2025 to be strikingly realistic, threatening the public’s ability to discern real, human-created information from AI-generated content.
Google’s Veo 3 video generation model, released in May, was called “terrifyingly accurate” and “dangerously realistic,” inspiring one critic to ask: “Are we doomed?”