OpenAI: We’ll Help You Detect Videos Made With Sora GenAI Tool

Ahead of making its Sora video creation tool available to the public, OpenAI is pledging to offer a detection tool to spot ‘misleading content.’

OpenAI said that it plans to offer a tool for detecting videos generated with its forthcoming Sora video creation application, to help curb misuse of the app.

The GenAI pioneer on Thursday showcased realistic-looking videos created using Sora, and said the text-to-video technology is “able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background.” OpenAI said that Sora can generate videos that run up to a minute long.

[Related: Microsoft And OpenAI’s 5 Eye-Popping AI Security Findings: Report]

The realism and ease of use suggest, among other things, that exploitation of the software for malicious purposes such as deepfake generation will likely be an issue.

Anticipating these possibilities, OpenAI said it will be “taking several important safety steps ahead of making Sora available in OpenAI’s products.”

In addition to working with teams that will be “adversarially testing the model” to root out the potential for “misinformation, hateful content, and bias,” OpenAI also plans to assist with spotting videos created with the Sora tool.

OpenAI is “building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora,” the company said. “We plan to include C2PA [tamper-evident] metadata in the future if we deploy the model in an OpenAI product.”
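OpenAI has not published an interface for the detection classifier, so any integration details remain speculative. As a rough illustration only, a check against such a classifier might look something like the sketch below; the endpoint URL and the response field are placeholders, not a real OpenAI API.

```python
# Hypothetical sketch: OpenAI has not released its Sora detection
# classifier, so the endpoint and response format below are invented
# placeholders for illustration.

import requests

DETECTOR_URL = "https://api.example.com/v1/sora-detector"  # placeholder URL

def classify_video(video_path: str, api_key: str) -> float:
    """Send a video to a hypothetical detection classifier and return
    the estimated probability that it was generated by Sora."""
    with open(video_path, "rb") as f:
        resp = requests.post(
            DETECTOR_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"file": f},
            timeout=60,
        )
    resp.raise_for_status()
    # "sora_probability" is a hypothetical field name.
    return resp.json()["sora_probability"]
```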

Additionally, the company said it has developed “robust image classifiers that are used to review the frames of every video generated to help ensure that it adheres to our usage policies, before it’s shown to the user.”
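OpenAI has not described how that frame-level review works internally. Conceptually, though, the process it outlines could resemble the following sketch, which samples every frame of a video and blocks release if any frame fails a policy check; the `violates_usage_policy` function stands in for OpenAI's unpublished classifiers.

```python
# Illustrative sketch of frame-by-frame review as described above.
# OpenAI's actual pipeline is not public; `violates_usage_policy` is a
# hypothetical stand-in for its image classifiers.

import cv2  # pip install opencv-python

def violates_usage_policy(frame) -> bool:
    """Hypothetical policy check; a real system would run a trained
    moderation model over the frame here."""
    return False

def review_video(video_path: str) -> bool:
    """Return True only if every frame of the video passes review."""
    capture = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:  # end of stream
                break
            if violates_usage_policy(frame):
                return False  # block the video before the user sees it
    finally:
        capture.release()
    return True
```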

The disclosure comes a day after OpenAI and Microsoft released a report warning that cybercriminals are in fact using GenAI tools powered by large language models (LLMs), including OpenAI’s hugely popular ChatGPT, to boost their attacks.

Microsoft and OpenAI said they’ve witnessed nation-state threat actors from countries including Russia, North Korea, China and Iran trying to leverage GenAI technologies such as ChatGPT to find targets and improve their cyberattacks.

“Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape,” said Microsoft in the report released Wednesday.

OpenAI did not offer a specific release date for public availability of Sora, but said the tool is now available to red teamers. “We are also granting access to a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals,” the company said.

Ultimately, “we’re sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon,” the company said.