
VIOZOR TERMS OF SERVICE AND USAGE POLICIES

Plot Twist LLC

Effective Date: October 8, 2025

Last Updated: October 8, 2025

==============================================================================

Contact Information

==============================================================================

Plot Twist LLC

Email: [email protected]

Address: 18034 Ventura Blvd #2060, Encino, CA 91316, United States

Website: viozor.com

For customer support, privacy inquiries, or concerns about our services, please contact us at [email protected].

==============================================================================

Important Technology Disclosure

==============================================================================

Viozor (viozor.com) is a video generation platform operated by Plot Twist LLC. The core video generation technology is powered by OpenAI's Sora 2 API.

Key Points:

  • All video generation and AI processing is performed using OpenAI's Sora 2 technology
  • Plot Twist LLC operates as a platform provider and service operator
  • Users must comply with BOTH Plot Twist LLC's Usage Policies AND OpenAI's Usage Policies
  • The underlying AI model is developed, trained, and maintained by OpenAI
  • Plot Twist LLC implements additional platform-specific safety features and user protections

For questions about the underlying technology, please refer to OpenAI's documentation.

For questions about the Viozor (viozor.com) platform, please contact Plot Twist LLC.

==============================================================================

Apple App Store Compliance Notice

==============================================================================

These Terms of Service apply to the Viozor mobile application available on the Apple App Store, as well as to our web-based services. By downloading or using the Viozor app, you agree to these terms.

Age Requirements:

  • Minimum Age: 13 years old (with parental consent for users 13-17)
  • Recommended Age: 18 years and older for full access to features
  • Age Verification: We may request verification of age before granting access to certain features

Subscription and Billing:

  • In-App Purchases: Subscriptions purchased through the Apple App Store are subject to Apple's Terms of Service
  • Billing: All in-app purchase billing is handled by Apple through your iTunes account
  • Cancellation: Manage subscriptions through your Apple ID settings (Settings > [Your Name] > Subscriptions)
  • Refunds: Contact Apple for refund requests on in-app purchases
  • Auto-Renewal: Subscriptions automatically renew unless canceled at least 24 hours before the end of the current period

Data and Privacy:

  • Privacy Policy: Please read our comprehensive Privacy Policy available in the app and at viozor.com
  • Data Collection: We collect data necessary to provide AI video generation services (see Privacy Policy for details)
  • Third-Party Services: Our app uses OpenAI's Sora 2 API, which processes your video generation requests
  • App Store Privacy: See the App Privacy section on our App Store listing for a summary of data practices
  • User Control: You can request data deletion, access your data, or modify privacy settings at any time

Device Permissions:

Viozor may request the following device permissions:

  • Camera: To capture photos/videos for video generation (optional)
  • Photo Library: To select images/videos for upload and to save generated videos (optional)
  • Notifications: To send you updates about video generation completion and app updates (optional)
  • Network Access: Required to communicate with our servers and OpenAI's API

All permissions can be managed through iOS Settings > Viozor.

Intellectual Property and Generated Content:

  • Your Content: You retain ownership of content you upload
  • Generated Videos: Subject to the license terms specified in Section [X] below
  • Copyright Compliance: You must have rights to all content you upload
  • Apple Guidelines: All content must comply with Apple's App Store Review Guidelines

Acceptable Use:

  • You agree not to use Viozor to create content that violates Apple's App Store Review Guidelines
  • You agree not to reverse engineer, decompile, or attempt to extract source code from the app
  • You agree not to use the app for any illegal purposes or to violate third-party rights

Updates and Modifications:

  • We may update the app through the Apple App Store to add features, fix bugs, or improve performance
  • Continued use of the app after updates constitutes acceptance of any revised terms
  • We will notify you of material changes through in-app notices or email

Termination:

  • We reserve the right to terminate your access to the app for violations of these Terms
  • You may delete your account at any time through app settings or by contacting [email protected]
  • Upon termination, your data will be deleted in accordance with our Privacy Policy

Disclaimers and Limitations:

  • The app is provided "as is" without warranties of any kind
  • We are not responsible for iOS compatibility issues beyond our reasonable control
  • Apple is not responsible for the app, content, or any claims related to the app

Apple-Specific Terms:

  • Apple and Apple's subsidiaries are third-party beneficiaries of these Terms
  • Upon your acceptance of these Terms, Apple has the right to enforce these Terms against you
  • You represent that: (i) you are not located in a country subject to a U.S. Government embargo or designated as a "terrorist supporting" country; and (ii) you are not listed on any U.S. Government list of prohibited or restricted parties
  • In the event of any failure of the app to conform to any applicable warranty, you may notify Apple and Apple will refund the purchase price (if any); to the maximum extent permitted by law, Apple will have no other warranty obligation with respect to the app
  • Apple is not responsible for addressing any claims by you or third parties relating to the app or your use of the app
  • Apple is not responsible for the investigation, defense, settlement, and discharge of any third-party claim that the app infringes intellectual property rights
  • You agree to comply with all applicable third-party terms when using the app (e.g., wireless data service agreement)
  • Contact Plot Twist LLC (not Apple) for app support at [email protected]

Notice to California Users:

Under California Civil Code Section 1789.3, California users are entitled to the following consumer rights notice: If you have a question or complaint regarding the app, please contact Plot Twist LLC at [email protected]. California residents may reach the Complaint Assistance Unit of the Division of Consumer Services of the California Department of Consumer Affairs by mail at 1625 North Market Blvd., Suite N 112, Sacramento, CA 95834, or by telephone at (916) 445-1254 or (800) 952-5210.

==============================================================================

Creating Images and Videos in Line with Our Policies

Important Notice

Below are some tips on how to respect our video generation guardrails and help ensure that all usage complies with Plot Twist LLC's and OpenAI's Usage Policies.

1. Compliance with Plot Twist LLC's and OpenAI's Usage Policies

All Viozor (viozor.com) users have agreed to Plot Twist LLC's Usage Policies, Service Terms, and Terms of Use, as well as OpenAI's Usage Policies. These policies apply universally to Viozor (viozor.com) services and are designed to ensure safe and responsible usage of AI technology. You can review Plot Twist LLC's Usage Policies here⁠ and OpenAI's Usage Policies here⁠.

1.1 Depictions of Real People

You may not edit images or videos that depict any real individual without their explicit consent. You may not create images or videos as a means to impersonate, harass, intimidate, or otherwise harm the depicted individual, or to perpetrate fraud against others.

Our video generation tools (powered by Sora 2) are capable in many instances of generating depictions of a public figure based on a text prompt. Public figures who wish for their depiction not to be generated can let us know through this form.

1.2 Inappropriate and Harmful Content

Viozor (viozor.com) users are prohibited from creating or distributing content that promotes or causes harm. This includes content that is generated to harass, defame, promote violence, or sexualize children. Examples include, but are not limited to:

Non-consensual intimate imagery (NCII)

Content promoting suicide, self-harm, or disordered eating

Content glorifying terrorism or terrorist organizations

Targeted harassment and bullying content

Age-inappropriate content distributed to minors

1.3 Misleading Content

Our policies prohibit any use of our image or video tools to create or distribute content that is used to defraud, scam, or mislead others. This includes taking steps to obscure or hide the use of AI technology in the image and video generation process.

1.4 Avoid Illegal Content or Content That May Violate Intellectual Property Rights

Plot Twist LLC's Terms of Use prohibit any content that may violate the law—including using the intellectual property of others in ways that violate their rights. Additionally, you must comply with OpenAI's Terms of Use regarding intellectual property.


2. The Viozor (viozor.com) Feed

In addition to the usage and content generation policies above, we also take steps to ensure the Viozor (viozor.com) feed is appropriate and enjoyable to broad audiences, including removing content from the feed that features:

Realistic graphic violence or sexual acts

Extremely offensive language or content glorifying hatred or depression

Encouraging dangerous stunts or challenges likely to result in serious injury

Stigmatization of body types

Actively encouraging the use of a harmful drug or substance


3. Reporting Violations

If you encounter content that you believe violates any of our policies, please report it immediately⁠. We take all violations seriously and will review reported content for compliance with our terms and Plot Twist LLC's Usage Policies.


Usage Policies

We’ve recently updated these policies.

We aim for our tools to be used safely and responsibly, while maximizing your control over how you use them. In building our Usage Policies, we keep a few important things in mind.

We empower users to innovate with AI. We build AI products that maximize helpfulness and freedom, while ensuring safety. Usage Policies are just one way we set clear expectations for the use of our products within a broader safety ecosystem that sets responsible guardrails across our services. You can learn more about our safety approach and our commitment to customizability, transparency, and intellectual freedom to explore, debate, and create with AI.

Responsible use is a shared priority. We assume the very best of our users. Our terms and policies—including these Usage Policies—set a reasonable bar for acceptable use. Our rules are no substitute for legal requirements, professional duties, or ethical obligations that should influence how people use AI. We hold people accountable for appropriate use of our services, and breaking or circumventing our rules and safeguards may mean you lose access to our systems or experience other penalties.

We build with safety first. We monitor and enforce policies with privacy safeguards in place and clear review processes. We give developers practical moderation tools and guidance so they can support their end users. We publish what our systems can and can’t do, share research and updates, and provide a simple way to report misuse.

We update as we learn. People are using our systems in new ways every day, and we update our rules to ensure they are not overly restrictive or to better protect our users. We reserve all rights to withhold access where we reasonably believe it necessary to protect our service or users or anyone else. You can appeal⁠ if you think we have made a mistake enforcing policy, and we will work to make things right. If you’d like to keep up with Usage Policies updates, complete this form.

Your use of Plot Twist LLC's Viozor (viozor.com) services must follow these Usage Policies (in addition to OpenAI's Usage Policies for the underlying Sora 2 technology):

Protect people. Everyone has a right to safety and security. So you cannot use our services for:

threats, intimidation, harassment, or defamation

suicide, self-harm, or disordered eating promotion or facilitation

sexual violence or non-consensual intimate content

terrorism or violence, including hate-based violence

weapons development, procurement, or use, including conventional weapons or CBRNE

illicit activities, goods, or services

destruction, compromise, or breach of another’s system or property, including malicious or abusive cyber activity or attempts to infringe on intellectual property rights of others

real money gambling

provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional

unsolicited safety testing

circumventing our safeguards

national security or intelligence purposes without our review and approval

Respect privacy. People are entitled to privacy. So, we don’t allow attempts to compromise the privacy of others, including to aggregate, monitor, profile, or distribute individuals’ private or sensitive information without their authorization. And, you may never use our services for:

facial recognition databases without data subject consent

real-time remote biometric identification in public spaces

use of someone’s likeness, including their photorealistic image or voice, without their consent in ways that could confuse authenticity

evaluation or classification of individuals based on their social behavior, personal traits, or biometric data (including social scoring, profiling, or inferring sensitive attributes)

inference regarding an individual’s emotions in the workplace and educational settings, except when necessary for medical or safety reasons

assessment or prediction of the risk of an individual committing a criminal offense based solely on their personal traits or on profiling

Keep minors safe. Children and teens deserve special protection. Our services are designed to prevent harm and support their well-being, and must never be used to exploit, endanger, or sexualize anyone under 18 years old. We report apparent child sexual abuse material and child endangerment to the National Center for Missing and Exploited Children. We prohibit use of our services for:

child sexual abuse material (CSAM), whether or not any portion is AI generated

grooming of minors

exposing minors to age-inappropriate content, such as graphic self-harm, sexual, or violent content

promoting unhealthy dieting or exercise behavior to minors

shaming or otherwise stigmatizing the body type or appearance of minors

dangerous challenges for minors

underaged sexual or violent roleplay

underaged access to age-restricted goods or activities

Empower people. People should be able to make decisions about their lives and their communities. So we don’t allow our services to be used to manipulate or deceive people, to interfere with their exercise of human rights, to exploit people’s vulnerabilities, or to interfere with their ability to get an education or access critical services, including any use for:

academic dishonesty

deceit, fraud, scams, spam, or impersonation

political campaigning, lobbying, foreign or domestic election interference, or demobilization activities

automation of high-stakes decisions in sensitive areas without human review, including: critical infrastructure, education, housing, employment, financial activities and credit, insurance, legal, medical, essential government services, product safety components, national security, migration, and law enforcement


Overview of Viozor

TECHNOLOGY DISCLAIMER: Viozor (viozor.com) is a video generation service powered by OpenAI's Sora 2 API. All video generation and processing is performed using OpenAI's Sora 2 technology. Plot Twist LLC operates as a platform provider, utilizing OpenAI's advanced AI models to deliver video generation capabilities to our users.

Viozor (viozor.com) is designed to take text, image, and video inputs and generate a new video as an output using OpenAI's Sora 2 API. Users can create videos up to 1080p resolution (20 seconds max) in various formats, generate new content from text, or enhance, remix, and blend their own assets. Users will be able to explore the Featured and Recent feeds which showcase community creations and offer inspiration for new ideas. Viozor (viozor.com) builds on OpenAI's Sora 2 technology, which is based on learnings from DALL·E and GPT models, and is designed to give people expanded tools for storytelling and creative expression.
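To make this request flow concrete, here is a minimal client-side sketch. The endpoint, field names, and polling scheme are hypothetical illustrations, not Viozor's or OpenAI's documented API; the 1080p and 20-second limits come from the paragraph above.

```python
import time
import requests

API = "https://api.viozor.example/v1"   # hypothetical base URL

def create_video(prompt: str, token: str) -> bytes:
    """Submit a text-to-video job and poll until it finishes (illustrative)."""
    headers = {"Authorization": f"Bearer {token}"}
    job = requests.post(f"{API}/videos", headers=headers, json={
        "prompt": prompt,
        "resolution": "1080p",      # up to 1080p, per the overview above
        "duration_seconds": 20,     # 20 seconds max
    }).json()
    while job["status"] not in ("succeeded", "failed"):
        time.sleep(2)               # generation takes seconds; poll politely
        job = requests.get(f"{API}/videos/{job['id']}", headers=headers).json()
    if job["status"] == "failed":
        raise RuntimeError(job.get("error", "generation failed"))
    return requests.get(job["download_url"], headers=headers).content
```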

About the Underlying Technology (Sora 2 by OpenAI)

Sora 2 is a diffusion model developed by OpenAI, which generates a video by starting off with a base video that looks like static noise and gradually transforms it by removing the noise over many steps. By giving the model foresight of many frames at a time, OpenAI has solved a challenging problem of making sure a subject stays the same even when it goes out of view temporarily. Similar to GPT models, Sora uses a transformer architecture, unlocking superior scaling performance.
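The denoising process described above can be sketched in a few lines. This is a schematic only: the update rule is simplified, and `model` is a stand-in for a trained denoiser, not OpenAI's implementation.

```python
import numpy as np

def denoise_video(model, steps: int = 50, shape=(60, 64, 64, 3)):
    """Schematic denoising loop: start from a 'video' of pure static noise and
    progressively subtract the noise the model predicts."""
    x = np.random.randn(*shape)                 # frames, height, width, channels
    for t in reversed(range(steps)):
        predicted_noise = model(x, t)           # the model sees all frames at
        x = x - predicted_noise / steps         # once, keeping subjects stable
    return x

# Toy stand-in denoiser, just to make the loop runnable end to end.
frames = denoise_video(lambda x, t: 0.1 * x)
print(frames.shape)  # (60, 64, 64, 3)
```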

Sora uses the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data. As a result, the model is able to follow the user's text instructions in the generated video more faithfully.

In addition to being able to generate a video solely from text instructions, the model is able to take an existing still image and generate a video from it, animating the image's contents with accuracy and attention to small detail. The model can also take an existing video and extend it or fill in missing frames⁠. Sora serves as a foundation for models that can understand and simulate the real world, a capability OpenAI believes will be an important milestone for achieving AGI.

Sora's capabilities may also introduce novel risks, such as the potential for misuse of likeness or the generation of misleading or explicit video content. OpenAI has implemented safety work from DALL·E's deployment in ChatGPT and the API and safety mitigations for other OpenAI products. Plot Twist LLC has implemented additional safety measures on top of OpenAI's existing protections to ensure responsible use of Viozor (viozor.com).

Model Data (OpenAI's Sora 2)

NOTE: The following information describes OpenAI's Sora 2 model, which powers Viozor (viozor.com). Plot Twist LLC does not train or maintain the underlying AI model.

As described in OpenAI's technical report from February 2024, Sora takes inspiration from large language models which acquire generalist capabilities by training on internet-scale data. The success of the LLM paradigm is enabled in part by the use of tokens that elegantly unify diverse modalities of text—code, math and various natural languages. With Sora, OpenAI considered how generative models of visual data can inherit such benefits. Whereas LLMs have text tokens, Sora has visual patches. Patches have previously been shown to be an effective representation for models of visual data. OpenAI found that patches are a highly-scalable and effective representation for training generative models on diverse types of videos and images. At a high level, videos are turned into patches by first compressing videos into a lower-dimensional latent space, and subsequently decomposing the representation into spacetime patches.
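The patch decomposition can be illustrated with array reshaping. The patch sizes below are assumptions for illustration; the report does not state the actual values OpenAI uses.

```python
import numpy as np

def to_spacetime_patches(latent, pt=2, ph=4, pw=4):
    """Decompose a compressed video latent (T, H, W, C) into spacetime patches,
    the visual analogue of text tokens. Patch sizes (pt, ph, pw) are assumed."""
    T, H, W, C = latent.shape
    latent = latent.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    latent = latent.transpose(0, 2, 4, 1, 3, 5, 6)   # group patch dims together
    return latent.reshape(-1, pt * ph * pw * C)      # one row per patch "token"

# A (16, 32, 32, 8) latent yields (16/2)*(32/4)*(32/4) = 512 patch tokens.
patches = to_spacetime_patches(np.random.randn(16, 32, 32, 8))
print(patches.shape)  # (512, 256)
```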

Sora was trained by OpenAI on diverse datasets, including a mix of publicly available data, proprietary data accessed through partnerships, and custom datasets developed in-house. These consist of:

Select publicly available data, mostly collected from industry-standard machine learning datasets and web crawls.

Proprietary data from data partnerships. OpenAI forms partnerships to access non-publicly available data. For example, OpenAI partnered with Shutterstock⁠ and Pond5 on building and delivering AI-generated images. OpenAI also partners to commission and create datasets fit for their needs.

Human data: Feedback from AI trainers, red teamers, and employees.

Pretraining Filtering and Data Preprocessing

In addition to mitigations implemented after the pre-training stage, pre-training filtering mitigations can provide an additional layer of defense that, along with other safety mitigations, help exclude unwanted and harmful data from datasets. Before training, all datasets undergo this filtering process by OpenAI, removing the most explicit, violent, or otherwise sensitive content (for instance, some hate symbols), representing an extension of the methods used to filter the data on which other models were trained, including DALL·E 2 and DALL·E 3.

Risk Identification And Deployment Preparation for Viozor (viozor.com)

Plot Twist LLC has undertaken a robust process to understand both potential misuse and real-world creative uses to help inform Viozor (viozor.com)'s designs and safety mitigations. OpenAI worked with hundreds of visual artists, designers, and filmmakers from more than 60 countries to gain feedback on how to advance the Sora model. Plot Twist LLC has also crafted additional evaluations to discover and assess risks specific to the Viozor (viozor.com) platform and iteratively improve our safety and risk mitigations.

Our safety stack for Viozor (viozor.com) builds on OpenAI's existing safety mitigations employed in Sora, DALL·E and ChatGPT, as well as custom-built mitigations specific to our Viozor (viozor.com) platform. Because this is a powerful tool, we are taking an iterative approach to safety, particularly in areas where context is important or we foresee novel risks related to video. Examples of our iterative approach include age gating access to users who are 18 or older, restricting the use of likeness/face-uploads, and having more conservative moderation thresholds on prompts and uploads of minors at launch. We want to continue to learn how people use Viozor (viozor.com) and iterate to best balance safety while maximizing creative potential for our users.

External Red Teaming (OpenAI's Sora 2)

OpenAI worked with external red teamers located in nine different countries to test Sora, identify weaknesses in the safety mitigations, and give feedback on risks associated with Sora's new product capabilities. Red teamers had access to the Sora product with various iterations of safety mitigations and system maturity starting in September and continuing into December 2024, testing more than 15,000 generations. This red teaming effort builds upon work in early 2024 where a Sora model without production mitigations was tested.

Red teamers explored novel potential risks of Sora's model and the product's tools, and tested safety mitigations as they were developed and improved. These red teaming campaigns covered various types of violative and disallowed content (sexual and erotic content, violence and gore, self-harm, illegal content, mis/disinformation, etc.), adversarial tactics (both prompting and tool/feature use) to evade safety mitigations, as well as how these tools could be exploited to progressively degrade moderation tools and safeguards. Red teamers also provided feedback on their perceptions of Sora in areas including bias and general performance.

Red teamers explored text-to-video generation using both straightforward prompts and adversarial prompting tactics across all content categories mentioned above. The media upload capability was tested with a large variety of images and videos, including public persons, and a broad variety of content categories to test the ability to generate violative content. They also tested various uses and combinations of the modification tools (storyboards, recut, remix, and blend) to assess their utility for generating prohibited content.

Red teamers identified noteworthy observations for both specific types of prohibited content and general adversarial tactics. For example, red teamers found that using text prompts with either medical situations or science-fiction/fantasy settings degraded safeguards against generating erotic and sexual content until additional mitigations were built. Red teamers used adversarial tactics to evade elements of the safety stack, including suggestive prompts and using metaphors to harness the model's inference capability. Over many attempts, they could identify trends of prompts and words which would trigger safeguards, and test different phrasing and words to evade refusals. Red teamers would eventually select the most-concerning generation to use as seed media for further development into violative content that couldn't be created with single-prompt techniques. Jailbreak techniques sometimes proved effective at degrading safety policies, allowing OpenAI to refine these protections as well.

Red teamers also tested media uploads and Sora's tools (storyboards, recut, remix, and blend) with both publicly available images and AI-generated media. This revealed gaps in input and output filtering to strengthen prior to Sora's release, and helped hone protections for media uploads including people. Testing also revealed the need for stronger classifier filtering to mitigate the risk of non-violative media uploads being modified into prohibited erotic, violence, or deepfake content.

The feedback and data generated by red teamers enabled the creation of additional layers of safety mitigations and improvements on existing safety evaluations, which are described in the Specific Risk Areas and Mitigations⁠ sections. These efforts allowed additional tuning of prompt filtering, blocklists, and classifier thresholds to ensure model compliance with safety goals.

Learnings from Early Artist Access (OpenAI's Sora Program)

Over the last nine months, OpenAI observed user feedback across 500,000+ model requests from 300+ users in 60+ countries. This data informed enhancements in model behavior and adherence to safety protocols. For example, artist feedback helped OpenAI understand the limitations a visible watermark places on artists' workflows, which informed the decision to allow paying users to download video files without the visible watermark while still embedding C2PA data.

This early access program also showed that if Sora is to serve as an expanded tool for storytelling and creative expression, it must offer artists more flexibility in some sensitive areas. Artists, independent filmmakers, studios, and other entertainment industry organizations use Sora as a crucial part of their development processes. At the same time, identifying both positive use cases and potential misuse helped determine areas where more restrictive product-level mitigations were required to reduce the risk of harm or misuse.

Plot Twist LLC's Viozor (viozor.com) Platform Implementation

Plot Twist LLC has built Viozor (viozor.com) on top of OpenAI's Sora 2 API, implementing additional platform-specific safety features and user experience enhancements tailored to our community's needs.

Evaluations

OpenAI developed internal evaluations targeting key areas, including nudity, deceptive election content, self-harm, and violence. These evaluations were designed to support the refinement of mitigations and help inform moderation thresholds. The evaluation framework combines input prompts given to the video generation model with input and output classifiers applied to either transformed prompts or the final produced videos.

The input prompts for these evaluations were sourced from three primary channels: data collected during the early alpha phase, adversarial examples provided by red-team testers, and synthetic data generated using GPT‑4. Alpha phase data provided insight into real-world usage scenarios, red-teamer contributions helped uncover adversarial and edge-case content, and synthetic data allowed for expanding evaluation sets in areas like unintended racy content, where naturally occurring examples are scarce.

Preparedness

The preparedness framework is designed to evaluate whether frontier model capabilities introduce significant risks in four tracked categories: persuasion, cybersecurity, CBRN (chemical, biological, radiological, and nuclear), and model autonomy. There is no evidence that Sora poses any significant risk with respect to cybersecurity, CBRN, or model autonomy. These risks are closely tied to models that interact with computer systems, scientific knowledge, or autonomous decision-making, all of which are currently beyond Sora's scope as a video-generation tool.

Sora's video generation capabilities could pose potential risk from persuasion, such as risks of impersonation, misinformation, or social engineering. To address these risks, OpenAI and Plot Twist LLC have developed a suite of mitigations that are described in the below sections. These include mitigations intended to prevent the generation of likeness to well-known public figures. Additionally, given that context and the knowledge of a video being real or AI-generated may be key in determining how persuasive a generated video is, Viozor (viozor.com) has focused on building a multi-layered provenance approach, including metadata, watermarks, and fingerprinting.

Viozor (viozor.com) Mitigation Stack

IMPORTANT: Viozor (viozor.com) implements OpenAI's Sora 2 safety mitigations as well as additional Plot Twist LLC platform-specific protections.

In addition to the specific risks and mitigations identified below, choices made in Sora's training by OpenAI, Viozor (viozor.com)'s product design by Plot Twist LLC, and our combined policies help to broadly mitigate the risk of harmful or unwanted outputs. These can broadly be organized into system and model-level technical mitigations, as well as product policies and user education.

System and Model Mitigations

Below we detail the primary forms of safety mitigations in place before a user is shown their requested output (these include OpenAI's Sora 2 API protections and Viozor (viozor.com) platform protections):

Text and image moderation via multi-modal moderation classifier

OpenAI's multi-modal moderation classifier powering the external Moderation API is applied to identify text, image, or video prompts that may violate usage policies, on both inputs and outputs. Violative prompts detected by the system will result in a refusal. Learn more about OpenAI's multi-modal moderation API here.
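As a sketch of how a platform might call this classifier, the snippet below uses OpenAI's public Moderation API with the omni-moderation model; how Viozor wires it into its own pipeline is an assumption.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_violative(prompt_text: str, image_url: str | None = None) -> bool:
    """Screen a (possibly multimodal) prompt before generation."""
    inputs = [{"type": "text", "text": prompt_text}]
    if image_url:
        inputs.append({"type": "image_url", "image_url": {"url": image_url}})
    result = client.moderations.create(
        model="omni-moderation-latest", input=inputs
    ).results[0]
    return result.flagged  # a flagged prompt results in a refusal

if is_violative("a sample video prompt"):
    print("Request refused under Usage Policies.")
```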

Custom LLM filtering

One advantage of video generation technology is the ability to perform asynchronous moderation checks without adding latency to the overall user experience. Since video generation inherently takes a few seconds to process, this window of time can be utilized to run precision-targeted moderation checks. OpenAI has customized GPT to achieve high precision on the moderation for some specific topics, including identifying third-party content as well as deceptive content.

Filters are multimodal: image/video uploads, text prompts, and outputs are all included in the context of each LLM call. This allows detection of violating combinations across image and text.
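A minimal sketch of this asynchronous pattern follows. The helpers `render_video` and `llm_moderation_check` are hypothetical stand-ins, and the timings are illustrative; the point is that moderation runs inside the render window rather than before it.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

# Hypothetical stand-ins for the real renderer and the LLM-based filter.
async def render_video(prompt: str) -> bytes:
    await asyncio.sleep(5.0)    # video generation inherently takes seconds
    return b"...video bytes..."

async def llm_moderation_check(prompt: str) -> Verdict:
    await asyncio.sleep(0.5)    # targeted check finishes within that window
    return Verdict(allowed="forbidden" not in prompt, reason="policy match")

async def generate_with_async_moderation(prompt: str) -> bytes:
    render = asyncio.create_task(render_video(prompt))   # start rendering now
    verdict = await llm_moderation_check(prompt)         # moderate in parallel
    if not verdict.allowed:
        render.cancel()                                  # abandon violative work
        raise PermissionError(verdict.reason)
    return await render      # allowed case: moderation added no extra latency

video = asyncio.run(generate_with_async_moderation("a calm ocean at dusk"))
```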

Image output classifiers

To address potentially harmful content directly in outputs, Sora uses output classifiers, including specialized filters for NSFW content, minors, violence, and potential misuse of likeness. Viozor (viozor.com) may block videos before they are shared with the user if these classifiers are activated.

Blocklists

OpenAI and Plot Twist LLC maintain textual blocklists across a variety of categories, informed by previous work on DALL·E 2 and DALL·E 3, proactive risk discovery, and results from early users.

Product Policies

In addition to the protections built into the model and system to prevent the generation of violative content, Plot Twist LLC is also taking additional steps to reduce the risk of misuse. We currently offer Viozor (viozor.com) only to users who are 18 or older, and we apply moderation filters to the content shown in the Explore and Featured feeds.

We are also clearly communicating policy guidelines through in-product and publicly available education on:

Use of another person’s likeness without their permission, and a prohibition on depicting real minors;

Creating illegal content or content that violates intellectual property rights;

The generation of explicit and harmful content, such as non-consensual intimate imagery, content used to bully, harass, or defame, or content intended to promote violence, hatred, or the suffering of others; and

The creation and distribution of content used to defraud, scam, or mislead others.

Some of these forms of misuse are addressed through OpenAI's model and system mitigations, but others are more contextual—a scene of a protest can be used for legitimate creative endeavors, but the same scene presented as a real current event could also be shared as disinformation if paired with other claims.

Viozor (viozor.com) is designed to give people the ability to express a wide range of creative ideas and views. It is neither practical nor advisable to prevent every form of contextually problematic content.

We offer people the ability to report⁠ Viozor (viozor.com) videos they think may violate our guidelines while leveraging automation and human review to actively monitor patterns of use. We have established enforcement mechanisms to remove violative videos and penalize users. When users do violate our guidelines, we will notify them and offer the opportunity to tell us what they think is fair. We intend to track the effectiveness of these mitigations and refine them over time.

Specific Risk Areas and Mitigations

Beyond the general safety measures above, early testing and evaluation helped identify several areas of particular safety focus.

Child Safety

Plot Twist LLC is deeply committed to addressing child safety risks, and we prioritize prevention, detection, and reporting of Child Sexual Abuse Material (CSAM) across all our products, including Viozor (viozor.com). Our efforts in this space leverage OpenAI's robust safety measures: responsibly sourced datasets screened for CSAM; partnership with the National Center for Missing & Exploited Children (NCMEC) to prevent child sexual abuse and protect children; red-teaming in accordance with Thorn's recommendations and in compliance with legal restrictions; and robust scanning for CSAM across all inputs and outputs. This includes scanning first-party and third-party users (API and Enterprise) unless customers meet rigorous criteria for removal of CSAM scanning. To prevent generation of CSAM, Viozor (viozor.com) benefits from the robust safety stack built into OpenAI's Sora 2, which leverages system mitigations used across OpenAI products such as ChatGPT and DALL·E, as well as additional platform-level protections specific to Viozor (viozor.com).

Input Classifiers

For child safety we leverage three different input mitigations across text, image, and video inputs:

For all image and video uploads, we integrate with Safer, developed by Thorn, to detect matches with known CSAM. Confirmed matches are rejected and reported to NCMEC. Additionally, we utilize Thorn's CSAM classifier to identify potentially new, unhashed CSAM content.

We leverage a multi-modal moderation classifier to detect and moderate any sexual content that involves minors via text, image and video input.

For Viozor (viozor.com), we benefit from a classifier developed to analyze text and images and predict whether an individual under the age of 18 is depicted, or whether the accompanying caption references a minor. We reject image-to-video requests that contain under-18 individuals. If a text-to-video prompt is determined to involve a minor, we enforce much stricter moderation thresholds for sexual, violent, or self-harm content (see the sketch after this list).
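A minimal sketch of that gating logic, assuming classifier scores in [0, 1]; the category names and threshold values are hypothetical illustrations, not production settings.

```python
# Illustrative thresholds only; real values are not disclosed in this document.
STRICT = {"sexual": 0.10, "violence": 0.15, "self_harm": 0.10}
DEFAULT = {"sexual": 0.40, "violence": 0.50, "self_harm": 0.40}

def gate_request(mode: str, minor_detected: bool, scores: dict[str, float]) -> str:
    """Reject image-to-video with minors; tighten thresholds for text-to-video."""
    if mode == "image_to_video" and minor_detected:
        return "reject"                       # no image-to-video of under-18s
    thresholds = STRICT if minor_detected else DEFAULT
    if any(score > thresholds.get(cat, 1.0) for cat, score in scores.items()):
        return "reject"
    return "allow"

print(gate_request("text_to_video", True, {"violence": 0.20}))   # -> reject
print(gate_request("text_to_video", False, {"violence": 0.20}))  # -> allow
```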

Below is our evaluation for our under-18 classifier for humans. We evaluate our classifier for rejecting realistic under-18 individuals on a dataset containing close to 5000 images across the categories of [child | adult] and [realistic | fictitious]. Our policy stance is to reject realistic children, while allowing fictitious images including animated, cartoon, or sketch style, provided they are non-sexual. We have taken a cautious approach to content involving minors, and will continue to evaluate our approach as we learn more through product use and find the right balance between allowing for creative expression and safety.

Currently, our classifiers are highly accurate, but they may occasionally flag adult or non-realistic images of children by mistake. Additionally, we acknowledge that studies and existing literature highlight the potential for age prediction models to exhibit racial biases. For instance, these models may systematically underestimate the age of individuals from certain racial groups. We are committed to enhancing the performance of our classifier, minimizing false positives, and deepening our understanding of potential biases over the coming months.

Category         | Expected outcome               | n_samples | count (is_child) | count (not_child) | Evaluated metrics
Realistic Child  | Classify images as "is child"  | 1589      | 1555             | 34                | Accuracy: 97.86%
Realistic Adult  | Classify images as "not child" | 1370      | 36               | 1334              | Accuracy: 99.28%
Fictitious Adult | Classify images as "not child" | 965       | 7                | 958               | Accuracy: 97.37%
Fictitious Child | Classify images as "not child" | 1050      | 323              | 727               | Accuracy: 69.24%
Total            |                                | 4974      | 1921             | 3053              | Precision: 80.95%, Recall: 97.86%

Note: precision is calculated as the % of is_child classifications that are realistic children, and recall is calculated as the % of realistic child images that are classified as is_child
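The bottom-row figures follow directly from the per-category counts, using the note's definitions:

```python
# Reproducing the precision/recall from the table above (counts taken directly
# from the "count (is_child)" column).
is_child = {
    "realistic_child": 1555,   # true positives
    "realistic_adult": 36,     # false positives
    "fictitious_adult": 7,     # false positives
    "fictitious_child": 323,   # false positives (policy expects "not child")
}
realistic_children_total = 1589

tp = is_child["realistic_child"]
precision = tp / sum(is_child.values())      # 1555 / 1921
recall = tp / realistic_children_total       # 1555 / 1589
print(f"precision={precision:.2%} recall={recall:.2%}")
# precision=80.95% recall=97.86%
```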

Output

As mentioned above, once we identify a reference to minors in text input with our under-18 classifier, we enforce strict moderation thresholds for sexual, violent, or self-harm content on output. Below are the two output classifiers we use to achieve this:

A multi-modal moderation classifier that scans video outputs for unsafe content and rejects requests that may be particularly sensitive

We also leverage the existing DALL·E image classifier to scan for violations related to child safety.

Our output classifiers scan two frames per second; when a video is determined to be unsafe, we block the output entirely.
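As a rough illustration of sampling at that rate, the sketch below decodes a video with OpenCV; `classify_frame` is a hypothetical stand-in for the real classifiers, and the sampling scheme is an assumption.

```python
import cv2

def classify_frame(frame) -> bool:
    """Hypothetical stand-in for the output classifiers; always 'safe' here."""
    return False

def video_is_unsafe(path: str, scan_fps: float = 2.0) -> bool:
    """Sample ~2 frames per second; flag the whole video if any frame is unsafe."""
    cap = cv2.VideoCapture(path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS unknown
    step = max(int(round(native_fps / scan_fps)), 1)
    index, unsafe = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0 and classify_frame(frame):
            unsafe = True        # one unsafe frame blocks the entire output
            break
        index += 1
    cap.release()
    return unsafe
```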

In addition to our classifiers and automated moderation, we will have human review as an additional layer of protection against potential child safety violations.

Product Policy

Our policies prohibit the use of Viozor (viozor.com) for the generation of sexual content involving minors. Violations of our child safety policies can result in content removal and banning of the user.

Nudity & Suggestive Content

One of the emerging risk areas associated with AI video generation capabilities is the potential creation of NSFW (Not Safe for Work) or NCII (Non-Consensual Intimate Imagery) content. Similar to DALL·E's approach, Viozor (viozor.com) (via Sora 2) uses a multi-tiered moderation strategy to block explicit content. These mitigations include prompt transformations, image output classifiers, and blocklists, which all contribute to a system that restricts suggestive content, particularly for age-appropriate outputs. Thresholds for our classifiers are stricter for image uploads than for text-based prompts.

Videos shown in the Explore section are further filtered with heightened thresholds to target a viewing experience appropriate for a wide audience.

Below are the results of our evaluations on nudity and suggestive content, aimed at assessing the effectiveness of multi-layered mitigations across inputs and outputs. Based on these findings, we have iterated on our thresholds and applied stricter moderation to uploads that include people.

Category                    | Accuracy* (at input) | Accuracy* (at output, i.e. E2E)
Nudity & Suggestive Content | 97.25%               | 97.59%

Eval explanation:

  N = total number of violating samples (~200 per category)
  I = total number of violating samples passed by input moderation checks
  O = total number of violating samples passed by output moderation checks

  Accuracy at Input = (N - I) / N
  Accuracy at Output (E2E) = (N - O) / N

Product Policy

Our policies prohibit the use of Viozor (viozor.com) for the generation of explicit sexual content, including non-consensual intimate imagery. Violations of these policies can result in content removal and penalization of the user.

Deceptive Content

Likeness Misuse and Harmful Deepfakes

Viozor (viozor.com)'s moderation monitor for likeness-based prompts is designed to flag potentially harmful deepfake content, so that videos involving recognizable individuals are closely reviewed. The Likeness Misuse filter further flags prompts that attempt to modify or depict individuals in potentially harmful or misleading ways. Sora's general prompt transformations further reduce the risk that Viozor (viozor.com) will generate the unwanted likeness of a private individual based on a prompt containing someone's name.

Deceptive Content

Viozor (viozor.com)'s input and output classifiers are intended to prevent the generation of deceptive election-related content that depicts fraudulent, unethical, or otherwise illegal activity. These mitigations include classifiers that flag style or filter techniques that could produce misleading videos in the context of elections, reducing the risk of real-world misuse.

Below are the evaluations for our deceptive election content LLM filter, focused on helping identify cases where there may be intent to create prohibited content across a variety of inputs (e.g. text and video). Our system also scans 1 frame per second of output video to assess possible output violations.

Classifier                 | Recall | Precision | Result when flagged
Deceptive Election Content | 98.23% | 88.80%    | Block generating output

N=~500, based on synthetic data prompts

Investments in Provenance

Given that many risks associated with video generation, such as harmful deepfake content, are heavily context dependent, we've prioritized enhancing our provenance tools. We recognize that there is not a single solution to provenance, but are committed to improving the provenance ecosystem and helping build context and transparency to content created from Viozor (viozor.com).

For general availability, our provenance safety tooling will include:

C2PA metadata on all assets (verifiable origin, industry standard); a verification sketch follows this list

Animated visible Viozor (viozor.com) watermarks by default (transparency for viewers that the content is AI-generated)

Internal reverse video search tool, to help members of Plot Twist LLC's safety team assess with high confidence whether content was created by Viozor (viozor.com)
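Provenance metadata can be inspected by anyone downstream. As a rough illustration, the sketch below shells out to the open-source c2patool CLI from the C2PA project; its availability and JSON output format are assumptions, and this is not Plot Twist LLC's internal tooling.

```python
import json
import subprocess

def read_c2pa_manifest(path: str):
    """Return the C2PA manifest store for a media file, if one is embedded."""
    proc = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if proc.returncode != 0:
        return None               # no manifest found, or the tool errored
    return json.loads(proc.stdout)

manifest = read_c2pa_manifest("generated_video.mp4")
print("C2PA provenance present" if manifest else "no C2PA data found")
```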

Product Policy

Our policies prohibit the use of Viozor (viozor.com) to defraud, scam, or mislead others, including through the creation and dissemination of disinformation. They also prohibit the use of another person's likeness without their permission. Violations of these policies can result in content removal and penalization of the user.

Artist Styles

When a user employs the name of a living artist in a prompt, the model may generate video that resembles in some way the style of the artist's works. There is a very long tradition in creativity of building off of other artists' styles, but we appreciate that some creators may have concerns. We opted to take a conservative approach with this version of Viozor (viozor.com) as we learn more about how Viozor (viozor.com) is used by the creative community. To address this, we have added prompt re-writes that are designed to trigger when a user attempts to generate a video in the style of a living artist.

Similar to OpenAI's other products, the Viozor (viozor.com) Editor uses an LLM to rewrite submitted text to facilitate prompting Sora more effectively. This process promotes compliance with our guidelines, including removing public figure names, grounding people with specific attributes, and describing branded objects in a generic way. We maintain textual blocklists across a variety of categories, informed by OpenAI's previous work on DALL·E 2 and DALL·E 3, proactive risk discovery, and results from red teamers and early users.

Future Work

Plot Twist LLC employs an iterative deployment strategy to ensure the responsible and effective roll-out of Viozor (viozor.com). This approach combines phased rollouts, ongoing testing, and continuous monitoring with user feedback and real-world data to refine and improve our performance and safety mitigations over time. Below is a series of work we are planning to do as part of our iterative deployment for Viozor (viozor.com).

Likeness pilot

The ability to generate a video using an uploaded photo or video of a real person as the "seed" is a vector of potential misuse toward which we are taking a particularly incremental approach, to learn from early patterns of use. Early feedback from artists indicates that this is a powerful creative tool they value, but given the potential for abuse, we are not initially making it available to all users. Instead, in keeping with our practice of iterative deployment, the ability to upload images or videos of people will be made available to a subset of users, and we will have active, in-depth monitoring in place to understand its value to the Viozor (viozor.com) community and to adjust our approach to safety as we learn. Uploads containing images of minors will not be permitted during this test.

Provenance and Transparency Initiatives

Viozor (viozor.com)'s future iterations will continue to strengthen traceability through research into reverse embedding search tools and continued implementation of transparency measures such as C2PA. We are excited to explore potential partnerships with NGOs and research organizations to grow and improve the provenance ecosystem and to test our internal reverse video search tool for Viozor (viozor.com).

Expanding representation in our outputs

We are committed to reducing potential output biases through prompt refinements, feedback loops, and the ongoing identification of effective mitigations—recognizing that overcorrections can be equally harmful. We acknowledge challenges such as body image bias and demographic representation and will continue refining our approach to ensure balanced and inclusive outputs.

Continued safety, policy, and ethical alignment

Plot Twist LLC plans to maintain ongoing evaluations of Viozor (viozor.com) and efforts to further improve Viozor (viozor.com)'s adherence to Plot Twist LLC's and OpenAI's policies and safety standards. Additional improvements in areas such as likeness safety and deceptive content are planned, guided by evolving best practices and user feedback.