Flint Output is synthetic media generated by AI. This policy explains how we label Output and what you must do when publishing it. It forms part of our Terms of Service.
About this policy
Flint produces videos that can include AI-generated humans, voices, and scenes. That brings disclosure obligations — some under platform rules, some under law. This policy explains how we help (by embedding provenance and applying watermarks) and what you must do (by enabling platform AI toggles and, where needed, adding visible disclosure).
Regulatory context
- US federal: FTC Endorsement Guides; FTC Trade Regulation Rule on Impersonation of Government and Businesses; proposed rules extending that rule to AI-generated impersonation of individuals.
- US state: Tennessee ELVIS Act (voice cloning); California AB 730 / AB 2839 / AB 2655 (election deepfakes), SB 1001 (bot disclosure); Texas SB 751 (election deepfakes); Michigan HB 5141; Minnesota Statutes § 609.771; Washington SB 5152; New York deepfake and non-consensual imagery laws; Illinois BIPA; Washington My Health My Data Act.
- Platforms: Meta AI-generated content disclosure for paid ads; Google Ads synthetic-content and election-ads policy; YouTube altered/synthetic content disclosure; TikTok AI-generated content label; Instagram / Facebook AI labelling; LinkedIn AI content disclosures; Pinterest AI-generated content labelling.
How Flint labels Output
- Provenance metadata: we embed C2PA-style content credentials where supported, identifying Output as AI-generated, noting the model(s) used, and including a content identifier.
- Visible watermark: we may apply a visible "AI-generated" watermark on Output, especially for human-like actors. Positioning and opacity may be configurable on paid plans, but the watermark cannot be fully removed without written authorisation.
- File metadata: EXIF and container metadata fields may include a content ID, a model identifier, and a generation timestamp.
- Traceability: we retain generation logs to support takedown investigations and legal requests.
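Before publishing, it can be worth confirming that your editing or transcoding pipeline has not stripped the embedded disclosure fields. The sketch below is illustrative only: the field names (`content_id`, `model_id`, `generated_at`, `ai_generated`) are hypothetical stand-ins, not Flint's actual metadata schema, and a real check would read them from the file with a metadata tool rather than from a dict.

```python
# Hypothetical pre-publish check that a clip's metadata still carries
# the AI-disclosure fields described above. Field names are illustrative
# assumptions, not Flint's actual schema.

REQUIRED_FIELDS = {"content_id", "model_id", "generated_at", "ai_generated"}

def disclosure_intact(metadata: dict) -> bool:
    """True if every required disclosure field is present and the
    content is still flagged as AI-generated."""
    return REQUIRED_FIELDS <= metadata.keys() and metadata.get("ai_generated") is True

# Example metadata as a transcoder might surface it:
sample = {
    "content_id": "flint-0001",          # hypothetical content identifier
    "model_id": "example-model-v1",      # hypothetical model identifier
    "generated_at": "2025-01-01T00:00:00Z",
    "ai_generated": True,
}
print(disclosure_intact(sample))
```

If the check fails after re-encoding, re-export with metadata preservation enabled rather than publishing; removing these fields breaches the obligations below.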
Your obligations
- Do not remove, alter, or obscure provenance metadata, watermarks, or disclosure labels.
- On every platform that offers an AI-generated / altered-content toggle, enable it.
- Where a reasonable viewer might believe the actor is real, include a visible disclosure in the caption or on-screen (e.g., "Featuring an AI-generated actor").
- Do not present Output as a factual recording or real testimonial.
- Do not combine Output with real audio or video to falsify any real person's identity or statements.
- Do not generate Output depicting identifiable real persons (public or private) without documented written consent.
- For political content, comply with applicable election and political-ad rules; political deepfakes of real people are prohibited outright.
Prohibited synthetic media
- Deepfakes of real identifiable persons without consent.
- Political deepfakes intended to be mistaken for real statements by political figures.
- Medical, scientific, or safety claims attributed to real professionals or institutions.
- Sexual, violent, or abusive deepfakes of any person.
- Voice clones of musicians, performers, or other public figures without consent.
- Content involving minors in any inappropriate context.
Takedown & reporting
If you believe your likeness or voice has been used in Output without consent, email support@flintads.com (Grievance Officer: Jivesh Gupta). We acknowledge reports within 24 hours and investigate. Where warranted, we remove or disable the Output and may suspend the responsible account.
Updates
Synthetic media regulation is developing quickly. We will revise this policy as rules evolve and notify users of material changes.
Still got a question?
Email us and we'll get back to you within two business days.
support@flintads.com