A state-of-the-art, open-source framework for invisible, robust watermarking across all modalities: audio, image, video, and text. This suite spans the entire generative lifecycle, from training data and inference to generated media.
Meta Seal offers state-of-the-art, open-source tools for embedding robust and imperceptible watermarks across images, video, audio, text, and generative AI models. As AI-generated content becomes increasingly sophisticated, provenance tracking and authentication mechanisms are more critical than ever.
Watermarks applied after content generation by any model or system. This model-agnostic approach works universally across images, video, audio, and text, no matter how the content was created. Designed for protecting existing content and third-party generated media.
Watermarks embedded during the generation process by modifying the model's latent representations or embedding the watermark directly in the model weights. This native approach enables seamless integration with diffusion models, LLMs, and other generative systems for real-time watermarking as content is created.
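To make the post-hoc idea concrete, here is a minimal spread-spectrum-style sketch: embed a key-derived pseudo-random pattern into an already-generated image, then detect it by correlation. This is an illustrative toy, not Meta Seal's actual algorithm or API; the function names, the `strength` parameter, and the correlation threshold are all assumptions for demonstration.

```python
import numpy as np

def embed(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Toy post-hoc watermark: add a low-amplitude pattern derived from a secret key.

    Hypothetical API for illustration; real systems use learned embedders.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect(image: np.ndarray, key: int) -> float:
    """Correlate the (centered) image with the key's pattern.

    A score near the embedding strength suggests the watermark is present;
    a score near zero suggests it is absent or the key is wrong.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return float(np.mean((image - image.mean()) * pattern))

# Usage: the correct key yields a strong correlation, wrong keys do not.
img = np.full((64, 64), 128.0)   # placeholder "generated" image
wm = embed(img, key=42)
score_right = detect(wm, key=42)   # close to the embedding strength
score_wrong = detect(wm, key=999)  # close to zero
```

Because embedding happens after generation, the same embed/detect pair applies regardless of which model produced the content, which is exactly the model-agnostic property described above.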
Unified latent space watermarking that enables a 20× speedup over pixel methods and secures open-source models via in-model distillation.
Roots the watermark in the model's latent decoder for tracing the outputs of latent generative models.
Watermarking for autoregressive image generation models.
Watermarks embedded into training datasets to track data provenance and usage. This technique enables dataset creators to verify if their data was used to train specific models, providing accountability and attribution in the AI training pipeline.
Designed to determine whether a language model was trained on synthetic text by detecting weak residuals of watermark signals in fine-tuned LLMs, with high-confidence detection even when as little as 5% of the training text is watermarked.
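The kind of statistical test behind such detection can be sketched with the well-known red/green-list scheme: a secret key partitions the vocabulary per context, watermarked text over-selects "green" tokens, and a z-score against the expected 50% green rate flags the bias. This is a generic illustration of the technique class, not the specific method used here; the key, hash rule, and 50/50 split are assumptions.

```python
import hashlib
import math

def is_green(prev_token: str, token: str, key: str = "secret") -> bool:
    """Illustrative green-list rule: hash (key, context, token), keep half the vocab."""
    h = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return h[0] % 2 == 0

def green_fraction_z(tokens: list[str], key: str = "secret") -> float:
    """z-score of the observed green-token count vs. the 50% expected by chance.

    Large positive values indicate watermark residue; near zero indicates none.
    """
    n = len(tokens) - 1
    greens = sum(is_green(a, b, key) for a, b in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# Simulate watermarked text by always preferring a green continuation.
vocab = [f"w{i}" for i in range(50)]
text = ["w0"]
for _ in range(200):
    nxt = next((t for t in vocab if is_green(text[-1], t)), vocab[0])
    text.append(nxt)
z = green_fraction_z(text)  # strongly positive for the biased sequence
```

Detecting residuals in a model fine-tuned on such text is harder, since the signal is diluted through training, but it rests on the same hypothesis-test logic with an appropriately calibrated null distribution.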
Watermarks benchmarks before release to detect if models were trained on test sets, using theoretically grounded statistical tests to identify contamination while preserving benchmark utility.
Research on adversarial attacks and defenses for watermarking systems. Red teaming efforts explore vulnerabilities like watermark removal, forgery, and spoofing attacks, while developing robust countermeasures to help watermarks remain secure and reliable.
Black-box watermark forging using image preference models for red-teaming watermarking systems.
Access our official research implementations. Open weights, MIT Licensed, and designed for immediate scientific exploration.
Discover how invisible watermarking technology is being explored to help address content provenance challenges across media platforms.
Learn how Meta scaled invisible watermarking from research to production, addressing real-world challenges including originality protection on Instagram and detecting AI-generated videos. Read the engineering blog.
Audio Seal and Video Seal watermark 4,000+ hours of dyadic conversations with streaming capability and robust detection for AI-generated content. Explore dataset.
Audio watermarking and detection technology that delivers industry-leading performance in detection accuracy, imperceptibility, and speed. Learn more.
Meta uses invisible watermarking to enable transparency labels on AI-generated content across Facebook, Instagram, and Threads. Learn about AI-generated image labeling and transparency in ads.