Introducing Meta Seal

State-of-the-Art, Open Source Invisible Watermarking

A state-of-the-art, open-source framework for invisible, robust watermarking across all modalities: audio, image, video, and text. This suite spans the entire generative lifecycle, from training data and inference to generated media.

Research Overview

Watermarking across all modalities

Meta Seal offers state-of-the-art, open-source tools for embedding robust and imperceptible watermarks across images, video, audio, text, and generative AI models. As AI-generated content becomes increasingly sophisticated, provenance tracking and authentication mechanisms are more critical than ever.

Post-hoc watermarking

Watermarks applied after content generation by any model or system. This model-agnostic approach works universally across images, video, audio, and text, no matter how the content was created. Designed for protecting existing content and third-party generated media.
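In practice, post-hoc watermarking is a two-step pipeline: one model embeds an imperceptible signal into finished content, and a paired detector recovers it later. Below is a minimal sketch of the embedding side using the released AudioSeal models (see Audio below), assuming audioseal is installed via pip; tensor shapes and model-card names follow the AudioSeal README, so check the repository for the current API.

```python
# Embedding side of the post-hoc pipeline, sketched with AudioSeal.
import torch
from audioseal import AudioSeal

# Pretrained generator that embeds a 16-bit message into audio.
model = AudioSeal.load_generator("audioseal_wm_16bits")

# One second of mono audio at 16 kHz, shaped (batch, channels, samples).
# In practice this would come from torchaudio.load or similar.
wav = torch.randn(1, 1, 16000)

# The generator predicts a low-energy additive watermark for this input,
# so the watermarked audio stays perceptually identical to the original.
watermark = model.get_watermark(wav, sample_rate=16000)
watermarked = wav + watermark
```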

🎬

Image & Video

WAM (Watermark Anything Model)

Embeds one or more localized watermarks into images; the watermarks survive inpainting and splicing attacks.

Sync Seal

Watermarking models for robust image synchronization, enabling the reversal of geometric transformations applied to an image.

🎵

Audio

AudioSeal

Localized audio watermarking with sample-level detection, designed for tracing AI-generated speech.

📝

Text


In-model & generation-time watermarking

Watermarks embedded during the generation process by modifying the model's latent representations or embedding the watermark directly in the model weights. This native approach enables seamless integration with diffusion models, LLMs, and other generative systems for real-time watermarking as content is created.
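As a concrete illustration for LLMs, here is a generic sketch of one well-known generation-time scheme, greenlist logit biasing in the style of Kirchenbauer et al., rather than the exact algorithm used by any project listed below. At each decoding step the previous token seeds a pseudorandom split of the vocabulary, and the green half receives a small logit boost that detection can later test for statistically. The parameter names (gamma, delta, key) and the vocabulary size are illustrative.

```python
import torch

def greenlist_bias(logits: torch.Tensor, prev_token: int,
                   gamma: float = 0.5, delta: float = 2.0, key: int = 42):
    """Add a bias to a pseudorandom 'green' subset of the vocabulary.

    gamma: fraction of the vocabulary marked green.
    delta: logit boost for green tokens (strength vs. text quality).
    The split is seeded by (key, prev_token), so a detector holding the
    key can recompute it at each position and count green hits.
    """
    vocab_size = logits.shape[-1]
    gen = torch.Generator().manual_seed(key * 1_000_003 + prev_token)
    green = torch.randperm(vocab_size, generator=gen)[: int(gamma * vocab_size)]
    biased = logits.clone()
    biased[green] += delta
    return biased

# At each decoding step, bias the logits before sampling:
logits = torch.randn(32_000)          # hypothetical next-token logits
probs = torch.softmax(greenlist_bias(logits, prev_token=17), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```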

Dist Seal

Unified latent-space watermarking that enables a 20× speedup over pixel-space methods and secures open-source models via in-model distillation.

Stable Signature

Roots the watermark in the model's latent decoder for tracing the outputs of latent generative models.

WMAR

Watermarking for autoregressive image generation models.

Dataset watermarking

Watermarks embedded into training datasets to track data provenance and usage. This technique enables dataset creators to verify if their data was used to train specific models, providing accountability and attribution in the AI training pipeline.

Radioactive watermarks

Detects whether a language model was trained on watermarked synthetic text by picking up weak residual watermark signals in fine-tuned LLMs, with high-confidence detection even when as little as 5% of the training text is watermarked.
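Detection in this setting reduces to a hypothesis test. As a simplified sketch of the idea (the papers use more refined, theoretically grounded variants, and the same style of test underlies the benchmark-contamination work below): under the null hypothesis that a model never saw watermarked text, each scored token is green with probability gamma, so the green count is binomial and a z-score converts it into a confidence level.

```python
import math

def watermark_zscore(green_hits: int, total: int, gamma: float = 0.5) -> float:
    """Z-score of the observed green-token count under the null hypothesis.

    Under H0 (no exposure to watermarked text), hits ~ Binomial(total, gamma).
    A large positive z means the null is very unlikely to hold.
    """
    expected = gamma * total
    std = math.sqrt(total * gamma * (1.0 - gamma))
    return (green_hits - expected) / std

# Example: 5,600 green tokens out of 10,000 scored, with gamma = 0.5.
z = watermark_zscore(5_600, 10_000)
p = 0.5 * math.erfc(z / math.sqrt(2.0))  # one-sided Gaussian p-value
print(f"z = {z:.1f}, p ~ {p:.1e}")       # z = 12.0, a vanishingly small p
```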

Detecting benchmark contamination through watermarking

Watermarks benchmarks before release to detect if models were trained on test sets, using theoretically grounded statistical tests to identify contamination while preserving benchmark utility.

Watermark security

Research on adversarial attacks and defenses for watermarking systems. Red teaming efforts explore vulnerabilities like watermark removal, forgery, and spoofing attacks, while developing robust countermeasures to help watermarks remain secure and reliable.

WMForger

Black-box watermark forging using image preference models for red-teaming watermarking systems.

Try it yourself

Get the Models

Access our official research implementations. Open weights, MIT Licensed, and designed for immediate scientific exploration.

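For a first run, detection takes only a few lines. Below is an illustrative sketch in the spirit of the page's watermark_detect.py sample, using AudioSeal's detector; model-card names follow the AudioSeal README, and the exact API may differ across releases.

```python
# watermark_detect.py (illustrative sketch, not the shipped sample)
import torch
from audioseal import AudioSeal

# Pretrained detector paired with the 16-bit watermark generator.
detector = AudioSeal.load_detector("audioseal_detector_16bits")

# Audio to test, shaped (batch, channels, samples) at 16 kHz; in practice
# loaded from a file rather than generated as random noise.
wav = torch.randn(1, 1, 16000)

# Returns the probability that the clip carries a watermark, plus the
# decoded 16-bit message embedded by the matching generator.
prob, message = detector.detect_watermark(wav, sample_rate=16000)
print("P(watermarked):", prob)
print("decoded message:", message)
```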
Applications

Learn More

Discover how invisible watermarking technology is being explored to help address content provenance challenges across media platforms.