OpenMark — Protect + Disrupt (One-Shot)

Apply an invisible watermark (UUID) + learning disruption with a single click!

Protect your content from being trained on by large AI models while maintaining virtually identical visual quality to the original.

References

OpenAI CLIP: https://github.com/openai/CLIP

Microsoft InvisMark: https://github.com/microsoft/InvisMark

Radford et al., "Learning Transferable Visual Models From Natural Language Supervision", ICML 2021.

Zhang et al., "InvisMark: Invisible and Robust Watermarking for AI-generated Image Provenance", CVPR 2024.

✨ Core Features

One-Shot Processing: Upload an image to apply protection (watermark), add disruption, and receive the final image, its unique UUID, and diagnostic visualizations all at once.

Guaranteed Watermark Recovery: The invisible watermark is verified via an internal decoder immediately after insertion and is auto-corrected if necessary.

Maintains Visual Quality: Achieves an average PSNR of ≈ 42–44 dB (default settings) by suppressing high-frequency noise.

Learning Disruption: Applies a lightweight EOT-PGD attack to disrupt AI model training, with partial robustness to JPEG compression and resizing (a sketch of the attack follows this feature list).

Diagnostic Visualizations: See from an "AI's perspective" with a Residual Heatmap, FFT analysis, and an Overlay view to understand the changes made.
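The learning-disruption feature above is described as a lightweight EOT-PGD attack. Below is a minimal sketch of such an attack against an OpenCLIP ViT-B/32 surrogate encoder, assuming images are already resized to the model resolution and scaled to [0, 1]; the function name `eot_pgd_disrupt`, the step/epsilon parameters, and the use of only a resize transform for the EOT step are illustrative assumptions, not OpenMark's actual API.

```python
# Minimal EOT-PGD sketch (illustrative, not OpenMark's implementation):
# pushes the image's CLIP embedding away from its clean embedding within an
# epsilon ball, averaging gradients over random robustness transforms.
import torch
import torch.nn.functional as F
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

# Standard CLIP input normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1)

def encode(x):
    """L2-normalized CLIP image embedding for a [0, 1] tensor batch."""
    return F.normalize(model.encode_image((x - CLIP_MEAN) / CLIP_STD), dim=-1)

def random_transform(x):
    """One random EOT transform: downscale and restore, mimicking resize loss."""
    scale = float(torch.empty(1).uniform_(0.5, 1.0))
    h, w = x.shape[-2:]
    small = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear", align_corners=False)

def eot_pgd_disrupt(image, steps=20, epsilon=4 / 255, alpha=1 / 255, eot_samples=4):
    """image: (1, 3, 224, 224) tensor in [0, 1]. Returns a perturbed copy."""
    with torch.no_grad():
        target = encode(image)                              # clean embedding to move away from
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        similarity = sum((encode(random_transform(image + delta)) * target).sum()
                         for _ in range(eot_samples)) / eot_samples
        similarity.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()               # step away from the clean embedding
            delta.clamp_(-epsilon, epsilon)                  # stay within the epsilon ball
            delta.copy_((image + delta).clamp(0, 1) - image) # keep pixel values valid
        delta.grad.zero_()
    return (image + delta).detach()
```

Averaging the gradient over random transforms (the EOT step) is what gives the perturbation its partial robustness to resizing; a differentiable JPEG approximation could be added to the same transform pool to target compression as well.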

Terminology

UUID: A unique text identifier hidden within the image, used for tracking and authentication.

Residual Heatmap: A map that visualizes minute, often imperceptible, differences between two images using a color gradient.

FFT (Fast Fourier Transform): Displays the image's data in the frequency domain, which is useful for identifying hidden patterns.

Overlay: A view that exaggerates the alterations and displays them on top of the original image.

PSNR (Peak Signal-to-Noise Ratio): A metric for image quality loss between the original and processed images. A higher value indicates better quality. A short sketch showing how these diagnostics can be computed follows below.
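The diagnostics above reduce to a few lines of array arithmetic. The sketch below, using NumPy and Pillow, shows one way to compute a PSNR value, a residual heatmap, and an FFT log-magnitude spectrum for an original/processed pair; the file names are placeholders and this is not the toolkit's own visualization code.

```python
# Illustrative diagnostics for an original/processed image pair:
# PSNR, residual heatmap, and FFT log-magnitude spectrum.
import numpy as np
from PIL import Image

def load(path):
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)

def psnr(original, processed, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means less visible change."""
    mse = np.mean((original - processed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_value ** 2 / mse)

def residual_heatmap(original, processed, gain=20.0):
    """Absolute per-pixel difference, exaggerated by `gain` for visibility."""
    diff = np.abs(original - processed).mean(axis=2)   # average over RGB channels
    return np.clip(diff * gain, 0, 255).astype(np.uint8)

def fft_magnitude(image):
    """Log-magnitude spectrum of the grayscale image, DC component centered."""
    gray = image.mean(axis=2)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    mag = np.log1p(np.abs(spectrum))
    return (255 * mag / mag.max()).astype(np.uint8)

original, processed = load("original.png"), load("protected.png")  # placeholder paths
print(f"PSNR: {psnr(original, processed):.2f} dB")
Image.fromarray(residual_heatmap(original, processed)).save("residual.png")
Image.fromarray(fft_magnitude(processed)).save("fft.png")
```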

(Demo result image)

🧪 Experiment Background & Results

This project began with a successful experiment on the CIFAR-10 dataset, where we drastically reduced a CLIP model's zero-shot accuracy from 76% to 26%.

However, while validating the technique on the more realistic, high-resolution Oxford-IIIT Pets dataset, we found that the initial disruption settings had a minimal effect, reducing accuracy by only 4.35 percentage points.

After analyzing the cause, we tuned the text prompts and the disruption-strength parameters, which induced a much stronger degradation in the model's performance.

CLIP Zero-Shot Performance Comparison (Oxford-IIIT Pets, 525 Samples)

This experiment directly evaluates how our watermarking and disruption techniques affect an AI model's "vision" by assessing how well an OpenCLIP model recognizes images without any additional training.

Experiment Overview:

Model: OpenCLIP ViT-B/32 (laion2b_s34b_b79k)

Data: Oxford-IIIT Pets (525 samples)

Evaluation: A comparison of zero-shot Top-1 accuracy across three versions of the dataset (an evaluation sketch follows this list):

Clean: The original, unprocessed images.

WM-only: Images with only the invisible watermark applied.

WM+Disrupt: Images with both the watermark and the enhanced disruption applied.
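For reference, the sketch below shows how a zero-shot Top-1 evaluation of this kind is typically run with OpenCLIP on Oxford-IIIT Pets. The prompt template, batch size, and dataset wiring are assumptions rather than the exact script behind the numbers reported here, and it runs over the full test split rather than the 525-sample subset.

```python
# Illustrative zero-shot Top-1 evaluation with OpenCLIP ViT-B/32 (laion2b_s34b_b79k).
import torch
import open_clip
from torchvision.datasets import OxfordIIITPet

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k", device=device)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

dataset = OxfordIIITPet(root="data", split="test", transform=preprocess, download=True)
class_names = [c.replace("_", " ") for c in dataset.classes]

# One text embedding per class from a simple prompt template (an assumption).
with torch.no_grad():
    prompts = tokenizer([f"a photo of a {name}, a type of pet" for name in class_names]).to(device)
    text_feats = model.encode_text(prompts)
    text_feats /= text_feats.norm(dim=-1, keepdim=True)

correct = total = 0
loader = torch.utils.data.DataLoader(dataset, batch_size=64)
with torch.no_grad():
    for images, labels in loader:
        img_feats = model.encode_image(images.to(device))
        img_feats /= img_feats.norm(dim=-1, keepdim=True)
        preds = (img_feats @ text_feats.T).argmax(dim=-1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"Zero-shot Top-1 accuracy: {100 * correct / total:.2f}% ({total} samples)")
```

Running the same loop over the Clean, WM-only, and WM+Disrupt image sets yields the three accuracy figures compared below.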

📊 Key Results

By applying the tuned parameters, we confirmed that the disruption technique is highly effective at degrading the AI model's recognition accuracy.

(Figure: pets500_clip_bars.png — CLIP zero-shot accuracy comparison on the Oxford-IIIT Pets samples)

Results Analysis:

Clean (Original): Showed a high baseline performance of 79.05%.

WM-only (Watermark): Scored 78.48%, demonstrating that the watermark itself has a negligible impact on AI recognition.

WM+Disrupt (Learning Disruption): Plummeted to 59.81%, a significant drop of 19.24 percentage points.

Conclusion: We have demonstrated that OpenMark can effectively protect content by significantly impairing an AI model's image recognition capabilities, all while preserving the visual quality of the images.
