- [2024/12] Copyright-Protected Language Generation via Adaptive Model Fusion
- [2024/12] Black-Box Forgery Attacks on Semantic Watermarks for Diffusion Models
- [2024/11] SoK: Watermarking for AI-Generated Content
- [2024/11] CDI: Copyrighted Data Identification in Diffusion Models
- [2024/11] CopyrightMeter: Revisiting Copyright Protection in Text-to-image Models
- [2024/11] WaterPark: A Robustness Assessment of Language Model Watermarking
- [2024/11] One Prompt to Verify Your Models: Black-Box Text-to-Image Models Verification via Non-Transferable Adversarial Attacks
- [2024/11] Debiasing Watermarks for Large Language Models via Maximal Coupling
- [2024/11] CLUE-MARK: Watermarking Diffusion Models using CLWE
- [2024/11] SoK: On the Role and Future of AIGC Watermarking in the Era of Gen-AI
- [2024/11] Conceptwm: A Diffusion Model Watermark for Concept Protection
- [2024/11] LLM App Squatting and Cloning
- [2024/11] InvisMark: Invisible and Robust Watermarking for AI-generated Image Provenance
- [2024/11] Watermarking Language Models through Language Models
- [2024/11] Revisiting the Robustness of Watermarking to Paraphrasing Attacks
- [2024/11] ROBIN: Robust and Invisible Watermarks for Diffusion Models with Adversarial Optimization
- [2024/10] Embedding Watermarks in Diffusion Process for Model Intellectual Property Protection
- [2024/10] Shallow Diffuse: Robust and Invisible Watermarking through Low-Dimensional Subspaces in Diffusion Models
- [2024/10] Inevitable Trade-off between Watermark Strength and Speculative Sampling Efficiency for Language Models
- [2024/10] Watermarking Large Language Models and the Generated Content: Opportunities and Challenges
- [2024/10] Robust Watermarking Using Generative Priors Against Image Editing: From Benchmarking to Advances
- [2024/10] Provably Robust Watermarks for Open-Source Language Models
- [2024/10] REEF: Representation Encoding Fingerprints for Large Language Models
- [2024/10] CoreGuard: Safeguarding Foundational Capabilities of LLMs Against Model Stealing in Edge Deployment
- [2024/10] NSmark: Null Space Based Black-box Watermarking Defense Framework for Pre-trained Language Models
- [2024/10] UTF: Undertrained Tokens as Fingerprints: A Novel Approach to LLM Identification
- [2024/10] FreqMark: Frequency-Based Watermark for Sentence-Level Detection of LLM-Generated Text
- [2024/10] MergePrint: Robust Fingerprinting against Merging Large Language Models
- [2024/10] An Undetectable Watermark for Generative Image Models
- [2024/10] WAPITI: A Watermark for Finetuned Open-Source LLMs
- [2024/10] Signal Watermark on Large Language Models
- [2024/10] Ward: Provable RAG Dataset Inference via LLM Watermarks
- [2024/10] Universally Optimal Watermarking Schemes for LLMs: from Theory to Practice
- [2024/10] Can Watermarked LLMs be Identified by Users via Crafted Prompts?
- [2024/10] A Watermark for Black-Box Language Models
- [2024/10] Optimizing Adaptive Attacks against Content Watermarks for Language Models
- [2024/10] Discovering Clues of Spoofed LM Watermarks
- [2024/09] Dormant: Defending against Pose-driven Human Image Animation
- [2024/09] A Certified Robust Watermark For Large Language Models
- [2024/09] Multi-Designated Detector Watermarking for Language Models
- [2024/09] Measuring Copyright Risks of Large Language Model via Partial Information Probing
- [2024/09] Towards Effective User Attribution for Latent Diffusion Models via Watermark-Informed Blending
- [2024/09] PersonaMark: Personalized LLM watermarking for model protection and user attribution
- [2024/09] FP-VEC: Fingerprinting Large Language Models via Efficient Vector Addition
- [2024/08] Watermarking Techniques for Large Language Models: A Survey
- [2024/08] MCGMark: An Encodable and Robust Online Watermark for LLM-Generated Malicious Code
- [2024/08] Robustness of Watermarking on Text-to-Image Diffusion Models
- [2024/08] Hide and Seek: Fingerprinting Large Language Models with Evolutionary Learning
- [2024/07] Strong Copyright Protection for Language Models via Adaptive Model Fusion
- [2024/07] LLMmap: Fingerprinting For Large Language Models
- [2024/07] SLIP: Securing LLMs IP Using Weights Decomposition
- [2024/07] Hey, That's My Model! Introducing Chain & Hash, An LLM Fingerprinting Technique
- [2024/07] Building Intelligence Identification System via Large Language Model Watermarking: A Survey and Beyond
- [2024/07] Less is More: Sparse Watermarking in LLMs with Enhanced Text Quality
- [2024/07] On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks
- [2024/07] Waterfall: Framework for Robust and Scalable Text Watermarking
- [2024/07] A Fingerprint for Large Language Models
- [2024/06] AIGC-Chain: A Blockchain-Enabled Full Lifecycle Recording System for AIGC Product Copyright Management
- [2024/06] PID: Prompt-Independent Data Protection Against Latent Diffusion Models
- [2024/06] PostMark: A Robust Blackbox Watermark for Large Language Models
- [2024/06] EnTruth: Enhancing the Traceability of Unauthorized Dataset Usage in Text-to-image Diffusion Models with Minimal and Robust Alterations
- [2024/06] Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI
- [2024/06] Hiding Text in Large Language Models: Introducing Unconditional Token Forcing Confusion
- [2024/06] Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature
- [2024/06] Edit Distance Robust Watermarks for Language Models
- [2024/05] Black-Box Detection of Language Model Watermarks
- [2024/05] Large Language Model Watermark Stealing With Mixed Integer Programming
- [2024/05] FreezeAsGuard: Mitigating Illegal Adaptation of Diffusion Models via Selective Tensor Freezing
- [2024/05] A Watermark for Low-entropy and Unbiased Generation in Large Language Models
- [2024/05] Enhancing Watermarked Language Models to Identify Users
- [2024/05] AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA
- [2024/05] Stylometric Watermarks for Large Language Models
- [2024/05] UnMarker: A Universal Attack on Defensive Watermarking
- [2024/05] Stable Signature is Unstable: Removing Image Watermark from Diffusion Models
- [2024/05] Adaptive and Robust Watermark Against Model Extraction Attack
- [2024/05] ProFLingo: A Fingerprinting-based Copyright Protection Scheme for Large Language Models
- [2024/05] DiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model
- [2024/05] Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable
- [2024/04] Disguised Copyright Infringement of Latent Diffusion Model
- [2024/04] Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models
- [2024/04] Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging
- [2024/04] A Training-Free Plug-and-Play Watermark Framework for Stable Diffusion
- [2024/04] Topic-based Watermarks for LLM-Generated Text
- [2024/04] A Statistical Framework of Watermarks for Large Language Models: Pivot, Detection Efficiency and Optimal Rules
- [2024/03] RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees
- [2024/03] Is Watermarking LLM-Generated Code Robust?
- [2024/03] Ghost Sentence: A Tool for Everyday Users to Copyright Data from Large Language Models
- [2024/03] Bypassing LLM Watermarks with Color-Aware Substitutions
- [2024/03] A Transfer Attack to Image Watermarks
- [2024/03] An Entropy-based Text Watermarking Detection Method
- [2024/03] Duwak: Dual Watermarks in Large Language Models
- [2024/03] Towards Better Statistical Understanding of Watermarking LLMs
- [2024/03] Learning to Watermark LLM-generated Text via Reinforcement Learning
- [2024/03] A Watermark-Conditioned Diffusion Model for IP Protection
- [2024/03] Hufu: A Modality-Agnostic Watermarking System for Pre-Trained Transformers via Permutation Equivariance
- [2024/03] WaterMax: Breaking the LLM Watermark Detectability-Robustness-Quality Trade-off
- [2024/02] Watermark Stealing in Large Language Models
- [2024/02] Token-Specific Watermarking with Enhanced Detectability and Semantic Coherence for Large Language Models
- [2024/02] EmMark: Robust Watermarks for IP Protection of Embedded Quantized Large Language Models
- [2024/02] Generative Models are Self-Watermarked: Declaring Model Authentication through Re-Generation
- [2024/02] Attacking LLM Watermarks by Exploiting Their Strengths
- [2024/02] Double-I Watermark: Protecting Model Copyright for LLM Fine-tuning
- [2024/02] Watermarking Makes Language Models Radioactive
- [2024/02] Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models
- [2024/02] A Survey of Text Watermarking in the Era of Large Language Models
- [2024/02] Proving membership in LLM pretraining data via data watermarks
- [2024/02] Resilient Watermarking for LLM-Generated Codes
- [2024/02] Permute-and-Flip: An optimally robust and watermarkable decoder for LLMs
- [2024/02] Copyright Protection in Generative AI: A Technical Perspective
- [2024/01] Adaptive Text Watermark for Large Language Models
- [2024/01] Instructional Fingerprinting of Large Language Models
- [2024/01] Generative AI Has a Visual Plagiarism Problem
- [2023/12] Human-Readable Fingerprint for Large Language Models
- [2023/12] Mark My Words: Analyzing and Evaluating Language Model Watermarks
- [2023/11] WaterBench: Towards Holistic Evaluation of Watermarks for Large Language Models
- [2023/11] Towards More Effective Protection Against Diffusion-Based Mimicry with Score Distillation
- [2023/11] A Robust Semantics-based Watermark for Large Language Model against Paraphrasing
- [2023/11] Protecting Intellectual Property of Large Language Model-Based Code Generation APIs via Watermarks
- [2023/10] REMARK-LLM: A Robust and Efficient Watermarking Framework for Generative Large Language Models
- [2023/10] Watermarking LLMs with Weight Quantization
- [2023/09] A Private Watermark for Large Language Models
- [2023/09] A Semantic Invariant Robust Watermark for Large Language Models
- [2023/09] Provable Robust Watermarking for AI-Generated Text
- [2023/09] SILO Language Models: Isolating Legal Risk in a Nonparametric Datastore
- [2023/08] PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification
- [2023/06] Generative Watermarking Against Unauthorized Subject-Driven Image Synthesis
- [2023/05] Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust
- [2023/05] Watermarking Diffusion Model
- [2023/03] A Recipe for Watermarking Diffusion Models
- [2023/02] Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models
- [2023/01] A Watermark for Large Language Models