Large Model Security Notes
  • Preface
  • MM-LLM
  • VLM-Defense
  • VLM
  • VLM-Attack
  • T2I-Attack
  • Survey
  • LVM-Attack
  • For Good
  • Benchmark
  • Explainability
  • Privacy-Defense
    • Defending Our Privacy With Backdoors
    • PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification
  • Privacy-Attack
  • Others
  • LLM-Attack
  • LLM-Defense

Privacy-Defense

  • Defending Our Privacy With Backdoors
  • PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification