LLM Security Notes
LLM-Attack
Don’t Say No: Jailbreaking LLM by Suppressing Refusal