LoRA-as-an-Attack! Piercing LLM Safety Under The Share-and-Play Scenario