Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models