Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning

