DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions