LinkPrompt: Natural and Universal Adversarial Attacks on Prompt-based Language Models