“Not Aligned” is Not “Malicious”: Being Careful about Hallucinations of Large Language Models’ Jailbreak

