Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses