Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts