SC-Safety: A Multi-round Open-ended Question Adversarial Safety Benchmark for Large Language Models