Privacy-Attack
Pandora’s White-Box: Increased Training Data Leakage in Open LLMs
Membership Inference Attacks against Large Language Models via Self-prompt Calibration
Language Model Inversion
Effective Prompt Extraction from Language Models
Prompt Stealing Attacks Against Large Language Models
Stealing Part of a Production Language Model
Practical Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration
PRSA: Prompt Reverse Stealing Attacks against Large Language Models
Low-Resource Languages Jailbreak GPT-4
Scalable Extraction of Training Data from (Production) Language Models