Mitigating LLM Hallucinations via Conformal Abstention
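
The core idea of this paper: score each response's confidence (for example, via self-consistency across sampled answers) and abstain whenever the score falls below a threshold calibrated on held-out data, so that the hallucination rate (answering while wrong) is bounded at a target level α with a conformal guarantee. Below is a minimal sketch of that calibration step, assuming a self-consistency scorer and a conformal-risk-control style threshold search; the function names and the specific CRC variant are illustrative assumptions for this note, not the paper's exact procedure.

```python
import numpy as np

def self_consistency_score(sample_fn, prompt, n_samples=10):
    """Hypothetical confidence score: fraction of sampled responses
    that agree with the most common response (self-consistency).
    `sample_fn` is an assumed callable that returns one model response."""
    responses = [sample_fn(prompt) for _ in range(n_samples)]
    top = max(set(responses), key=responses.count)
    return responses.count(top) / n_samples

def calibrate_abstention_threshold(scores, correct, alpha=0.1):
    """Conformal risk control sketch: find the smallest threshold tau such
    that answering only when score >= tau bounds the expected hallucination
    rate by alpha on exchangeable test data."""
    scores = np.asarray(scores, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    n = len(scores)
    # Candidate thresholds: the observed scores plus the [0, 1] endpoints.
    candidates = np.unique(np.concatenate(([0.0], scores, [1.0])))
    for tau in candidates:  # ascending: more abstention as tau grows
        answered = scores >= tau
        # Per-example loss 1[answered and wrong] is non-increasing in tau,
        # so the CRC adjustment for a [0, 1]-bounded monotone loss applies.
        risk = np.mean(answered & ~correct)
        if (n / (n + 1)) * risk + 1.0 / (n + 1) <= alpha:
            return float(tau)
    return float("inf")  # no valid threshold: always abstain

def answer_or_abstain(score, tau):
    """Test-time decision rule for a new prompt's confidence score."""
    return "answer" if score >= tau else "abstain"

# Usage: tau = calibrate_abstention_threshold(cal_scores, cal_correct, alpha=0.05)
```

The 1/(n+1) correction is what upgrades the empirical risk on the calibration set into a guarantee on exchangeable test data; larger calibration sets allow thresholds closer to the empirical optimum.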