Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?