Mitigating Hallucinations in Large Language Models via Self-Refinement-Enhanced Knowledge Retrieval
