Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification