Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning