Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning