RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models

Sangmin Woo*, Jaehyuk Jang*, Donguk Kim*, Yubin Choi, Changick Kim
KAIST
*Indicates Equal Contribution
Overview

TL;DR. RITUAL is a simple yet effective anti-hallucination approach for LVLMs. It leverages basic image transformations (e.g., vertical and horizontal flips) to improve LVLM accuracy without external models or additional training. By decoding from both the transformed and original images, RITUAL significantly reduces hallucinations in both discriminative and descriptive tasks: the two views together let the model refine its predictions, cutting erroneous responses and boosting correct ones.

Abstract

Recent advancements in Large Vision Language Models (LVLMs) have revolutionized how machines understand and generate textual responses based on visual inputs, yet they often produce "hallucinatory" outputs that misinterpret visual information, posing challenges in reliability and trustworthiness. We propose RITUAL, a simple decoding method that reduces hallucinations by leveraging randomly transformed images as complementary inputs during decoding, adjusting the output probability distribution without additional training or external models. Our key insight is that random transformations expose the model to diverse visual perspectives, enabling it to correct misinterpretations that lead to hallucinations. Specifically, when a model hallucinates based on the original image, the transformed images---altered in aspects such as orientation, scale, or color---provide alternative viewpoints that help recalibrate the model's predictions. By integrating the probability distributions from both the original and transformed images, RITUAL effectively reduces hallucinations. To further improve reliability and address potential instability from arbitrary transformations, we introduce RITUAL+, an extension that selects image transformations based on self-feedback from the LVLM. Instead of applying transformations randomly, RITUAL+ uses the LVLM to evaluate and choose transformations that are most beneficial for reducing hallucinations in a given context. This self-adaptive approach mitigates the potential negative impact of certain transformations on specific tasks, ensuring more consistent performance across different scenarios. Experiments demonstrate that RITUAL and RITUAL+ significantly reduce hallucinations across several object hallucination benchmarks.

RITUAL

RITUAL. At each timestep t, the LVLM auto-regressively samples a response token given a visual input, a textual query, and the previously generated tokens. When conditioned on the original image V, the probabilities of the correct (blue) and hallucinated (red) responses can be close, so the hallucinated response is easily sampled. RITUAL leverages an additional probability distribution conditioned on the transformed image V^(T), under which the likelihood of hallucination is significantly reduced. The response is then sampled from a linear combination of the two probability distributions, yielding more accurate and reliable outputs.
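The sampling step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mixing weight `alpha` and the renormalized-mixture form are assumptions, and `logits_orig` / `logits_trans` stand in for the LVLM's next-token logits conditioned on V and V^(T), respectively.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def ritual_step(logits_orig, logits_trans, alpha=0.3, rng=None):
    """Sample the next token from a linear combination of the
    distributions conditioned on the original and transformed image.

    alpha is an assumed hyperparameter weighting the transformed view.
    """
    p_orig = softmax(logits_orig)    # p(y | V, query, history)
    p_trans = softmax(logits_trans)  # p(y | V^(T), query, history)
    p = (p_orig + alpha * p_trans) / (1.0 + alpha)  # renormalized mixture
    rng = rng or np.random.default_rng()
    return rng.choice(len(p), p=p)
```

In the illustrative scenario from the caption, the original image alone makes the hallucinated token nearly as likely as the correct one; because the transformed view assigns the correct token much higher probability, the mixture tilts sampling toward the correct response.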

RITUAL+

RITUAL+. In RITUAL, the original image V undergoes random transformations, generating a transformed image. In RITUAL+, the model evaluates various potential transformations and selects the most beneficial one to improve answer accuracy within the given context, further refining reliability. These transformed images serve as complementary inputs, enabling the model to incorporate multiple visual perspectives to reduce hallucinations.
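The selection step in RITUAL+ can be sketched as below. The candidate transformation list and the `score_fn` callback are hypothetical stand-ins: in practice the score would come from querying the LVLM itself for self-feedback on how helpful each transformed view is for the given question.

```python
# Candidate transformations (illustrative; the paper's exact set may differ).
TRANSFORMS = ["horizontal_flip", "vertical_flip", "rotate_90", "color_jitter", "crop"]

def ritual_plus_select(image, question, score_fn, candidates=TRANSFORMS):
    """Pick the transformation the LVLM rates most beneficial.

    score_fn(image, question, t) is a hypothetical callback returning the
    model's self-assessed usefulness of transformation t in this context.
    """
    return max(candidates, key=lambda t: score_fn(image, question, t))
```

The selected transformation is then used exactly as in RITUAL: the transformed image provides the complementary distribution that is mixed with the original one at decoding time.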

POPE Results

POPE

RITUAL consistently outperforms the contrastive decoding baselines VCD and M3ID. Moreover, RITUAL is compatible with both VCD and M3ID, yielding further performance improvements in most configurations. VCD and M3ID results are reproduced within our evaluation setting.

MME-Fullset Results

MME-Fullset

When equipped with RITUAL, LLaVA-1.5 performs best in 12 out of 14 categories, while InstructBLIP excels in 11 categories. RITUAL not only reduces hallucinations but also enhances the general capabilities of LVLMs.

MME-Hallucination Results

MME-Hallucination

RITUAL effectively mitigates hallucinations at both the object and attribute levels, outperforming contrastive decoding methods in Total Score.

CHAIR Results

CHAIR

RITUAL significantly reduces object hallucinations in caption generation compared to VCD and M3ID, and can further boost performance when combined with these baselines. The maximum number of new tokens is set to 64.

Qualitative Results

BibTeX


@article{woo2024ritual,
  title={RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models}, 
  author={Woo, Sangmin and Jang, Jaehyuk and Kim, Donguk and Choi, Yubin and Kim, Changick},
  journal={arXiv preprint arXiv:2405.17821},
  year={2024},
}