RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in LVLMs

Sangmin Woo*, Jaehyuk Jang*, Donguk Kim*, Yubin Choi, Changick Kim
KAIST
*Indicates Equal Contribution
Overview

Overview. At each timestep t, the LVLM auto-regressively samples a response y_t given a visual input, a textual query, and previously generated tokens. When conditioned on the original image V, the probabilities of the Blue (correct) and Red (hallucinated) responses are similar, so the hallucinated response can easily be sampled. RITUAL leverages an additional probability distribution conditioned on the transformed image V^(T), under which the likelihood of hallucination is significantly reduced. The response is then sampled from a linear combination of the two probability distributions, yielding more accurate and reliable outputs.
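The sampling step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the weighting scheme (a single mixing coefficient `alpha` applied at the probability level) and the toy logit values are assumptions.

```python
import numpy as np

def ritual_sample(logits_orig, logits_trans, alpha=0.5, rng=None):
    """Sample the next token from a linear combination of the distribution
    conditioned on the original image V and the one conditioned on the
    transformed image V^(T). `alpha` is a hypothetical mixing weight."""
    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    p_orig = softmax(np.asarray(logits_orig, dtype=float))
    p_trans = softmax(np.asarray(logits_trans, dtype=float))
    p = p_orig + alpha * p_trans   # linear combination of the two distributions
    p /= p.sum()                   # renormalize to a valid distribution
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.choice(len(p), p=p), p

# Toy example mirroring the figure: under V alone, the correct token (index 0)
# and the hallucinated token (index 1) are nearly tied; the transformed view
# V^(T) suppresses the hallucinated token.
logits_v = np.array([2.0, 1.9, 0.1])
logits_vt = np.array([1.5, -1.0, 0.2])
_, p = ritual_sample(logits_v, logits_vt)
```

After mixing, the correct token's probability clearly dominates the hallucinated one's, even though the two were nearly tied under the original image alone.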

Abstract

Recent advancements in Large Vision Language Models (LVLMs) have revolutionized how machines understand and generate textual responses based on visual inputs. Despite their impressive capabilities, they often produce "hallucinatory" outputs that do not accurately reflect the visual information, posing challenges in reliability and trustworthiness. Current methods such as contrastive decoding have made strides in addressing these issues by contrasting the original probability distribution of generated tokens with distorted counterparts; yet, generating visually-faithful outputs remains a challenge. In this work, we shift our focus to the opposite: What could serve as a complementary enhancement to the original probability distribution? We propose a simple, training-free method termed RITUAL to enhance robustness against hallucinations in LVLMs. Our approach employs random image transformations as complements to the original probability distribution, aiming to mitigate the likelihood of hallucinatory visual explanations by enriching the model’s exposure to varied visual scenarios. Our empirical results show that while the isolated use of transformed images initially degrades performance, strategic implementation of these transformations can indeed serve as effective complements. Notably, our method is compatible with current contrastive decoding methods and does not require external models or costly self-feedback mechanisms, making it a practical addition. In experiments, RITUAL significantly outperforms existing contrastive decoding methods across several object hallucination benchmarks, including POPE, CHAIR, and MME.

Intriguing impact of random image transformations on LVLMs

Motivation

(Left) Using the randomly transformed image (V^(T)) as a visual input to LVLMs results in lower performance compared to using the original image (V). (Right) However, when these two images are combined, an intriguing phenomenon is observed: cases incorrectly predicted with the original image are now correctly predicted. (i) Although V^(T) alone does not yield a correct answer, it reduces the likelihood of a hallucinated answer and increases the likelihood of a correct answer. (ii) In some cases, V^(T) strongly aligns with the correct answer, leading to accurate answers.
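To make the idea of V^(T) concrete, here is a sketch of drawing one random transformation per decoding pass. The pool of transforms (horizontal flip, 90-degree rotation, central crop) is illustrative only; the paper's actual transformation set may differ.

```python
import numpy as np

def random_transform(img, rng):
    """Apply one randomly chosen image transformation to produce V^(T).
    Illustrative pool of transforms; operates on an HxW (or HxWxC) array."""
    choice = rng.integers(3)
    if choice == 0:
        return img[:, ::-1]            # horizontal flip
    if choice == 1:
        return np.rot90(img)           # 90-degree rotation
    h, w = img.shape[:2]               # central crop to half size
    return img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

rng = np.random.default_rng(0)
img = np.arange(16).reshape(4, 4)      # tiny stand-in for an image
out = random_transform(img, rng)
```

In practice the same query is run twice per step, once with `img` and once with `random_transform(img, rng)`, and the two resulting token distributions are combined.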

Comparison with Contrastive Decoding Methods

Comparison

Unlike contrastive decoding methods, which contrast the conditional probability given the original image (V) with that given a diffused (or absent) image (V′), we leverage both the original image (V) and a randomly transformed image (V^(T)) in a complementary manner. While simple, RITUAL achieves state-of-the-art performance on multiple hallucination benchmarks, including POPE.
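The contrast can be made explicit. A plausible formalization is sketched below; the exact combination rules and the hyperparameters α and λ are assumptions for illustration, not taken verbatim from the paper.

```latex
% Contrastive decoding (e.g., VCD): penalize the distribution
% conditioned on a distorted image V'
p_{\mathrm{cd}}(y_t) = \mathrm{softmax}\!\bigl[(1+\alpha)\,
  \ell_\theta(y_t \mid V, x, y_{<t}) - \alpha\,
  \ell_\theta(y_t \mid V', x, y_{<t})\bigr]

% RITUAL: add the distribution conditioned on a randomly
% transformed image V^{(T)} as a complement
p_{\mathrm{ritual}}(y_t) \propto p_\theta(y_t \mid V, x, y_{<t})
  + \lambda\, p_\theta\bigl(y_t \mid V^{(T)}, x, y_{<t}\bigr)
```

Here \(\ell_\theta\) denotes logits and \(p_\theta\) probabilities; the key difference is the sign: contrastive decoding subtracts the auxiliary distribution, while RITUAL adds it.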

POPE Results

POPE

RITUAL consistently outperforms the contrastive decoding baselines: VCD and M3ID. Moreover, RITUAL is shown to be compatible with both VCD and M3ID, leading to further performance improvements in most configurations. VCD and M3ID are reproduced within our evaluation setting.

MME-Fullset Results

MME-Fullset

When equipped with RITUAL, LLaVA-1.5 performs best in 12 out of 14 categories, while InstructBLIP excels in 11 categories. RITUAL not only reduces hallucinations but also enhances the general capabilities of LVLMs.

MME-Hallucination Results

MME-Hallucination

RITUAL effectively mitigates hallucinations at both the object and attribute levels, outperforming contrastive decoding methods in Total Score.

CHAIR Results

CHAIR

RITUAL significantly reduces object hallucinations in caption generation compared to VCD and M3ID. It can also boost performance when combined with these baselines. The maximum number of new tokens is set to 64.

Qualitative Results

BibTeX


@article{woo2024ritual,
  title={RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in LVLMs}, 
  author={Woo, Sangmin and Jang, Jaehyuk and Kim, Donguk and Choi, Yubin and Kim, Changick},
  journal={arXiv preprint arXiv:2405.17821},
  year={2024},
}