Abstract:
Text-to-image (T2I) diffusion models have become prominent tools for generating high-fidelity images from text prompts. However, when trained on unfiltered internet data, these models can produce unsafe, incorrect, or stylistically undesirable images that are not aligned with human preferences. To address this, recent approaches have incorporated human preference datasets to fine-tune T2I models or to optimize reward functions that capture these preferences. Although effective, these methods are vulnerable to reward hacking, where the model overfits to the reward function, leading to a loss of diversity in the generated images. In this paper, we prove the inevitability of reward hacking and study natural regularization techniques like KL divergence and LoRA scaling, and their limitations for diffusion models. We also introduce Annealed Importance Guidance (AIG), an inference-time regularization inspired by Annealed Importance Sampling, which retains the diversity of the base model while achieving Pareto-optimal reward-diversity tradeoffs. Our experiments demonstrate the benefits of AIG for Stable Diffusion models, striking the optimal balance between reward optimization and image diversity. Furthermore, a user study confirms that AIG improves diversity and quality of generated images across different model architectures and reward functions.
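To make the abstract's idea of an inference-time interpolation concrete, below is a minimal sketch of what blending a base and a reward-fine-tuned diffusion model with an annealed weight could look like. It is based only on the description above: the model interfaces, the linear annealing schedule, and the names `aig_denoise_step`, `base_model`, and `tuned_model` are illustrative assumptions, not the paper's actual AIG procedure.

```python
import torch

def aig_denoise_step(base_model, tuned_model, x_t, t, T):
    """Blend base and reward-fine-tuned noise predictions at one reverse step.

    Illustrative sketch only: both models are assumed to return a noise
    (or score) prediction for latent x_t at timestep t; the linear schedule
    below is an assumed instantiation of "annealed" guidance.
    """
    # Anneal the mixing weight over the reverse process: early (noisy) steps
    # lean on the base model to preserve its diversity, later steps lean on
    # the reward-fine-tuned model to realize the reward gains.
    lam = 1.0 - t / T  # ~0 at t = T, 1 at t = 0 (assumed schedule)

    with torch.no_grad():
        eps_base = base_model(x_t, t)    # base model's noise prediction
        eps_tuned = tuned_model(x_t, t)  # fine-tuned model's noise prediction

    # A linear mixture of noise/score predictions corresponds to a geometric
    # mixture of the implied densities, as in annealed importance sampling.
    eps = (1.0 - lam) * eps_base + lam * eps_tuned

    # A standard DDPM/DDIM update using `eps` would follow here; omitted.
    return eps
```

The annealing direction and schedule shape are design choices; the abstract only specifies that the method trades off reward optimization against the base model's diversity at inference time.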
Date of Conference: 26 February 2025 - 06 March 2025
Date Added to IEEE Xplore: 08 April 2025