Elucidating Optimal Reward-Diversity Tradeoffs in Text-to-Image Diffusion Models

1University of Pennsylvania, 2NVIDIA
*Work done during an internship at NVIDIA

Preprint

Teaser: generations from DRaFT, the base model, and Ours (AIG).

Annealed Importance Guidance (AIG) improves the diversity and quality of images generated by a finetuned diffusion model at inference time, without additional finetuning.


Abstract

Text-to-image (T2I) diffusion models have become prominent tools for generating high-fidelity images from text prompts. However, when trained on unfiltered internet data, these models can produce unsafe, incorrect, or stylistically undesirable images that are not aligned with human preferences. To address this, recent approaches have incorporated human preference datasets to fine-tune T2I models or to optimize reward functions that capture these preferences. Although effective, these methods are vulnerable to reward hacking, where the model overfits to the reward function, leading to a loss of diversity in the generated images. In this paper, we prove the inevitability of reward hacking and study natural regularization techniques, such as KL divergence and LoRA scaling, and their limitations for diffusion models. We also introduce Annealed Importance Guidance (AIG), an inference-time regularization inspired by Annealed Importance Sampling, which retains the diversity of the base model while achieving Pareto-optimal reward-diversity tradeoffs. Our experiments demonstrate the benefits of AIG for Stable Diffusion models, striking the optimal balance between reward optimization and image diversity. Furthermore, a user study confirms that AIG improves the diversity and quality of generated images across different model architectures and reward functions.
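
The abstract describes AIG only at a high level. As a rough, hedged sketch of the idea, the snippet below shows one way an annealed, inference-time interpolation between the base and reward-finetuned models' noise predictions could look; the function name, the linear schedule, and the direction of the anneal are illustrative assumptions, not the paper's exact formulation.

```python
def aig_noise_prediction(eps_base, eps_finetuned, step, num_steps):
    """Blend the base and reward-finetuned models' noise predictions with an
    annealed weight over the sampling trajectory (illustrative sketch only)."""
    # lam goes from 0 at the first sampling step to 1 at the last step:
    # early steps follow the base model to retain its diversity,
    # later steps follow the finetuned model to capture the reward.
    lam = step / max(num_steps - 1, 1)
    return (1.0 - lam) * eps_base + lam * eps_finetuned

# Hypothetical usage inside a sampling loop (names are placeholders):
# for step in range(num_steps):
#     eps_b = base_unet(x_t, t, text_emb)
#     eps_f = finetuned_unet(x_t, t, text_emb)
#     eps = aig_noise_prediction(eps_b, eps_f, step, num_steps)
#     x_t = denoising_update(x_t, eps, t)
```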

Results

Hover to see the finetuned model; click the icon to toggle between DRaFT and AIG (Ours).


If you like our work, please consider citing us:

@article{jena2024elucidating,
  title={Elucidating Optimal Reward-Diversity Tradeoffs in Text-to-Image Diffusion Models},
  author={Jena, Rohit and Taghibakhshi, Ali and Jain, Sahil and Shen, Gerald and Tajbakhsh, Nima and Vahdat, Arash},
  journal={arXiv preprint arXiv:2409.06493},
  year={2024}
}