Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models

1School of Computer Science, Faculty of Engineering, The University of Sydney, Australia
2School of Physics, Mathematics and Computing, The University of Western Australia, Australia
3Center for Research in Computer Vision, University of Central Florida, USA

CVPR 2025

Abstract

Text-to-image diffusion models have demonstrated remarkable capabilities in creating images highly aligned with user prompts. However, their proclivity for memorizing training-set images has sparked concerns about the originality of generated images and about privacy, potentially leading to legal complications for both model owners and users, particularly when the memorized images contain proprietary content. Although methods to mitigate these issues have been proposed, enhancing privacy often results in a significant decrease in the utility of the outputs, as measured by text-alignment scores. To bridge this research gap, we introduce a novel method, PRSS, which refines the classifier-free guidance approach in diffusion models by integrating prompt re-anchoring (PR) to improve privacy and incorporating semantic prompt search (SS) to enhance utility. Extensive experiments across various privacy levels demonstrate that our approach consistently improves the privacy-utility trade-off, establishing a new state of the art.
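The abstract does not give the exact PRSS formulation, but the classifier-free guidance (CFG) mechanism it builds on is standard. The sketch below shows vanilla CFG and a hypothetical prompt re-anchoring variant that blends the guidance anchor back toward the unconditional (empty-prompt) prediction; the function names and the `anchor_weight` parameter are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np


def cfg_noise(eps_uncond, eps_cond, scale=7.5):
    """Standard classifier-free guidance: extrapolate from the
    unconditional noise prediction toward the conditional one."""
    return eps_uncond + scale * (eps_cond - eps_uncond)


def pr_cfg_noise(eps_uncond, eps_cond, scale=7.5, anchor_weight=0.5):
    """Hypothetical prompt re-anchoring (PR) sketch: soften the
    conditional prediction by mixing it with the empty-prompt
    prediction, weakening memorized-prompt conditioning.
    `anchor_weight` = 0 recovers plain CFG; 1 is fully unconditional."""
    eps_anchor = (1.0 - anchor_weight) * eps_cond + anchor_weight * eps_uncond
    return eps_uncond + scale * (eps_anchor - eps_uncond)
```

At `scale=1` plain CFG reduces to the conditional prediction, and increasing `anchor_weight` interpolates the guidance target toward the unconditional branch, which is the general direction a privacy-oriented refinement of CFG would push.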

BibTeX


@inproceedings{chen2025enhancing,
  title={Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models},
  author={Chen, Chen and Liu, Daochang and Shah, Mubarak and Xu, Chang},
  booktitle={CVPR},
  year={2025}
}