TY  - CONF
AV  - public
EP  - 4943
N1  - This version is the version of record. For information on re-use, please refer to the publisher's terms and conditions.
ID  - discovery10184511
CY  - Waikoloa, HI, USA
A1  - Wang, Hai
A1  - Xiang, Xiaoyu
A1  - Fan, Yuchen
A1  - Xue, Jinghao
PB  - IEEE
Y1  - 2024/04/09/
N2  - Personalized text-to-image (T2I) synthesis based on diffusion models has attracted significant attention in recent research. However, existing methods primarily concentrate on customizing subjects or styles, neglecting the exploration of global geometry. In this study, we propose an approach that focuses on the customization of 360-degree panoramas, which inherently possess global geometric properties, using a T2I diffusion model. To achieve this, we curate a paired image-text dataset specifically designed for the task and subsequently employ it to fine-tune a pre-trained T2I diffusion model with LoRA. Nevertheless, the fine-tuned model alone does not ensure the continuity between the leftmost and rightmost sides of the synthesized images, a crucial characteristic of 360-degree panoramas. To address this issue, we propose a method called StitchDiffusion. Specifically, we perform pre-denoising operations twice at each time step of the denoising process on the stitch block consisting of the leftmost and rightmost image regions. Furthermore, global cropping is adopted to synthesize seamless 360-degree panoramas. Experimental results demonstrate the effectiveness of our customized model combined with the proposed StitchDiffusion in generating high-quality 360-degree panoramic images. Moreover, our customized model exhibits exceptional generalization ability in producing scenes unseen in the fine-tuning dataset. Code is available at https://github.com/littlewhitesea/StitchDiffusion.
TI  - Customizing 360-Degree Panoramas Through Text-to-Image Diffusion Models
T2  - 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
UR  - https://doi.org/10.1109/WACV57701.2024.00486
SP  - 4933
ER  -