--pose_image_path
Path to the pose image used for conditioning the generation. Default: data/poses/example_pose.png
--subject_dir
Directory containing subject identity images. Each image should represent one person. Default: data/subjects
--subjects
Comma-separated list of subject image filenames (e.g., exp_man.jpg,exp_woman.jpg). The order corresponds to their placement from left to right in the generated image.
--prompt
Text prompt describing the scene to be generated. This guides the overall content and style of the output image.
--base_model_path
Path to the base diffusion model to be used for generation. Default: RunDiffusion/Juggernaut-X-v10
--output_dir
Directory where the generated images will be saved. Default: results
--output_name
Filename prefix for the generated image(s). Default: exp_result
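The flag list above can be mirrored with a small `argparse` sketch. This is only an illustration of the documented flags and defaults; the repository's actual entry point and parser may differ:

```python
# Minimal sketch of the documented CLI; flag names and defaults mirror the
# list above. The real script in the repository may be organized differently.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="ID-Patch demo arguments")
    parser.add_argument("--pose_image_path", default="data/poses/example_pose.png",
                        help="Pose image used for conditioning the generation")
    parser.add_argument("--subject_dir", default="data/subjects",
                        help="Directory containing subject identity images")
    parser.add_argument("--subjects", default="exp_man.jpg,exp_woman.jpg",
                        help="Comma-separated filenames, ordered left to right")
    parser.add_argument("--prompt", default="",
                        help="Text prompt describing the scene")
    parser.add_argument("--base_model_path", default="RunDiffusion/Juggernaut-X-v10",
                        help="Base diffusion model used for generation")
    parser.add_argument("--output_dir", default="results",
                        help="Directory where generated images are saved")
    parser.add_argument("--output_name", default="exp_result",
                        help="Filename prefix for the generated image(s)")
    return parser

args = build_parser().parse_args([])   # parse with defaults only
subjects = args.subjects.split(",")    # left-to-right placement order
print(subjects)                        # ['exp_man.jpg', 'exp_woman.jpg']
```

Note that `--subjects` is split on commas, so the filenames themselves must not contain commas.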
Disclaimer
Our released HuggingFace model differs from the paper’s version due to training on a different dataset.
License
Copyright 2024 Bytedance Ltd. and/or its affiliates
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Citation
If you find this code useful for your research, please cite us via the BibTeX below.
@InProceedings{zhang2025idpatch,
author = {Zhang, Yimeng and Zhi, Tiancheng and Liu, Jing and Sang, Shen and Jiang, Liming and Yan, Qing and Liu, Sijia and Luo, Linjie},
title = {ID-Patch: Robust ID Association for Group Photo Personalization},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2025}
}
[CVPR 2025] ID-Patch: Robust ID Association for Group Photo Personalization
Yimeng Zhang¹,²,*, Tiancheng Zhi¹, Jing Liu¹, Shen Sang¹, Liming Jiang¹, Qing Yan¹, Sijia Liu², Linjie Luo¹
¹ByteDance Inc., ²Michigan State University
*Work done during internship at ByteDance.
ID-Patch: Build Identity-to-Position Association
To address ID leakage and the linear increase in generation time with the number of identities, we propose ID-Patch, a novel method for robust identity-to-position association. From the same facial features, we generate both an **ID patch**—placed on the conditional image for precise spatial control—and ID embeddings, which are fused with text embeddings to enhance identity resemblance.
Environment Setup
Note: Python 3.9 and CUDA 12.2 are required.
Download the models from https://huggingface.co/ByteDance/ID-Patch and put them under the models/ folder.
Demo
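Assuming a standard setup, the steps above combined with the demo flags documented in this README might look like the following. The script name `inference.py` and the prompt text are illustrative placeholders, not confirmed names from the repository:

```shell
# Create the required environment (Python 3.9 per the note above).
conda create -n id-patch python=3.9 -y
conda activate id-patch

# Fetch the released weights into models/ (uses huggingface_hub's CLI).
huggingface-cli download ByteDance/ID-Patch --local-dir models

# Run the demo. NOTE: "inference.py" and the prompt are placeholders --
# check the repository for the actual entry-point script.
python inference.py \
  --pose_image_path data/poses/example_pose.png \
  --subject_dir data/subjects \
  --subjects exp_man.jpg,exp_woman.jpg \
  --prompt "a man and a woman smiling at an outdoor party" \
  --base_model_path RunDiffusion/Juggernaut-X-v10 \
  --output_dir results \
  --output_name exp_result
```

The `--subjects` order determines left-to-right placement in the generated image, so swapping the two filenames swaps the positions of the two identities.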