🔥 News
[08/11/2025] 🎉 DreamID, our image-version model, has been accepted to SIGGRAPH Asia 2025!
💡 Usage Tips
Reference Image Preparation: Please upload cropped face images (recommended resolution: 512x512) as references. Avoid full-body photos to ensure optimal identity preservation.
Inference Steps: For simple scenes, you can reduce the sampling steps to 20 to significantly decrease inference time.
Note: Our internal model based on Seedance 1.0 achieves high quality in under 8 steps. Feel free to try it in CapCut.
Best Quality: For the highest fidelity results, we recommend using a resolution of 1280x720.
Enhanced Pose Detection: We have resolved the previous pose detection issue by introducing DreamID-V-Wan-1.3B-DWPose. This significantly improves stability and robustness in pose extraction.
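The face-crop tip above can be sketched with Pillow. This is an illustrative helper, not part of the DreamID-V codebase: it assumes the face is already roughly centered in the photo (face detection itself is out of scope) and simply center-crops to a square before resizing to 512x512.

```python
from PIL import Image

def prepare_reference(path: str, size: int = 512) -> Image.Image:
    """Center-crop an image to a square, then resize to size x size."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = min(w, h)          # largest centered square that fits
    left = (w - side) // 2
    top = (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((size, size), Image.LANCZOS)
```

For photos where the face is off-center, run a face detector first and crop around the detected box instead of the image center.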
👍 Acknowledgements
Our work builds upon and is greatly inspired by several outstanding open-source projects, including Wan2.1, Phantom, OpenHumanVid, Follow-Your-Emoji, and DWPose. We sincerely thank the authors and contributors of these projects for generously sharing their excellent code and ideas.
📧 Contact
If you have any comments or questions regarding this open-source project, please open a new issue or contact Xu Guo and Fulong Ye.
⚠️ Ethics Statement
This project, DreamID-V, is intended for academic research and technical demonstration purposes only.
Prohibited Use: Users are strictly prohibited from using this codebase to generate content that is illegal, defamatory, pornographic, or harmful, or that infringes upon the privacy or rights of others.
Responsibility: Users bear full responsibility for the content they generate. The authors and contributors of this project assume no liability for any misuse or consequences arising from the use of this software.
AI Labeling: We strongly recommend marking generated videos as “AI-Generated” to prevent misinformation.
By using this software, you agree to adhere to these guidelines and applicable local laws.
⭐ Citation
If you find our work helpful, please consider citing our paper and giving the repository a star.
@misc{guo2026dreamidvbridgingimagetovideogaphighfidelity,
  title={DreamID-V: Bridging the Image-to-Video Gap for High-Fidelity Face Swapping via Diffusion Transformer},
  author={Xu Guo and Fulong Ye and Xinghui Li and Pengqi Tu and Pengze Zhang and Qichao Sun and Songtao Zhao and Xiangwang Hou and Qian He},
  year={2026},
  eprint={2601.01425},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.01425},
}
DreamID-V: Bridging the Image-to-Video Gap for High-Fidelity Face Swapping via Diffusion Transformer
🌐 Project Page | 📜 arXiv | 🤗 Models
⚡️ Quickstart
Model Preparation
Installation
Install dependencies:
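The original install commands were not preserved in this copy. A typical setup for a Wan2.1-based PyTorch repository might look like the following; the environment name and the presence of a requirements.txt are assumptions, so adjust them to match the actual repository:

```shell
# Hypothetical setup; replace with the repo's documented commands if they differ.
conda create -n dreamidv python=3.10 -y
conda activate dreamidv
pip install -r requirements.txt
```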
DreamID-V-Wan-1.3B-Faster
Please ensure you have downloaded dreamidv_faster.pth and the DWPose estimation models are placed in the correct directory.
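A small helper like the one below can verify that the expected weights are in place before launching inference. The directory layout and the DWPose filename here are assumptions for illustration; adjust EXPECTED_FILES to the repository's actual checkpoint paths.

```python
import os

# Hypothetical layout; edit to match the repository's actual checkpoint paths.
EXPECTED_FILES = [
    "checkpoints/dreamidv_faster.pth",
    "checkpoints/dwpose/dw-ll_ucoco_384.onnx",  # assumed DWPose model filename
]

def missing_files(root: str, expected=EXPECTED_FILES) -> list[str]:
    """Return the expected model files that are not present under root."""
    return [f for f in expected if not os.path.isfile(os.path.join(root, f))]

if __name__ == "__main__":
    missing = missing_files(".")
    if missing:
        print("Missing model files:", ", ".join(missing))
    else:
        print("All model files found.")
```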
DreamID-V-Wan-1.3B-DWPose
Please ensure the pose estimation models are placed in the correct directory.
DreamID-V-Wan-1.3B-MediaPipe
Star History