MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
Zhongcong Xu · Jianfeng Zhang · Jun Hao Liew · Hanshu Yan · Jia-Wei Liu · Chenxu Zhang · Jiashi Feng · Mike Zheng Shou
National University of Singapore | ByteDance
📢 News
🏃‍♂️ Getting Started
Download the pretrained base models for StableDiffusion V1.5 and MSE-finetuned VAE.
Download our MagicAnimate checkpoints.
Please follow the Hugging Face download instructions to download the above models and checkpoints; git lfs is recommended. Place the base models and checkpoints as follows:
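One possible layout, for illustration only — the folder names below are assumptions inferred from the model names mentioned above, not confirmed by this README; follow the repository's actual structure:

```
pretrained_models
├── MagicAnimate/            # MagicAnimate checkpoints (name assumed)
├── sd-vae-ft-mse/           # MSE-finetuned VAE (name assumed)
└── stable-diffusion-v1-5/   # Stable Diffusion V1.5 base model (name assumed)
```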
⚒️ Installation
Prerequisites: python>=3.8, CUDA>=11.3, and ffmpeg. Install the dependencies with conda or pip.
💃 Inference
Run inference on single GPU:
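A typical single-GPU invocation might look like the following; the script path `scripts/animate.sh` is an assumption, not confirmed by this README — check the repository for the actual entry point:

```shell
# Single-GPU inference (script path is an assumption; adjust to the repo's actual layout)
bash scripts/animate.sh
```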
Run inference with multiple GPUs:
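A multi-GPU run would typically go through a distributed launch script; the script name below is an assumption, not confirmed by this README:

```shell
# Multi-GPU (distributed) inference — script name assumed, not confirmed here
bash scripts/animate_dist.sh
```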
🎨 Gradio Demo
Online Gradio Demo:
Try our online Gradio demo for a quick start.
Local Gradio Demo:
Launch local gradio demo on single GPU:
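The single-GPU demo command was dropped from this README; by analogy with the multi-GPU module `demo.gradio_animate_dist` shown below, it is likely the following — the module name is an inference, not confirmed here:

```shell
# Launch the local Gradio demo on a single GPU (module name inferred, not confirmed)
python3 -m demo.gradio_animate
```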
Launch local gradio demo if you have multiple GPUs:
python3 -m demo.gradio_animate_dist
Then open the Gradio demo in your local browser.
🙏 Acknowledgements
We would like to thank AK (@_akhaliq) and the Hugging Face team for their help in setting up the online Gradio demo.
🎓 Citation
If you find this codebase useful for your research, please use the following entry.
@inproceedings{xu2023magicanimate,
    author    = {Xu, Zhongcong and Zhang, Jianfeng and Liew, Jun Hao and Yan, Hanshu and Liu, Jia-Wei and Zhang, Chenxu and Feng, Jiashi and Shou, Mike Zheng},
    title     = {MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model},
    booktitle = {arXiv},
    year      = {2023}
}