PV3D: A 3D Generative Model for Portrait Video Generation
Zhongcong Xu*, Jianfeng Zhang*, Jun Hao Liew, Wenqing Zhang, Song Bai, Jiashi Feng, Mike Zheng Shou
[Project Page], [OpenReview]
Demo Videos
Unconditional Generation
Installation
Install PyTorch >= 1.11.0 matching your CUDA version (e.g., CUDA 11.3):
Install other dependencies:
Install pytorch3d:
Download the pretrained I3D checkpoint from VideoGPT for Fréchet Video Distance (FVD) evaluation:
Download the pretrained ArcFace model from InsightFace for multi-view identity consistency evaluation:
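After the steps above, a quick sanity check can confirm that the core packages are importable (a minimal sketch; the package list is an assumption based on the dependencies named above and may need adjusting):

```python
import importlib.util

# Packages assumed by the installation steps above (this list is an
# assumption; adjust it to match your actual requirements).
REQUIRED = ("torch", "torchvision", "pytorch3d")

def check_env(packages=REQUIRED):
    """Return {package_name: importable?} for the current environment."""
    return {name: importlib.util.find_spec(name) is not None for name in packages}

if __name__ == "__main__":
    for name, ok in check_env().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```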
Preparing Dataset
Download the processed VoxCeleb dataset, then uncompress the video clips and camera sequences.
We do not plan to release the video clips for the other two datasets due to potential copyright issues. Instead, we provide the lists of selected video clips; please follow the preprocessing pipeline described in our paper to process them.
Training
Start training with the following script, overriding the arguments as needed:
Inference
Download the pretrained checkpoints from the release page.
Note: We use a rendering resolution of 64 for all experiments except the geometry visualization in the qualitative comparison, where we use a rendering resolution of 128. Please download voxceleb_res64 to reproduce the results. For the teasers, use the 128 models trained on voxceleb and the mixed data.
Generating video clips:
Evaluation
Compute Fréchet Video Distance (FVD):
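For reference, FVD is the Fréchet distance between Gaussians fitted to I3D features of real and generated clips. A minimal sketch of that distance, assuming the (N, D) feature arrays have already been extracted (function and argument names here are illustrative, not the repo's API):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_*: (N, D) arrays of I3D video features, assumed precomputed
    elsewhere (e.g., by the repo's evaluation script).
    """
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the product of the two covariances.
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerics
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```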
Generating multi-view results for evaluation:
Compute Chamfer Distance:
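For reference, the symmetric Chamfer distance between two point sets can be sketched as below (squared-distance variant; whether the evaluation script squares the distances is an assumption, and the names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a, points_b):
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3).

    A sketch of the metric only; the paper evaluates it on surface points
    extracted from the generated geometry.
    """
    d_ab, _ = cKDTree(points_b).query(points_a)  # nearest point in b for each a
    d_ba, _ = cKDTree(points_a).query(points_b)  # nearest point in a for each b
    return float((d_ab ** 2).mean() + (d_ba ** 2).mean())
```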
Compute Identity (ID) Consistency:
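For reference, identity consistency is typically the mean pairwise cosine similarity of ArcFace embeddings extracted from different rendered views of the same sample; a sketch (the exact aggregation used by the script is an assumption):

```python
import numpy as np

def id_consistency(embeddings):
    """Mean pairwise cosine similarity of identity embeddings.

    embeddings: (V, D) array, one ArcFace embedding per rendered view.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = e @ e.T          # (V, V) cosine similarity matrix
    v = len(e)
    # Average over distinct view pairs (exclude the diagonal self-similarity).
    return float((sim.sum() - v) / (v * (v - 1)))
```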
Compute Multi-view Warping Error:
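For reference, the multi-view warping error compares a rendered view against another view warped into its frame via the rendered depth and relative camera pose; the photometric part can be sketched as below (the warping step itself is omitted, and the names are illustrative):

```python
import numpy as np

def warping_error(image_a, image_b_warped, mask):
    """Masked L1 photometric error between a view and a warped view.

    image_a, image_b_warped: (H, W, C) float arrays; image_b_warped is
    assumed to have been warped into view a's frame already.
    mask: (H, W) validity mask for pixels with a valid warp.
    """
    mask = mask.astype(bool)
    return float(np.abs(image_a[mask] - image_b_warped[mask]).mean())
```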
Citation
If you find this codebase useful for your research, please cite the following entry:

@inproceedings{xu2022pv3d,
    author = {Xu, Zhongcong and Zhang, Jianfeng and Liew, Jun Hao and Zhang, Wenqing and Bai, Song and Feng, Jiashi and Shou, Mike Zheng},
    title = {PV3D: A 3D Generative Model for Portrait Video Generation},
    booktitle = {The Eleventh International Conference on Learning Representations},
    year = {2023}
}