Magic-Boost: Boost 3D Generation with Multi-View Conditioned Diffusion
Fan Yang · Jianfeng Zhang† · Yichun Shi · Bowen Chen · Chenxu Zhang · Huichao Zhang · Xiaofeng Yang · Xiu Li · Jiashi Feng · Guosheng Lin
Nanyang Technological University | ByteDance
If you like our project, please give us a star ⭐ on GitHub for the latest updates.
📢 News
[2024.05.26] Released the refinement code for Instant3D and InstantMesh. A better refinement method and support for more coarse-stage models are coming soon, stay tuned!
[2024.04.09] Released the MagicBoost paper and project page.
⚒️ Installation
This part is the same as the original MVDream-threestudio or ImageDream-threestudio setup. Skip it if you have already installed the environment.
Pretrained weights
Clone the model card from the Huggingface MagicBoost Model Page and put the .pt files under ./extern/MVC/checkpoints/.
🔥 Quick Start
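Before starting, it can help to sanity-check that the pretrained weights landed in the expected folder. A minimal stdlib sketch (the directory and .pt naming follow the instructions above; the helper itself is hypothetical, not part of the repo):

```python
from pathlib import Path

def find_checkpoints(ckpt_dir="./extern/MVC/checkpoints"):
    """Return sorted .pt checkpoint paths under ckpt_dir, or raise if none."""
    ckpts = sorted(Path(ckpt_dir).glob("*.pt"))
    if not ckpts:
        raise FileNotFoundError(
            f"No .pt files found in {ckpt_dir}; download them from the "
            "Huggingface MagicBoost model page first."
        )
    return ckpts
```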
1. Get the multi-view images and coarse meshes.
We are currently unable to release the reproduced Instant3D model. However, we provide two ways to use our model:
Instant3D
We provide pre-computed four-view images and extracted meshes from the reproduced Instant3D, which allow users to run a quick test of the model. Please download the pre-computed images and meshes from the Huggingface MagicBoost Demo Page and put them into ./load/mv_instant3d and ./load/mesh_instant3d respectively.
https://github.com/magic-research/magic-boost/assets/25397555/a42c96d2-6d8e-4227-b94b-c3951d267155
InstantMesh
Thanks to the open project InstantMesh, our model now supports using InstantMesh as a base model. Please install InstantMesh following the open repo InstantMesh and run its commands to get the multi-view images and meshes. We provide a script to preprocess the multi-view images into the right format; simply run it, then put the final multi-view images and meshes into ./load/mv_instantmesh and ./load/mesh_instantmesh.
https://github.com/magic-research/magic-boost/assets/25397555/9ba9cc5b-0848-48be-b270-3ea2220bde0e
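The preprocessing script referenced above ships with the repo. Purely as an illustration of the final copy step, here is a hypothetical stdlib sketch; the file names, the `view_{i}.png` scheme, and the helper itself are all assumptions, not the script's actual interface:

```python
import shutil
from pathlib import Path

def collect_views(src_dir, dst_dir="./load/mv_instantmesh"):
    """Copy per-view PNGs from an InstantMesh output folder into the
    layout expected under ./load (the naming scheme is an assumption)."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for i, view in enumerate(sorted(Path(src_dir).glob("*.png"))):
        target = dst / f"view_{i}.png"  # hypothetical naming scheme
        shutil.copy(view, target)
        copied.append(target.name)
    return copied
```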
2. Convert Mesh into NeRF
We first convert the coarse mesh into a NeRF for differentiable rendering. To convert the mesh into a NeRF, simply run
Recommended: we also provide a script to generate the commands automatically; simply run
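The helper script emits one launch command per coarse mesh. A hypothetical sketch of what such generation could look like; the `launch.py` flags and config path are assumptions modeled on threestudio-style repos, not this repo's actual CLI:

```python
from pathlib import Path

def make_convert_commands(mesh_dir="./load/mesh_instant3d",
                          config="configs/mesh-to-nerf.yaml"):
    """Emit one hypothetical launch command per mesh file found."""
    cmds = []
    for mesh in sorted(Path(mesh_dir).glob("*.obj")):
        cmds.append(
            f"python launch.py --config {config} --train "
            f"system.mesh_path={mesh}"
        )
    return cmds
```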
3. Refine
To refine the converted NeRF, simply run
Notes:
By default, we use the first image in the multi-view condition set as the reference image. To use another view, adjust default_[elevation_deg/azimuth_deg/fov/camera_dist] in the config according to the selected view.
Recommended: we also provide a script that automatically searches the checkpoints and writes the commands; simply run
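When a view other than the first is used as the reference, its camera parameters must match the defaults. Assuming the four condition views share one elevation and are spaced 90° apart in azimuth (an assumption about the data layout, as are all numeric defaults below), the values could be derived as:

```python
def reference_view_defaults(view_index, elevation_deg=0.0,
                            azimuth_step_deg=90.0,
                            fov=40.0, camera_dist=1.5):
    """Return the default_* camera values for the chosen reference view.
    All numeric defaults here are placeholders, not the repo's values."""
    return {
        "default_elevation_deg": elevation_deg,
        "default_azimuth_deg": (view_index * azimuth_step_deg) % 360.0,
        "default_fov": fov,
        "default_camera_dist": camera_dist,
    }
```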
Tips:
We use a batch size of 4 by default in our experiments, which requires an A100 GPU for the refinement stage. To run the model with less VRAM, lower data.random_camera.batchsize in the config file. This may lead to slightly degraded results compared to a batch size of 4; increasing the total number of refinement steps can help compensate and recover quality.
For the diffusion-only model, refer to the subdirectory ./extern/MVC/. See ./extern/MVC/README.md for instructions.
🚩 TODO/Updates
- Release Magic-Boost refinement code.
- Support for InstantMesh.
- Release our reproduced Instant3D.
- Release our own Gaussian-based reconstruction model.
- Release Huggingface Gradio demo.
- Higher-resolution and better mesh refinement methods are coming soon.
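As a rough rule of thumb for the VRAM tip in the section above: keeping the total number of rendered samples constant when lowering the batch size means scaling the refinement steps up inversely. This is a heuristic of ours, not a documented formula from the repo:

```python
import math

def scaled_refine_steps(base_steps, base_batch=4, new_batch=2):
    """Heuristic: keep total rendered samples roughly constant when
    the per-step batch size is reduced."""
    return math.ceil(base_steps * base_batch / new_batch)
```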
🙏 Acknowledgements
This code is built on several open repos, including threestudio, MVDream, ImageDream-threestudio, InstantMesh, and DreamCraft3D. We sincerely thank the authors of these projects for their excellent contributions to 3D generation.
🎓 Citation
If you find Magic-Boost helpful, please consider citing:
@misc{yang2024magicboost,
      title={Magic-Boost: Boost 3D Generation with Multi-View Conditioned Diffusion},
      author={Fan Yang and Jianfeng Zhang and Yichun Shi and Bowen Chen and Chenxu Zhang and Huichao Zhang and Xiaofeng Yang and Jiashi Feng and Guosheng Lin},
      year={2024},
      eprint={2404.06429},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}