News
2025.9.05: We released our latest project, Puppeteer, for automatic rigging and animation. Check it out!
2025.8.07: We updated the skeleton-generation weights to fix a normalization bug in the training data loader that caused mesh-skeleton misalignment. Check here for an example.
2025.6.18: We released the Blender script for reading rigs (.txt) and meshes (.obj) from GLB files.
2025.4.18: We have updated the preprocessed dataset to exclude entries with skinning issues (118 from the training set and 3 from the test set, whose skinning-weight row sums fell below 1) and duplicated joint names (2 from the training set). You can download the cleaned data again or update it yourself by running: python data_utils/update_npz_rm_issue_data.py. You should still normalize skinning weights in your dataloader.
2025.4.16: Released weights for skeleton generation.
2025.3.28: Released inference code for skeleton generation.
2025.3.20: Released the preprocessed data of Articulation-XL2.0 (with vertex normals added), split into a training set (46.7k) and a test set (2k). Try it now!
2025.2.27: MagicArticulate was accepted by CVPR2025, see you in Nashville! Data and code are coming soon—stay tuned! 🚀
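For the dataloader normalization mentioned in the 2025.4.18 note, a minimal sketch could look like the following (an assumption on our part: the skinning weights are treated here as an (n_vertices, n_joints) float array; the repo's actual loader may differ):

```python
import numpy as np

def normalize_skinning_weights(weights, eps=1e-8):
    """Renormalize skinning weights so each vertex's row sums to 1.

    `weights` is assumed to be an (n_vertices, n_joints) float array.
    Rows whose sum is (near) zero are left unchanged to avoid dividing by zero.
    """
    weights = np.asarray(weights, dtype=np.float64)
    row_sums = weights.sum(axis=1, keepdims=True)
    safe = row_sums > eps
    return np.where(safe, weights / np.where(safe, row_sums, 1.0), weights)
```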
Dataset: Articulation-XL2.0
Overview
We introduce Articulation-XL2.0, a large-scale dataset featuring over 48K 3D models with high-quality articulation annotations, filtered from Objaverse-XL. Compared to version 1.0, Articulation-XL2.0 includes 3D models with multiple components. For further details, please refer to the statistics below.
Note: The rigged data has been deduplicated from over 150K models. The quality of most of the data has been manually verified.
Metadata
We provide the following information in the metadata of Articulation-XL2.0.
Data visualization
We provide a method for visualizing 3D models with skeletons using Pyrender, modified from Lab4D. For more details, please refer here.
Autoregressive skeleton generation
Overview
We formulate skeleton generation as a sequence modeling problem, leveraging an autoregressive transformer to naturally handle the varying numbers of bones and joints across skeletons. If you are interested in autoregressive models in GenAI, check out this awesome list.
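As a rough illustration of the sequence-modeling view, a skeleton can be flattened into a 1-D token sequence by quantizing bone-endpoint coordinates (a hypothetical sketch: the bin count, coordinate range, and vocabulary below are illustrative assumptions, not the paper's exact tokenization):

```python
import numpy as np

def tokenize_skeleton(bones, n_bins=128):
    """Flatten a skeleton into a 1-D token sequence for an autoregressive model.

    `bones` is a list of (parent_xyz, child_xyz) pairs with coordinates
    normalized to [-0.5, 0.5]. Each coordinate is quantized to one of
    `n_bins` discrete tokens, so every bone contributes 6 tokens.
    """
    coords = np.asarray(bones, dtype=np.float64).reshape(-1, 6)  # one row per bone
    tokens = np.clip(((coords + 0.5) * n_bins).astype(int), 0, n_bins - 1)
    return tokens.reshape(-1)  # sequence length = 6 * number of bones

def detokenize_skeleton(tokens, n_bins=128):
    """Invert the quantization back to approximate bone coordinates, (n_bones, 6)."""
    return (np.asarray(tokens, dtype=np.float64).reshape(-1, 6) + 0.5) / n_bins - 0.5
```

Because the sequence length is simply 6 times the bone count, a transformer trained on such sequences handles skeletons with any number of bones.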
Sequence ordering
We provide two ways of sequence ordering: spatial and hierarchical. For more details, please refer to the paper.
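The two orderings can be illustrated as follows (a sketch under assumed conventions: "spatial" sorts joints by coordinates and "hierarchical" traverses the tree breadth-first from the root; the paper's exact rules may differ):

```python
from collections import deque

def spatial_order(joints):
    """Order joint indices by (z, y, x) coordinate: one plausible spatial sort."""
    return sorted(range(len(joints)),
                  key=lambda i: (joints[i][2], joints[i][1], joints[i][0]))

def hierarchical_order(parents, root=0):
    """Breadth-first traversal from the root joint, so parents precede children.

    `parents[i]` is the parent index of joint i, with -1 marking the root.
    """
    children = {}
    for child, parent in enumerate(parents):
        if parent >= 0:
            children.setdefault(parent, []).append(child)
    order, queue = [], deque([root])
    while queue:
        j = queue.popleft()
        order.append(j)
        queue.extend(children.get(j, []))
    return order
```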
Evaluation
You can run the following command to evaluate our models on Articulation-XL2.0-test and ModelResource-test from RigNet. For your convenience, we also provide ModelResource-test in our format (download it here). The inference process requires 4.6 GB of VRAM and takes 1–2 seconds per inference.
bash eval.sh
You can change save_name for different evaluation runs and check the quantitative results afterwards in evaluate_results.txt.
These are the numbers (all metrics in units of 10^-2) that you should be able to reproduce using the released weights and the current version of the codebase:
| Test set | Articulation-XL2.0-test (CD-J2J / CD-J2B / CD-B2B) | ModelResource-test (CD-J2J / CD-J2B / CD-B2B) |
|---|---|---|
| Paper (train on 1.0, spatial) | - / - / - | 4.103 / 3.101 / 2.672 |
| Paper (train on 1.0, hier) | - / - / - | 4.451 / 3.454 / 2.998 |
| Train on Arti-XL2.0 (spatial) | 3.024 / 2.260 / 1.915 | 4.003 / 3.026 / 2.586 |
| Train on Arti-XL2.0 (hier) | 3.172 / 2.419 / 2.050 | 4.129 / 3.149 / 2.705 |
The performance comparison between models trained on Articulation-XL1.0 and 2.0 demonstrates the importance of scaling up the dataset while maintaining high quality. If you wish to compare your method with MagicArticulate trained on Articulation-XL2.0, you may use these results as a baseline.
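For reference, a metric in the CD-J2J family can be sketched as a symmetric Chamfer distance between the predicted and ground-truth joint sets (a sketch only; the released evaluation script is authoritative for the exact definition and any normalization):

```python
import numpy as np

def chamfer_j2j(joints_pred, joints_gt):
    """Symmetric Chamfer distance between two joint sets, each (N, 3).

    For every predicted joint, find its nearest ground-truth joint (and
    vice versa), then sum the two mean nearest-neighbor distances.
    """
    a = np.asarray(joints_pred, dtype=np.float64)
    b = np.asarray(joints_gt, dtype=np.float64)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

CD-J2B and CD-B2B follow the same pattern but measure distances to bones (line segments) rather than joints.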
Demo
We provide some examples to test our models by running the following command. You can also test our models on your own 3D objects; remember to change the input_dir.
Citation
@InProceedings{Song_2025_CVPR,
author = {Song, Chaoyue and Zhang, Jianfeng and Li, Xiu and Yang, Fan and Chen, Yiwen and Xu, Zhongcong and Liew, Jun Hao and Guo, Xiaoyang and Liu, Fayao and Feng, Jiashi and Lin, Guosheng},
title = {MagicArticulate: Make Your 3D Models Articulation-Ready},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {15998-16007}
}
MagicArticulate: Make Your 3D Models Articulation-Ready
Chaoyue Song1,2, Jianfeng Zhang2*, Xiu Li2, Fan Yang1, Yiwen Chen1, Zhongcong Xu2,
Jun Hao Liew2, Xiaoyang Guo2, Fayao Liu3, Jiashi Feng2, Guosheng Lin1*
*Corresponding authors
1 Nanyang Technological University 2 Bytedance Seed 3 A*STAR
CVPR 2025
Project | Paper | Video | Data: Articulation-XL2.0
Preprocessed data
We provide the preprocessed data saved in NPZ files, which contain the following information:
Check here to see how we save the data and how to read it.
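A hypothetical sketch of inspecting one of the NPZ files (the key names in the test below, such as `vertices` and `joints`, are illustrative placeholders; refer to the dataset documentation for the actual field names):

```python
import numpy as np

def inspect_npz(path):
    """Return {key: (shape, dtype)} for every array stored in an NPZ file."""
    with np.load(path) as data:
        return {key: (data[key].shape, data[key].dtype) for key in data.files}
```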
Installation
Then download the checkpoints of Michelangelo and our released weights for skeleton generation:
Acknowledgment
We appreciate the insightful discussions with Zhan Xu regarding RigNet and with Biao Zhang regarding Functional Diffusion. The code is built upon MeshAnything, Functional Diffusion, RigNet, Michelangelo, and Lab4D.