# WMP

Code for the paper:

**World Model-based Perception for Visual Legged Locomotion**

Hang Lai, Jiahang Cao, Jiafeng Xu, Hongtao Wu, Yunfeng Lin, Tao Kong, Yong Yu, Weinan Zhang

🌐 Project Website | 📄 [Paper](https://arxiv.org/abs/2409.16784)
## Requirements
```bash
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
cd isaacgym/python && pip install -e .
sudo apt-get install build-essential --fix-missing
sudo apt-get install ninja-build
pip install setuptools==59.5.0
pip install ruamel_yaml==0.17.4
sudo apt install libgl1-mesa-glx -y
pip install opencv-contrib-python
pip install -r requirements.txt
```
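As a minimal sanity check (not from the original instructions, just a quick way to confirm the setup above worked), the following should run without errors. Note that Isaac Gym must be imported before PyTorch, or it raises an ImportError:

```bash
# Optional check: isaacgym must come before torch in the import order.
python -c "import isaacgym, torch; print(torch.__version__, torch.cuda.is_available())"
```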
## Training

Training takes about 23 GB of GPU memory, and we recommend training for at least 10k iterations.
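Given the large memory footprint, it can be worth confirming your GPU's capacity before launching a long run. A quick query (standard `nvidia-smi`, nothing specific to this repo):

```bash
# Training reportedly needs ~23 GB of GPU memory; check what your card provides.
nvidia-smi --query-gpu=name,memory.total --format=csv
```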
## Visualization

Please make sure you have trained WMP before running the visualization.
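As a rough sketch only (this README does not name the visualization entry point, so the script path and flags below are assumptions): Isaac Gym-based locomotion repos typically provide a play-style script that loads a trained checkpoint:

```bash
# Hypothetical command: substitute the repo's actual script name and flags.
python legged_gym/scripts/play.py --task=<task_name> --load_run=<run_name>
```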
## Acknowledgments
We thank the authors of the following projects for making their code open source:
## Citation

If you find this project helpful, please consider citing our paper:

```bibtex
@article{lai2024world,
  title={World Model-based Perception for Visual Legged Locomotion},
  author={Lai, Hang and Cao, Jiahang and Xu, Jiafeng and Wu, Hongtao and Lin, Yunfeng and Kong, Tao and Yu, Yong and Zhang, Weinan},
  journal={arXiv preprint arXiv:2409.16784},
  year={2024}
}
```