WMP

Code for the paper:

World Model-based Perception for Visual Legged Locomotion

Hang Lai, Jiahang Cao, JiaFeng Xu, Hongtao Wu, Yunfeng Lin, Tao Kong, Yong Yu, Weinan Zhang

🌐 Project Website | 📄 Paper

Requirements

  1. Create a new Python virtual environment with Python 3.6, 3.7, or 3.8 (3.8 recommended)
  2. Install pytorch:
    • pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
  3. Install Isaac Gym
  4. Install other packages:
    • sudo apt-get install build-essential --fix-missing
    • sudo apt-get install ninja-build
    • pip install setuptools==59.5.0
    • pip install ruamel_yaml==0.17.4
    • sudo apt install libgl1-mesa-glx -y
    • pip install opencv-contrib-python
    • pip install -r requirements.txt
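
Before installing the GPU packages, it can help to confirm the interpreter falls in the supported range. This is a small sanity-check sketch; `check_python_version` is a hypothetical helper, not part of this repo:

```python
import sys

def check_python_version(version_info=tuple(sys.version_info[:2])):
    """Return True if the interpreter is in the supported 3.6-3.8 range."""
    major, minor = version_info[0], version_info[1]
    return (3, 6) <= (major, minor) <= (3, 8)

if __name__ == "__main__":
    # Prints a hint rather than failing hard, since newer interpreters
    # may still work but are untested with this codebase.
    print("supported" if check_python_version()
          else "unsupported: use Python 3.6-3.8 (3.8 recommended)")
```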

Training

python legged_gym/scripts/train.py --task=a1_amp --headless --sim_device=cuda:0

Training takes about 23 GB of GPU memory; we recommend at least 10k iterations.
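
For reference, the flags in the command above can be sketched with `argparse`. The actual repo parses its arguments through Isaac Gym / legged_gym utilities, so this block only mirrors the flags shown in this README and is not the repo's real parser:

```python
import argparse

def build_parser():
    """Sketch of the CLI flags used by train.py/play.py in this README."""
    p = argparse.ArgumentParser(description="WMP training/play (flag sketch)")
    p.add_argument("--task", default="a1_amp",
                   help="task name, e.g. a1_amp")
    p.add_argument("--headless", action="store_true",
                   help="run without a viewer window")
    p.add_argument("--sim_device", default="cuda:0",
                   help="simulation device, e.g. cuda:0")
    p.add_argument("--terrain", default=None,
                   help="terrain type for play.py, e.g. climb")
    return p
```

For example, `build_parser().parse_args(["--task=a1_amp", "--headless"])` yields a namespace with `task="a1_amp"` and `headless=True`.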

Visualization

Please make sure you have trained WMP before running visualization:

python legged_gym/scripts/play.py --task=a1_amp --sim_device=cuda:0 --terrain=climb
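
The `--sim_device` value follows PyTorch's device-string syntax (`cpu` or `cuda:<index>`). A minimal hypothetical parser for such strings, shown only to illustrate the format and not taken from the repo:

```python
def parse_sim_device(device_str):
    """Split a device string like 'cuda:0' into (type, index).

    Strings without an index, such as 'cpu', default to index 0.
    """
    if ":" in device_str:
        dev_type, idx = device_str.split(":", 1)
        return dev_type, int(idx)
    return device_str, 0
```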

Acknowledgments

We thank the authors of the open-source projects this codebase builds on.

Citation

If you find this project helpful, please consider citing our paper:

@article{lai2024world,
  title={World Model-based Perception for Visual Legged Locomotion},
  author={Lai, Hang and Cao, Jiahang and Xu, Jiafeng and Wu, Hongtao and Lin, Yunfeng and Kong, Tao and Yu, Yong and Zhang, Weinan},
  journal={arXiv preprint arXiv:2409.16784},
  year={2024}
}