[CICD] Add Buildkite test pipeline to workflows (#5)
Purpose
Transfer the testing process defined in `.buildkite/test-pipeline.yaml` to GitHub Actions and implement the corresponding workflow configuration in the `workflows/*.yaml` files.
Test Plan
Run a new GitHub Actions workflow locally for validation
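As a sketch under stated assumptions, a transferred workflow might take a shape like the one below. The actual steps come from `.buildkite/test-pipeline.yaml` and are not shown in this PR description, so every job name, step, and path here is illustrative only:

```yaml
# Illustrative sketch only — the real steps are defined by .buildkite/test-pipeline.yaml
name: test-pipeline
on:
  pull_request:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: pip install -r requirements.txt   # hypothetical requirements file
      - name: Run tests
        run: pytest tests/                      # hypothetical test directory
```

A workflow like this can be exercised locally with a runner such as `act` before pushing, which is what the test plan above refers to.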
Test Result
- Most of the existing test cases pass, with a pass rate above 85%;
- CI run time remains stable;
- Log outputs are clear and traceable.
Co-authored-by: zihugithub <fbye@baai.ac.cn>
Co-authored-by: liyuzhuo <lee.yuzhuo@gmail.com>
vLLM-FL is a fork of vLLM that introduces a plugin-based architecture for supporting diverse AI chips, built on top of FlagOS, a unified open-source AI system software stack.
Easy, fast, and cheap LLM serving for everyone
| Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack |
Join us at the PyTorch Conference, October 22-23 and Ray Summit, November 3-5 in San Francisco for our latest updates on vLLM and to meet the vLLM team! Register now for the largest vLLM community events of the year!
Latest News 🔥
Previous News
About
vLLM is a fast and easy-to-use library for LLM inference and serving.
Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
vLLM is fast with:
vLLM is flexible and easy to use with:
vLLM seamlessly supports most popular open-source models on HuggingFace, including:
Find the full list of supported models here.
Getting Started
Install vLLM with `pip` or from source.
Visit our documentation to learn more.
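As a sketch, the usual install commands for upstream vLLM look like the following; note that the package name `vllm` and the repository URL are the upstream project's, and this fork's package or repository may differ:

```shell
# Install the latest vLLM release from PyPI
pip install vllm

# Or build and install from source
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```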
Contributing
We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.
Sponsors
vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support!
Cash Donations:
Compute Resources:
Slack Sponsor: Anyscale
We also have an official fundraising venue through OpenCollective. We plan to use the fund to support the development, maintenance, and adoption of vLLM.
Citation
If you use vLLM for your research, please cite our paper:
Contact Us
Media Kit