vllm-plugin-FL
vllm-plugin-FL is a plugin for the vLLM inference/serving framework, built on FlagOS’s unified multi-chip backend — including the unified operator library FlagGems and the unified communication library FlagCX. It extends vLLM’s capabilities and performance across diverse hardware environments. Without changing vLLM’s original interfaces or usage patterns, the same command can run model inference/serving on different chips.
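Because the plugin keeps vLLM's interfaces intact, the familiar commands run unchanged on every supported chip. A sketch (the model name here is only an example, not a statement of support):

```shell
# Same serving command on NVIDIA, Ascend, or any other supported backend:
vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000

# Querying the server also works exactly as with stock vLLM:
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-7B-Instruct", "prompt": "Hello", "max_tokens": 16}'
```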
Supported Models and Chips
In theory, vllm-plugin-FL can support all models available in vLLM, as long as no unsupported operators are involved. The tables below summarize the current support status of end-to-end verified models and chips, including both fully supported and in-progress (“Merging”) entries.
Supported Models
Supported Chips
Quick Start
Setup
Install vllm-plugin-FL
2.1 Clone the repository:
2.2 Install
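Steps 2.1 and 2.2 can be sketched as follows; the repository URL is an assumption (verify it against the project page):

```shell
# 2.1 Clone the repository (URL assumed -- check the project page):
git clone https://github.com/FlagOpen/vllm-plugin-FL.git
cd vllm-plugin-FL

# 2.2 Install the plugin into the same environment as vLLM:
pip install .
```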
Install FlagGems
3.1 Install Build Dependencies
3.2 Install FlagGems
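A sketch of steps 3.1 and 3.2; the repository URL and the exact build-dependency list are assumptions, so consult the FlagGems documentation for the authoritative versions:

```shell
# 3.1 Install build dependencies (exact packages may differ -- see the FlagGems docs):
pip install ninja cmake

# 3.2 Install FlagGems from source (URL assumed):
git clone https://github.com/FlagOpen/FlagGems.git
cd FlagGems
pip install .
```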
(Optional) Install FlagCX
4.1 Clone the repository:
4.2 Build the library with different flags targeting different platforms:
4.3 Set environment
4.4 Install FlagCX
Note: [xxx] should be selected according to the current platform, e.g., nvidia, ascend, etc.
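Steps 4.1–4.4 might look like the following sketch. The repository URL, make flag, and environment-variable names are assumptions, and [xxx] is the platform placeholder from the note above — check the FlagCX build documentation before copying any of this:

```shell
# 4.1 Clone the repository (URL assumed):
git clone https://github.com/FlagOpen/FlagCX.git
cd FlagCX

# 4.2 Build with the flag matching your platform ([xxx] = nvidia, ascend, ...):
make USE_[xxx]=1            # hypothetical flag name -- see the FlagCX docs

# 4.3 Point the environment at the build output (variable name hypothetical):
export FLAGCX_HOME=$(pwd)

# 4.4 Install the FlagCX Python bindings:
pip install .
```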
If there are multiple plugins in the current environment, you can select vllm-plugin-fl by setting VLLM_PLUGINS='fl'.
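vLLM discovers plugins through Python entry points and honors the VLLM_PLUGINS allow-list. The stdlib sketch below mimics that lookup so you can see which plugins are visible in your environment; the helper function is hypothetical (not part of vLLM's API), and the group name follows vLLM's general-plugin entry-point convention:

```python
import os
from importlib.metadata import entry_points


def discover_plugins(group: str = "vllm.general_plugins") -> list[str]:
    """Return names of installed plugins in the given entry-point group,
    filtered by the VLLM_PLUGINS environment variable if it is set."""
    allowed = os.environ.get("VLLM_PLUGINS")
    try:
        eps = entry_points(group=group)      # Python >= 3.10
    except TypeError:
        eps = entry_points().get(group, [])  # fallback for older Pythons
    names = [ep.name for ep in eps]
    if allowed is not None:
        # VLLM_PLUGINS is a comma-separated allow-list, e.g. "fl".
        wanted = {n.strip() for n in allowed.split(",")}
        names = [n for n in names if n in wanted]
    return names


print(discover_plugins())
```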
Additional Steps for Ascend
Install FlagTree
Set required environment variable
Enable eager execution
Ascend requires eager execution. Add enforce_eager=True to the LLM constructor or pass --enforce-eager on the command line.
Run a Task
Offline Batched Inference
With vLLM and vLLM-fl installed, you can start generating text for a list of input prompts (i.e., offline batched inference). See the example script: offline_inference, or use the Python script below directly.
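A minimal offline batched-inference sketch using vLLM's standard Python API; the model name is only an example, and on Ascend you would add enforce_eager=True as noted above:

```python
from vllm import LLM, SamplingParams

# A batch of input prompts to complete in one offline run.
prompts = [
    "Hello, my name is",
    "The capital of France is",
    "The future of AI is",
]

# Sampling settings shared by every prompt in the batch.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Model name is an example; on Ascend, pass enforce_eager=True here.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")

# Generate completions for the whole batch in a single call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")
```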
Advanced use
For dispatch environment variable usage, see environment variables usage.
Using the CUDA communication library
If you want to use the original CUDA communication library, you can unset the following environment variables.
Using native CUDA operators
If you want to use the original CUDA operators, you can set the following environment variables.