Flux: Fine-grained Computation-communication Overlapping GPU Kernel Library
Flux is a communication-overlapping library for dense/MoE models on GPUs, providing high-performance and pluggable kernels to support various parallelism strategies in model training/inference.
Flux’s efficient kernels are compatible with PyTorch and can be integrated into existing frameworks easily, supporting various NVIDIA GPU architectures and data types.
News
[2025/03/10] 🔥 We have released COMET: Computation-communication Overlapping for Mixture-of-Experts.
Getting started
Install Flux either from source or from PyPI.
Install from Source
git clone --recursive https://github.com/bytedance/flux.git && cd flux
# For Ampere(sm80) GPU
./build.sh --arch 80 --nvshmem
# For Ada Lovelace(sm89) GPU
./build.sh --arch 89 --nvshmem
# For Hopper(sm90) GPU
./build.sh --arch 90 --nvshmem
Install in a virtual environment
Here is a snippet to install Flux in a virtual environment with CUDA 12.4, torch 2.6.0, and Python 3.11.
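A minimal sketch, assuming a Hopper (sm90) GPU and a CUDA 12.4 toolchain; adjust --arch and the torch build for your setup:
# Create and activate a fresh virtual environment (Python 3.11 assumed)
python3.11 -m venv fluxenv
source fluxenv/bin/activate
# Install a torch build matching CUDA 12.4
pip install torch==2.6.0 --index-url https://download.pytorch.org/whl/cu124
# Clone Flux and build it as a wheel package (adjust --arch to your GPU)
git clone --recursive https://github.com/bytedance/flux.git && cd flux
./build.sh --arch 90 --nvshmem --package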
Then you should find a wheel package under the dist/ folder that is suitable for your virtual environment.
Install from PyPI
We also provide some pre-built wheels for Flux, which you can install directly with pip if the version you want is available. Currently we provide wheels for the following configurations: torch (2.4.0, 2.5.0, 2.6.0), Python (3.10, 3.11), and CUDA (12.4).
# Make sure that PyTorch is installed.
pip install byte-flux
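For example, a sketch that first pins a torch build matching one of the supported configurations (assuming CUDA 12.4; the index URL is the standard PyTorch wheel index):
# Install a wheel-compatible torch build (CUDA 12.4 assumed)
pip install torch==2.6.0 --index-url https://download.pytorch.org/whl/cu124
# Then install the pre-built Flux wheel
pip install byte-flux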
Customized Installation
Build options for source installation
Add --nvshmem to build Flux with NVSHMEM support. It is essential for the MoE kernels.
If you are tired of the cmake process, you can set the environment variable FLUX_BUILD_SKIP_CMAKE to 1 to skip cmake, provided build/CMakeCache.txt already exists.
If you want to build a wheel package, add --package to the build command and find the output wheel file under dist/ (see the example below).
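For example, a sketch of an incremental rebuild that combines these options, assuming a previous build already populated build/CMakeCache.txt:
# Rebuild for Hopper without re-running cmake, producing a wheel under dist/
FLUX_BUILD_SKIP_CMAKE=1 ./build.sh --arch 90 --nvshmem --package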
Dependencies
Flux depends on NCCL and CUTLASS, which are located under 3rdparty/, and NVSHMEM, which you can install via pip.
NCCL: Managed by git submodule automatically.
NVSHMEM: We suggest installing NVSHMEM with pip install nvidia-nvshmem-cu12; if you want to build NVSHMEM from source, you can download it from https://developer.nvidia.com/nvshmem. Flux is tested with NVSHMEM 3.2.5/3.3.9.
CUTLASS: Flux leverages CUTLASS to generate high-performance GEMM kernels. We currently use CUTLASS 4.0.0 (see the setup sketch after this list).
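A sketch of preparing these dependencies before a source build:
# NCCL and CUTLASS are vendored as git submodules under 3rdparty/
git submodule update --init --recursive
# Install NVSHMEM from PyPI (CUDA 12 build)
pip install nvidia-nvshmem-cu12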
Quick Start
Below are commands to run some basic demos once you have installed Flux successfully.
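For example (a sketch only: the script paths, sizes, and flags below are illustrative assumptions, not confirmed entry points; check the test/ directory of your checkout):
# Hypothetical demo launches; actual script names may differ across versions
# GEMM fused with ReduceScatter (tensor parallelism)
torchrun --nproc_per_node=8 test/python/gemm_rs/test_gemm_rs.py 4096 12288 49152 --dtype=float16
# AllGather fused with GEMM (tensor parallelism)
torchrun --nproc_per_node=8 test/python/ag_gemm/test_ag_kernel.py 4096 49152 12288 --dtype=float16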
You can check out the documentation for more details!
For more detailed usage of the MoE kernels, please refer to Flux MoE Usage, and try some of its examples as a quick start. A minimal MoE layer can be implemented in only a few tens of lines of code using Flux!
For some performance numbers, please refer to Performance Doc.
To learn more about the design principles of Flux, please refer to Design Doc.
License
The Flux Project is under the Apache License v2.0.
Citation
If you use Flux in a scientific publication, we encourage you to cite the related papers using the following references:
@misc{chang2024flux,
title={FLUX: Fast Software-based Communication Overlap On GPUs Through Kernel Fusion},
author={Li-Wen Chang and Wenlei Bao and Qi Hou and Chengquan Jiang and Ningxin Zheng and Yinmin Zhong and Xuanrun Zhang and Zuquan Song and Ziheng Jiang and Haibin Lin and Xin Jin and Xin Liu},
year={2024},
eprint={2406.06858},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@misc{zhang2025comet,
title={Comet: Fine-grained Computation-communication Overlapping for Mixture-of-Experts},
author={Shulai Zhang and Ningxin Zheng and Haibin Lin and Ziheng Jiang and Wenlei Bao and Chengquan Jiang and Qi Hou and Weihao Cui and Size Zheng and Li-Wen Chang and Quan Chen and Xin Liu},
year={2025},
eprint={2502.19811},
archivePrefix={arXiv},
primaryClass={cs.DC}
}
About ByteDance Seed Team
Founded in 2023, ByteDance Seed Team is dedicated to crafting the industry’s most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society.