PyTorch Scatter

This package consists of a small extension library of highly optimized sparse update (scatter and segment) operations for use in PyTorch, which are missing in the main package.
Scatter and segment operations can be roughly described as reduce operations based on a given “group-index” tensor.
Segment operations require the “group-index” tensor to be sorted, whereas scatter operations place no such requirement on it.
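The distinction can be sketched in a few lines of plain Python (a sketch of the semantics only, with hypothetical helper names; the actual library operates on PyTorch tensors with dedicated CPU/GPU kernels and backward passes):

```python
def scatter_sum(src, index, dim_size):
    # scatter: the "group-index" entries may appear in any order
    out = [0] * dim_size
    for value, group in zip(src, index):
        out[group] += value
    return out

def segment_sum_coo(src, index):
    # segment variant: index must be sorted, so every output group
    # is a contiguous run of src
    assert all(a <= b for a, b in zip(index, index[1:])), "index must be sorted"
    return scatter_sum(src, index, dim_size=index[-1] + 1)

print(scatter_sum([1, 2, 3, 4], [2, 0, 2, 1], dim_size=3))  # [2, 4, 4]
print(segment_sum_coo([2, 4, 1, 3], [0, 1, 2, 2]))          # [2, 4, 4]
```

Both calls compute the same grouped reduction; the segment form merely exploits the contiguity that sorted indices guarantee.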
The package consists of the following operations with reduction types "sum"|"mean"|"min"|"max":

- scatter based on arbitrary indices
- segment_coo based on sorted indices
- segment_csr based on compressed indices via pointers
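In the same sketch style, the pointer-based variant replaces the per-element index with compressed pointers: indptr[i] and indptr[i + 1] delimit segment i, CSR-style (hypothetical helper name, not the library API):

```python
def segment_sum_csr(src, indptr):
    # indptr has len(output) + 1 entries; consecutive pairs give the
    # [start, end) bounds of each segment, so empty segments are allowed
    return [sum(src[lo:hi]) for lo, hi in zip(indptr, indptr[1:])]

# indptr [0, 1, 2, 4] is equivalent to a sorted index of [0, 1, 2, 2]
print(segment_sum_csr([2, 4, 1, 3], [0, 1, 2, 4]))  # [2, 4, 4]
```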
In addition, we provide the following composite functions which make use of scatter_* operations under the hood: scatter_std, scatter_logsumexp, scatter_softmax and scatter_log_softmax.
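As an illustration of how such a composite works, a per-group softmax can be sketched in pure Python (semantics only; the real scatter_softmax operates on tensors and supports autograd):

```python
import math

def scatter_softmax(src, index):
    # softmax is computed independently inside each index group
    groups = {}
    for group, value in zip(index, src):
        groups.setdefault(group, []).append(value)
    # numerically stable: subtract each group's max before exponentiating
    peak = {g: max(vs) for g, vs in groups.items()}
    denom = {g: sum(math.exp(v - peak[g]) for v in vs)
             for g, vs in groups.items()}
    return [math.exp(v - peak[g]) / denom[g] for g, v in zip(index, src)]

print(scatter_softmax([0.0, 0.0, 1.0], [0, 0, 1]))  # [0.5, 0.5, 1.0]
```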
All included operations are broadcastable, work on varying data types, are implemented both for CPU and GPU with corresponding backward implementations, and are fully traceable.
Installation
Binaries
We provide pip wheels for all major OS/PyTorch/CUDA combinations, see here.
PyTorch 2.11
To install the binaries for PyTorch 2.11, simply run
where ${CUDA} should be replaced by either cpu, cu126, cu128, or cu130 depending on your PyTorch installation.
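The command follows the usual wheel-index pattern (the URL below assumes the standard pyg wheel index and PyTorch 2.11.0; adjust the version to match your installation):

```shell
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.11.0+${CUDA}.html
```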
|         | cpu | cu126 | cu128 | cu130 |
|---------|-----|-------|-------|-------|
| Linux   | ✅  | ✅    | ✅    | ✅    |
| Windows | ✅  | ✅    | ✅    | ✅    |
| macOS   | ✅  |       |       |       |
PyTorch 2.10

To install the binaries for PyTorch 2.10, simply run

where ${CUDA} should be replaced by either cpu, cu126, cu128, or cu130 depending on your PyTorch installation.

PyTorch 2.9

To install the binaries for PyTorch 2.9, simply run

where ${CUDA} should be replaced by either cpu, cu126, cu128, or cu130 depending on your PyTorch installation.

Note: Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0, PyTorch 1.12.0/1.12.1, PyTorch 1.13.0/1.13.1, PyTorch 2.0.0/2.0.1, PyTorch 2.1.0/2.1.1/2.1.2, PyTorch 2.2.0/2.2.1/2.2.2, PyTorch 2.3.0/2.3.1, PyTorch 2.4.0/2.4.1, PyTorch 2.5.0/2.5.1, PyTorch 2.6.0, PyTorch 2.7.0/2.7.1, and PyTorch 2.8.0 (following the same procedure).
For older versions, you need to explicitly specify the latest supported version number or install via pip install --no-index in order to prevent a manual installation from source.
You can look up the latest supported version number here.
From source
Ensure that at least PyTorch 1.4.0 is installed and verify that cuda/bin and cuda/include are in your $PATH and $CPATH respectively, e.g.:
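Assuming a default CUDA installation under /usr/local/cuda (adjust the prefix to your setup):

```shell
export PATH=/usr/local/cuda/bin:$PATH
export CPATH=/usr/local/cuda/include:$CPATH
```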
When running in a docker container without NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
In this case, ensure that the compute capabilities are set via TORCH_CUDA_ARCH_LIST, e.g.:
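For example (the listed compute capabilities are placeholders; set the ones matching the GPUs you target):

```shell
export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
```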
torch-scatter also offers a C++ API that contains C++ equivalents of the Python operations. For this, we need to add TorchLib to the -DCMAKE_PREFIX_PATH (run import torch; print(torch.utils.cmake_prefix_path) in Python to obtain it).
```sh
mkdir build
cd build
# Add -DWITH_CUDA=on for CUDA support
cmake -DCMAKE_PREFIX_PATH="..." ..
make
make install
```