PyTorch Sparse

This package consists of a small extension library of optimized sparse matrix operations with autograd support.
This package currently consists of the following methods:
All included operations work on varying data types and are implemented both for CPU and GPU.
To avoid the hassle of creating torch.sparse_coo_tensor, this package defines operations on sparse tensors by simply passing index and value tensors as arguments (with the same shapes as defined in PyTorch).
Note that only value comes with autograd support, as index is discrete and therefore not differentiable.
Installation
Binaries
We provide pip wheels for all major OS/PyTorch/CUDA combinations, see here.
PyTorch 2.11
To install the binaries for PyTorch 2.11, simply run
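The install command itself did not survive extraction; assuming the wheel index follows the https://data.pyg.org/whl pattern used for this project's binaries, it is presumably:

```shell
pip install torch-sparse -f https://data.pyg.org/whl/torch-2.11.0+${CUDA}.html
```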
where ${CUDA} should be replaced by either cpu, cu126, cu128, or cu130 depending on your PyTorch installation.
|         | cpu | cu126 | cu128 | cu130 |
|---------|-----|-------|-------|-------|
| Linux   | ✅   | ✅     | ✅     | ✅     |
| Windows | ✅   | ✅     | ✅     | ✅     |
| macOS   | ✅   |       |       |       |
Note: Binaries for PyTorch 2.10 and PyTorch 2.9 are provided analogously. Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0, PyTorch 1.12.0/1.12.1, PyTorch 1.13.0/1.13.1, PyTorch 2.0.0/2.0.1, PyTorch 2.1.0/2.1.1/2.1.2, PyTorch 2.2.0/2.2.1/2.2.2, PyTorch 2.3.0/2.3.1, PyTorch 2.4.0/2.4.1, PyTorch 2.5.0/2.5.1, PyTorch 2.6.0, PyTorch 2.7.0/2.7.1, and PyTorch 2.8.0 (following the same procedure).
For older versions, you need to explicitly specify the latest supported version number or install via pip install --no-index in order to prevent a manual installation from source.
You can look up the latest supported version number here.
From source
Ensure that at least PyTorch 1.7.0 is installed and verify that cuda/bin and cuda/include are in your $PATH and $CPATH respectively, e.g.:
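The example was lost in extraction; a typical setup, assuming the CUDA toolkit lives under /usr/local/cuda, might look like:

```shell
export PATH=/usr/local/cuda/bin:$PATH
export CPATH=/usr/local/cuda/include:$CPATH
```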
If you want to additionally build torch-sparse with METIS support, e.g. for partitioning, please download and install the METIS library by following the instructions in the Install.txt file.
Note that METIS needs to be installed with 64 bit IDXTYPEWIDTH by changing include/metis.h.
Afterwards, set the environment variable WITH_METIS=1.
Then run:
pip install torch-scatter torch-sparse
When running in a docker container without NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
In this case, ensure that the compute capabilities are set via TORCH_CUDA_ARCH_LIST, e.g.:
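The example did not survive extraction; a plausible setting is shown below (the exact list of compute capabilities depends on the GPUs you target):

```shell
export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
```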
Functions

Coalesce

torch_sparse.coalesce(index, value, m, n, op="add") -> (torch.LongTensor, torch.Tensor)
Row-wise sorts index and removes duplicate entries.
Duplicate entries are removed by scattering them together.
For scattering, any operation of torch_scatter can be used.
Parameters
index (LongTensor) - The index tensor of the sparse matrix.
value (Tensor) - The value tensor of the sparse matrix.
m (int) - The first dimension of the sparse matrix.
n (int) - The second dimension of the sparse matrix.
op (string, optional) - The scatter operation to use. (default: "add")
Returns
index (LongTensor) - The coalesced index tensor of the sparse matrix.
value (Tensor) - The coalesced value tensor of the sparse matrix.
Transpose

torch_sparse.transpose(index, value, m, n) -> (torch.LongTensor, torch.Tensor)

Transposes dimensions 0 and 1 of a sparse matrix.
Parameters
index (LongTensor) - The index tensor of the sparse matrix.
value (Tensor) - The value tensor of the sparse matrix.
m (int) - The first dimension of the sparse matrix.
n (int) - The second dimension of the sparse matrix.
coalesced (bool, optional) - If set to False, will not coalesce the output. (default: True)
Returns
index (LongTensor) - The transposed index tensor of the sparse matrix.
value (Tensor) - The transposed value tensor of the sparse matrix.
Example
Sparse Dense Matrix Multiplication

torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor

Matrix product of a sparse matrix with a dense matrix.
Parameters
index (LongTensor) - The index tensor of the sparse matrix.
value (Tensor) - The value tensor of the sparse matrix.
m (int) - The first dimension of the sparse matrix.
n (int) - The second dimension of the sparse matrix.
matrix (Tensor) - The dense matrix.
Returns
out (Tensor) - The dense output matrix.
Example
Sparse Sparse Matrix Multiplication

torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n) -> (torch.LongTensor, torch.Tensor)

Matrix product of two sparse tensors. Both input sparse matrices need to be coalesced (use the coalesced attribute to force).
Parameters
indexA (LongTensor) - The index tensor of the first sparse matrix.
valueA (Tensor) - The value tensor of the first sparse matrix.
indexB (LongTensor) - The index tensor of the second sparse matrix.
valueB (Tensor) - The value tensor of the second sparse matrix.
m (int) - The first dimension of the first sparse matrix.
k (int) - The second dimension of the first sparse matrix and the first dimension of the second sparse matrix.
n (int) - The second dimension of the second sparse matrix.
coalesced (bool, optional) - If set to True, will coalesce both input sparse matrices. (default: False)
Returns
index (LongTensor) - The output index tensor of the sparse matrix.
value (Tensor) - The output value tensor of the sparse matrix.
Example
Running tests
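The command itself did not survive extraction; the project's test suite runs with pytest, so presumably:

```shell
pytest
```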
C++ API

torch-sparse also offers a C++ API that contains C++ equivalents of the Python functions. For this, add TorchLib to the -DCMAKE_PREFIX_PATH (run import torch; print(torch.utils.cmake_prefix_path) to obtain it).

mkdir build
cd build
# Add -DWITH_CUDA=on for CUDA support
cmake -DCMAKE_PREFIX_PATH="..." ..
make
make install