rusty1s/pytorch_sparse

PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations


This package consists of a small extension library of optimized sparse matrix operations with autograd support. It currently provides the following methods:

  • Coalesce
  • Transpose
  • Sparse Dense Matrix Multiplication
  • Sparse Sparse Matrix Multiplication

All included operations work on varying data types and are implemented both for CPU and GPU. To avoid the hassle of creating torch.sparse_coo_tensor, this package defines operations on sparse tensors by simply passing index and value tensors as arguments (with the same shapes as defined in PyTorch). Note that only value comes with autograd support, as index is discrete and therefore not differentiable.
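For illustration, a minimal sketch of this representation (the example matrix and variable names are chosen here purely for demonstration):

import torch

# A sparse 3x3 matrix with nonzeros A[0, 0] = 1, A[1, 2] = 2, A[2, 1] = 3.
# Instead of building torch.sparse_coo_tensor(index, value, (3, 3)), the
# operations in this package take `index` and `value` directly.
index = torch.tensor([[0, 1, 2],   # row indices
                      [0, 2, 1]])  # column indices
value = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)  # only `value` can carry gradients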

Installation

Binaries

We provide pip wheels for all major OS/PyTorch/CUDA combinations, see here.

PyTorch 2.8

To install the binaries for PyTorch 2.8.0, simply run

pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.8.0+${CUDA}.html

where ${CUDA} should be replaced by either cpu, cu126, cu128, or cu129 depending on your PyTorch installation.

           cpu    cu126   cu128   cu129
Linux      ✅      ✅      ✅      ✅
Windows    ✅      ✅      ✅      ✅
macOS      ✅

PyTorch 2.7

To install the binaries for PyTorch 2.7.0, simply run

pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.7.0+${CUDA}.html

where ${CUDA} should be replaced by either cpu, cu118, cu126, or cu128 depending on your PyTorch installation.

           cpu    cu118   cu126   cu128
Linux      ✅      ✅      ✅      ✅
Windows    ✅      ✅      ✅      ✅
macOS      ✅

PyTorch 2.6

To install the binaries for PyTorch 2.6.0, simply run

pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.6.0+${CUDA}.html

where ${CUDA} should be replaced by either cpu, cu118, cu124, or cu126 depending on your PyTorch installation.

           cpu    cu118   cu124   cu126
Linux      ✅      ✅      ✅      ✅
Windows    ✅      ✅      ✅      ✅
macOS      ✅

Note: Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0, PyTorch 1.12.0/1.12.1, PyTorch 1.13.0/1.13.1, PyTorch 2.0.0/2.0.1, PyTorch 2.1.0/2.1.1/2.1.2, PyTorch 2.2.0/2.2.1/2.2.2, PyTorch 2.3.0/2.3.1, PyTorch 2.4.0/2.4.1, and PyTorch 2.5.0/2.5.1 (following the same procedure). For older versions, you need to explicitly specify the latest supported version number or install via pip install --no-index in order to prevent a manual installation from source. You can look up the latest supported version number here.

From source

Ensure that at least PyTorch 1.7.0 is installed and verify that cuda/bin and cuda/include are in your $PATH and $CPATH respectively, e.g.:

$ python -c "import torch; print(torch.__version__)"
>>> 1.7.0

$ echo $PATH
>>> /usr/local/cuda/bin:...

$ echo $CPATH
>>> /usr/local/cuda/include:...

If you want to additionally build torch-sparse with METIS support, e.g. for partitioning, please download and install the METIS library by following the instructions in the Install.txt file. Note that METIS needs to be installed with 64 bit IDXTYPEWIDTH by changing include/metis.h. Afterwards, set the environment variable WITH_METIS=1.

Then run:

pip install torch-scatter torch-sparse

When running in a Docker container without an NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail. In this case, ensure that the compute capabilities are set via TORCH_CUDA_ARCH_LIST, e.g.:

export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"

Functions

Coalesce

torch_sparse.coalesce(index, value, m, n, op="add") -> (torch.LongTensor, torch.Tensor)

Row-wise sorts index and removes duplicate entries. Duplicate entries are removed by scattering them together. For scattering, any operation of torch_scatter can be used.

Parameters

  • index (LongTensor) - The index tensor of the sparse matrix.
  • value (Tensor) - The value tensor of the sparse matrix.
  • m (int) - The first dimension of the sparse matrix.
  • n (int) - The second dimension of the sparse matrix.
  • op (string, optional) - The scatter operation to use. (default: "add")

Returns

  • index (LongTensor) - The coalesced index tensor of the sparse matrix.
  • value (Tensor) - The coalesced value tensor of the sparse matrix.

Example

import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

index, value = coalesce(index, value, m=3, n=2)
print(index)
tensor([[0, 1, 1, 2],
        [1, 0, 1, 0]])
print(value)
tensor([[6.0, 8.0],
        [7.0, 9.0],
        [3.0, 4.0],
        [5.0, 6.0]])
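Since any torch_scatter reduction can be used to combine duplicates, op can be swapped out. A short sketch of this variation on the same data, assuming the "max" reduction (element-wise maximum over duplicate entries):

import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

# Combine duplicates by element-wise maximum instead of summation:
# the two entries at (0, 1) should reduce to [4, 5] and the two at (1, 0) to [6, 7].
index, value = coalesce(index, value, m=3, n=2, op="max")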

Transpose

torch_sparse.transpose(index, value, m, n) -> (torch.LongTensor, torch.Tensor)

Transposes dimensions 0 and 1 of a sparse matrix.

Parameters

  • index (LongTensor) - The index tensor of the sparse matrix.
  • value (Tensor) - The value tensor of the sparse matrix.
  • m (int) - The first dimension of the sparse matrix.
  • n (int) - The second dimension of the sparse matrix.
  • coalesced (bool, optional) - If set to False, will not coalesce the output. (default: True)

Returns

  • index (LongTensor) - The transposed index tensor of the sparse matrix.
  • value (Tensor) - The transposed value tensor of the sparse matrix.

Example

import torch
from torch_sparse import transpose

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

index, value = transpose(index, value, 3, 2)
print(index)
tensor([[0, 0, 1, 1],
        [1, 2, 0, 1]])
print(value)
tensor([[7.0, 9.0],
        [5.0, 6.0],
        [6.0, 8.0],
        [3.0, 4.0]])
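The coalesced flag listed above can be used to skip the final coalescing step, e.g. when the result only feeds into an operation that coalesces anyway. A brief sketch, assuming the flag is passed as a keyword argument:

import torch
from torch_sparse import transpose

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

# Skip the final coalescing step; the transposed output may then still
# contain duplicate and unsorted entries.
index_t, value_t = transpose(index, value, 3, 2, coalesced=False)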

Sparse Dense Matrix Multiplication

torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor

Matrix product of a sparse matrix with a dense matrix.

Parameters

  • index (LongTensor) - The index tensor of the sparse matrix.
  • value (Tensor) - The value tensor of the sparse matrix.
  • m (int) - The first dimension of the sparse matrix.
  • n (int) - The second dimension of the sparse matrix.
  • matrix (Tensor) - The dense matrix.

Returns

  • out (Tensor) - The dense output matrix.

Example

import torch
from torch_sparse import spmm

index = torch.tensor([[0, 0, 1, 2, 2],
                      [0, 2, 1, 0, 1]])
value = torch.Tensor([1, 2, 4, 1, 3])
matrix = torch.Tensor([[1, 4], [2, 5], [3, 6]])

out = spmm(index, value, 3, 3, matrix)
print(out)
tensor([[7.0, 16.0],
        [8.0, 20.0],
        [7.0, 19.0]])
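Since only value carries autograd support, gradients with respect to the sparse values can be obtained in the usual way. A small sketch on the same matrices (the use of requires_grad and backward here is illustrative):

import torch
from torch_sparse import spmm

index = torch.tensor([[0, 0, 1, 2, 2],
                      [0, 2, 1, 0, 1]])
value = torch.tensor([1., 2., 4., 1., 3.], requires_grad=True)
matrix = torch.Tensor([[1, 4], [2, 5], [3, 6]])

out = spmm(index, value, 3, 3, matrix)
out.sum().backward()
print(value.grad)  # one gradient entry per nonzero; `index` receives no gradient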

Sparse Sparse Matrix Multiplication

torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n) -> (torch.LongTensor, torch.Tensor)

Matrix product of two sparse tensors. Both input sparse matrices need to be coalesced (use the coalesced argument to force this).

Parameters

  • indexA (LongTensor) - The index tensor of the first sparse matrix.
  • valueA (Tensor) - The value tensor of the first sparse matrix.
  • indexB (LongTensor) - The index tensor of the second sparse matrix.
  • valueB (Tensor) - The value tensor of the second sparse matrix.
  • m (int) - The first dimension of the first sparse matrix.
  • k (int) - The second dimension of the first sparse matrix and first dimension of the second sparse matrix.
  • n (int) - The second dimension of the second sparse matrix.
  • coalesced (bool, optional) - If set to True, will coalesce both input sparse matrices. (default: False)

Returns

  • index (LongTensor) - The output index tensor of the sparse matrix.
  • value (Tensor) - The output value tensor of the sparse matrix.

Example

import torch
from torch_sparse import spspmm

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.Tensor([1, 2, 3, 4, 5])
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.Tensor([2, 4])

indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
print(indexC)
tensor([[0, 1, 2],
        [0, 1, 1]])
print(valueC)
tensor([8.0, 6.0, 8.0])
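If the inputs may contain duplicate entries, the coalesced flag documented above can be set so that both matrices are coalesced first. A short sketch on the same data:

import torch
from torch_sparse import spspmm

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.Tensor([1, 2, 3, 4, 5])
indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.Tensor([2, 4])

# Let spspmm coalesce both inputs before multiplying.
indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2, coalesced=True)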

Running tests

pytest

C++ API

torch-sparse also offers a C++ API that contains C++ equivalents of the Python functions. For this, we need to add TorchLib to the -DCMAKE_PREFIX_PATH (run import torch; print(torch.utils.cmake_prefix_path) to obtain it).

mkdir build
cd build
# Add -DWITH_CUDA=on for CUDA support
cmake -DCMAKE_PREFIX_PATH="..." ..
make
make install

