# Quickstart¶

This tutorial provides a quick walkthrough of the classes and operators in the dgl.sparse package.

[1]:

# Install the required packages.

import os
import torch
os.environ['TORCH'] = torch.__version__
os.environ['DGLBACKEND'] = "pytorch"

# Uncomment below to install required packages. If the CUDA version is not 11.6,
# check the https://www.dgl.ai/pages/start.html to find the supported CUDA
# version and corresponding command to install DGL.
#!pip install dgl -f https://data.dgl.ai/wheels/cu116/repo.html > /dev/null

try:
    import dgl.sparse as dglsp
    installed = True
except ImportError:
    installed = False
print("DGL installed!" if installed else "DGL not found!")

DGL installed!


## Sparse Matrix¶

The core abstraction of DGL’s sparse package is the SparseMatrix class. Compared with other sparse matrix libraries (such as scipy.sparse and torch.sparse), DGL’s SparseMatrix is specialized for deep learning workloads on structured data (e.g., Graph Neural Networks), with the following features:

• Automatic sparse format. There is no need to choose between sparse formats: there is a single SparseMatrix class, and it selects the best format for each operation.

• Non-zero elements can be scalars or vectors. This makes it easy to model relations (e.g., edges) with vector representations.

• Fully PyTorch compatible. The package is built upon PyTorch and is natively compatible with other tools in the PyTorch ecosystem.

### Creating a DGL Sparse Matrix¶

The simplest way to create a sparse matrix is the spmatrix() API, which takes the indices of the non-zero elements. The indices are stored in a tensor of shape (2, nnz), where the i-th non-zero element is at position (indices[0][i], indices[1][i]). The code below creates a 3x3 sparse matrix.

[2]:

import torch
import dgl.sparse as dglsp

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
A = dglsp.spmatrix(i)  # 1.0 is the default value for non-zero elements.

print(A)
print("")
print("In dense format:")
print(A.to_dense())

SparseMatrix(indices=tensor([[1, 1, 2],
[0, 2, 0]]),
values=tensor([1., 1., 1.]),
shape=(3, 3), nnz=3)

In dense format:
tensor([[0., 0., 0.],
[1., 0., 1.],
[1., 0., 0.]])
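As a sanity check, the mapping from COO indices to the dense form can be reproduced in plain PyTorch (no DGL needed) with index assignment; this is just a sketch of the semantics, not how DGL stores the matrix:

```python
import torch

# Indices from the cell above: non-zero element k lives at
# (i[0][k], i[1][k]) in the dense matrix.
i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])

dense = torch.zeros(3, 3)
dense[i[0], i[1]] = 1.0  # 1.0 is the default non-zero value

print(dense)
```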


If not specified, the shape is inferred automatically from the indices, but you can also specify it explicitly.

[3]:

i = torch.tensor([[0, 0, 1],
                  [0, 2, 0]])

A1 = dglsp.spmatrix(i)
print(f"Implicit Shape: {A1.shape}")
print(A1.to_dense())
print("")

A2 = dglsp.spmatrix(i, shape=(3, 3))
print(f"Explicit Shape: {A2.shape}")
print(A2.to_dense())

Implicit Shape: (2, 3)
tensor([[1., 0., 1.],
[1., 0., 0.]])

Explicit Shape: (3, 3)
tensor([[1., 0., 1.],
[1., 0., 0.],
[0., 0., 0.]])


Non-zero elements of a sparse matrix can take either scalar or vector values.

[4]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
# The number of values must match the number of non-zero elements in the
# sparse matrix.
scalar_val = torch.tensor([1., 2., 3.])
vector_val = torch.tensor([[1., 1.], [2., 2.], [3., 3.]])

print("-----Scalar Values-----")
A = dglsp.spmatrix(i, scalar_val)
print(A)
print("")
print("In dense format:")
print(A.to_dense())
print("")

print("-----Vector Values-----")
A = dglsp.spmatrix(i, vector_val)
print(A)
print("")
print("In dense format:")
print(A.to_dense())

-----Scalar Values-----
SparseMatrix(indices=tensor([[1, 1, 2],
[0, 2, 0]]),
values=tensor([1., 2., 3.]),
shape=(3, 3), nnz=3)

In dense format:
tensor([[0., 0., 0.],
[1., 0., 2.],
[3., 0., 0.]])

-----Vector Values-----
SparseMatrix(indices=tensor([[1, 1, 2],
[0, 2, 0]]),
values=tensor([[1., 1.],
[2., 2.],
[3., 3.]]),
shape=(3, 3), nnz=3, val_size=(2,))

In dense format:
tensor([[[0., 0.],
[0., 0.],
[0., 0.]],

[[1., 1.],
[0., 0.],
[2., 2.]],

[[3., 3.],
[0., 0.],
[0., 0.]]])


Duplicate indices

A sparse matrix may contain multiple entries at the same position; coalesce() merges them by summing their values.

[5]:

i = torch.tensor([[0, 0, 0, 1],
                  [0, 2, 2, 0]])
val = torch.tensor([1., 2., 3., 4.])
A = dglsp.spmatrix(i, val)
print(A)
print(f"Whether A contains duplicate indices: {A.has_duplicate()}")
print("")

B = A.coalesce()
print(B)
print(f"Whether B contains duplicate indices: {B.has_duplicate()}")

SparseMatrix(indices=tensor([[0, 0, 0, 1],
[0, 2, 2, 0]]),
values=tensor([1., 2., 3., 4.]),
shape=(2, 3), nnz=4)
Whether A contains duplicate indices: True

SparseMatrix(indices=tensor([[0, 0, 1],
[0, 2, 0]]),
values=tensor([1., 5., 4.]),
shape=(2, 3), nnz=3)
Whether B contains duplicate indices: False
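What coalesce() does can be sketched in plain PyTorch: flatten each (row, col) pair to a single index, then sum the values of duplicate entries. This is a sketch of the semantics only, not DGL's implementation:

```python
import torch

i = torch.tensor([[0, 0, 0, 1],
                  [0, 2, 2, 0]])
val = torch.tensor([1., 2., 3., 4.])
shape = (2, 3)

# Flatten (row, col) to a single index, then sum duplicate entries.
flat = i[0] * shape[1] + i[1]          # tensor([0, 2, 2, 3])
uniq, inverse = torch.unique(flat, return_inverse=True)
summed = torch.zeros(len(uniq)).scatter_add_(0, inverse, val)

rows, cols = uniq // shape[1], uniq % shape[1]
print(torch.stack([rows, cols]))  # coalesced indices
print(summed)                     # coalesced values
```

The duplicated entries at (0, 2) with values 2 and 3 merge into a single entry with value 5, matching B above.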


val_like

val_like() creates a new sparse matrix that retains the non-zero indices of a given sparse matrix but uses different non-zero values.

[6]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
val = torch.tensor([1., 2., 3.])
A = dglsp.spmatrix(i, val)

new_val = torch.tensor([4., 5., 6.])
B = dglsp.val_like(A, new_val)
print(B)

SparseMatrix(indices=tensor([[1, 1, 2],
[0, 2, 0]]),
values=tensor([4., 5., 6.]),
shape=(3, 3), nnz=3)


Create a sparse matrix from various sparse formats

• from_coo(): Create a sparse matrix from COO format.

• from_csr(): Create a sparse matrix from CSR format.

• from_csc(): Create a sparse matrix from CSC format.

[7]:

row = torch.tensor([0, 1, 2, 2, 2])
col = torch.tensor([1, 2, 0, 1, 2])

print("-----Create from COO format-----")
A = dglsp.from_coo(row, col)
print(A)
print("")
print("In dense format:")
print(A.to_dense())
print("")

indptr = torch.tensor([0, 1, 2, 5])
indices = torch.tensor([1, 2, 0, 1, 2])

print("-----Create from CSR format-----")
A = dglsp.from_csr(indptr, indices)
print(A)
print("")
print("In dense format:")
print(A.to_dense())
print("")

print("-----Create from CSC format-----")
B = dglsp.from_csc(indptr, indices)
print(B)
print("")
print("In dense format:")
print(B.to_dense())

-----Create from COO format-----
SparseMatrix(indices=tensor([[0, 1, 2, 2, 2],
[1, 2, 0, 1, 2]]),
values=tensor([1., 1., 1., 1., 1.]),
shape=(3, 3), nnz=5)

In dense format:
tensor([[0., 1., 0.],
[0., 0., 1.],
[1., 1., 1.]])

-----Create from CSR format-----
SparseMatrix(indices=tensor([[0, 1, 2, 2, 2],
[1, 2, 0, 1, 2]]),
values=tensor([1., 1., 1., 1., 1.]),
shape=(3, 3), nnz=5)

In dense format:
tensor([[0., 1., 0.],
[0., 0., 1.],
[1., 1., 1.]])

-----Create from CSC format-----
SparseMatrix(indices=tensor([[1, 2, 0, 1, 2],
[0, 1, 2, 2, 2]]),
values=tensor([1., 1., 1., 1., 1.]),
shape=(3, 3), nnz=5)

In dense format:
tensor([[0., 0., 1.],
[1., 0., 1.],
[0., 1., 1.]])


### Attributes and methods of a DGL Sparse Matrix¶

[8]:

i = torch.tensor([[0, 1, 1, 2],
                  [1, 0, 2, 0]])
val = torch.tensor([1., 2., 3., 4.])
A = dglsp.spmatrix(i, val)

print(f"Shape of sparse matrix: {A.shape}")
print(f"The number of nonzero elements of sparse matrix: {A.nnz}")
print(f"Datatype of sparse matrix: {A.dtype}")
print(f"Device sparse matrix is stored on: {A.device}")
print(f"Get the values of the nonzero elements: {A.val}")
print(f"Get the row indices of the nonzero elements: {A.row}")
print(f"Get the column indices of the nonzero elements: {A.col}")
print(f"Get the coordinate (COO) representation: {A.coo()}")
print(f"Get the compressed sparse row (CSR) representation: {A.csr()}")
print(f"Get the compressed sparse column (CSC) representation: {A.csc()}")

Shape of sparse matrix: (3, 3)
The number of nonzero elements of sparse matrix: 4
Datatype of sparse matrix: torch.float32
Device sparse matrix is stored on: cpu
Get the values of the nonzero elements: tensor([1., 2., 3., 4.])
Get the row indices of the nonzero elements: tensor([0, 1, 1, 2])
Get the column indices of the nonzero elements: tensor([1, 0, 2, 0])
Get the coordinate (COO) representation: (tensor([0, 1, 1, 2]), tensor([1, 0, 2, 0]))
Get the compressed sparse row (CSR) representation: (tensor([0, 1, 3, 4]), tensor([1, 0, 2, 0]), tensor([0, 1, 2, 3]))
Get the compressed sparse column (CSC) representation: (tensor([0, 2, 3, 4]), tensor([1, 2, 0, 1]), tensor([1, 3, 0, 2]))
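To see how the CSR triplet relates to the matrix, here is a plain-PyTorch sketch that expands indptr/indices back to dense. The third tensor in the CSR output above is the permutation mapping CSR order back to the original value order; in this example it is the identity, so val is already in CSR order:

```python
import torch

# CSR triplet printed above for matrix A.
indptr = torch.tensor([0, 1, 3, 4])
indices = torch.tensor([1, 0, 2, 0])
val = torch.tensor([1., 2., 3., 4.])

n_rows = len(indptr) - 1
dense = torch.zeros(n_rows, 3)
for r in range(n_rows):
    # Row r owns the slice indptr[r]:indptr[r+1] of indices/val.
    for k in range(int(indptr[r]), int(indptr[r + 1])):
        dense[r, indices[k]] = val[k]

print(dense)
```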


dtype and/or device conversion

[9]:

i = torch.tensor([[0, 1, 1, 2],
                  [1, 0, 2, 0]])
val = torch.tensor([1., 2., 3., 4.])
A = dglsp.spmatrix(i, val)

B = A.to(device='cpu', dtype=torch.int32)
print(f"Device sparse matrix is stored on: {B.device}")
print(f"Datatype of sparse matrix: {B.dtype}")

Device sparse matrix is stored on: cpu
Datatype of sparse matrix: torch.int32


Similar to PyTorch, DGL sparse also provides fine-grained APIs for dtype and/or device conversion (see the API documentation).

## Diagonal Matrix¶

A diagonal matrix is a special sparse matrix in which all entries outside the main diagonal are zero.

### Initializing a DGL Diagonal Matrix¶

A DGL diagonal matrix can be created with dglsp.diag().

An identity matrix is a diagonal matrix whose diagonal values are all 1.0. Use dglsp.identity() to create one.

[10]:

val = torch.tensor([1., 2., 3., 4.])
D = dglsp.diag(val)
print(D)

I = dglsp.identity(shape=(3, 3))
print(I)

DiagMatrix(values=tensor([1., 2., 3., 4.]),
shape=(4, 4))
DiagMatrix(values=tensor([1., 1., 1.]),
shape=(3, 3))


### Attributes and methods of a DGL Diagonal Matrix¶

[11]:

val = torch.tensor([1., 2., 3., 4.])
D = dglsp.diag(val)

print(f"Shape of diagonal matrix: {D.shape}")
print(f"The number of nonzero elements of diagonal matrix: {D.nnz}")
print(f"Datatype of diagonal matrix: {D.dtype}")
print(f"Device diagonal matrix is stored on: {D.device}")
print(f"Get the values of the nonzero elements: {D.val}")

Shape of diagonal matrix: (4, 4)
The number of nonzero elements of diagonal matrix: 4
Datatype of diagonal matrix: torch.float32
Device diagonal matrix is stored on: cpu
Get the values of the nonzero elements: tensor([1., 2., 3., 4.])


## Operations on Sparse Matrix and Diagonal Matrix¶

• Elementwise operations: A + B, A - B, A * B, A / B, A ** scalar

• Reduce operations: reduce(), sum(), smax(), smin(), smean()

• Matrix transformations: SparseMatrix.transpose() (or SparseMatrix.T), SparseMatrix.neg(), DiagMatrix.transpose() (or DiagMatrix.T), DiagMatrix.neg(), DiagMatrix.inv()

• Matrix multiplication: matmul(), sddmm()

In this tutorial we print sparse matrices in dense format, since it is more intuitive to read.

### Elementwise operations¶

add(A, B), equivalent to A + B

The supported combinations are shown as follows.

| A \ B        | DiagMatrix | SparseMatrix | scalar |
|--------------|------------|--------------|--------|
| DiagMatrix   | Y          | Y            | N      |
| SparseMatrix | Y          | Y            | N      |
| scalar       | N          | N            | N      |

[12]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
val = torch.tensor([1., 2., 3.])
A1 = dglsp.spmatrix(i, val, shape=(3, 3))
print("A1:")
print(A1.to_dense())

i = torch.tensor([[0, 1, 2],
                  [0, 2, 1]])
val = torch.tensor([4., 5., 6.])
A2 = dglsp.spmatrix(i, val, shape=(3, 3))
print("A2:")
print(A2.to_dense())

val = torch.tensor([-1., -2., -3.])
D1 = dglsp.diag(val)
print("D1:")
print(D1.to_dense())

val = torch.tensor([-4., -5., -6.])
D2 = dglsp.diag(val)
print("D2:")
print(D2.to_dense())

print("A1 + A2:")
print((A1 + A2).to_dense())

print("A1 + D1:")
print((A1 + D1).to_dense())

print("D1 + D2:")
print((D1 + D2).to_dense())

A1:
tensor([[0., 0., 0.],
[1., 0., 2.],
[3., 0., 0.]])
A2:
tensor([[4., 0., 0.],
[0., 0., 5.],
[0., 6., 0.]])
D1:
tensor([[-1.,  0.,  0.],
[ 0., -2.,  0.],
[ 0.,  0., -3.]])
D2:
tensor([[-4.,  0.,  0.],
[ 0., -5.,  0.],
[ 0.,  0., -6.]])
A1 + A2:
tensor([[4., 0., 0.],
[1., 0., 7.],
[3., 6., 0.]])
A1 + D1:
tensor([[-1.,  0.,  0.],
[ 1., -2.,  2.],
[ 3.,  0., -3.]])
D1 + D2:
tensor([[-5.,  0.,  0.],
[ 0., -7.,  0.],
[ 0.,  0., -9.]])


sub(A, B), equivalent to A - B

The supported combinations are shown as follows.

| A \ B        | DiagMatrix | SparseMatrix | scalar |
|--------------|------------|--------------|--------|
| DiagMatrix   | Y          | Y            | N      |
| SparseMatrix | Y          | Y            | N      |
| scalar       | N          | N            | N      |

[13]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
val = torch.tensor([1., 2., 3.])
A1 = dglsp.spmatrix(i, val, shape=(3, 3))
print("A1:")
print(A1.to_dense())

i = torch.tensor([[0, 1, 2],
                  [0, 2, 1]])
val = torch.tensor([4., 5., 6.])
A2 = dglsp.spmatrix(i, val, shape=(3, 3))
print("A2:")
print(A2.to_dense())

val = torch.tensor([-1., -2., -3.])
D1 = dglsp.diag(val)
print("D1:")
print(D1.to_dense())

val = torch.tensor([-4., -5., -6.])
D2 = dglsp.diag(val)
print("D2:")
print(D2.to_dense())

print("A1 - A2:")
print((A1 - A2).to_dense())

print("A1 - D1:")
print((A1 - D1).to_dense())

print("D1 - A1:")
print((D1 - A1).to_dense())

print("D1 - D2:")
print((D1 - D2).to_dense())

A1:
tensor([[0., 0., 0.],
[1., 0., 2.],
[3., 0., 0.]])
A2:
tensor([[4., 0., 0.],
[0., 0., 5.],
[0., 6., 0.]])
D1:
tensor([[-1.,  0.,  0.],
[ 0., -2.,  0.],
[ 0.,  0., -3.]])
D2:
tensor([[-4.,  0.,  0.],
[ 0., -5.,  0.],
[ 0.,  0., -6.]])
A1 - A2:
tensor([[-4.,  0.,  0.],
[ 1.,  0., -3.],
[ 3., -6.,  0.]])
A1 - D1:
tensor([[1., 0., 0.],
[1., 2., 2.],
[3., 0., 3.]])
D1 - A1:
tensor([[-1.,  0.,  0.],
[-1., -2., -2.],
[-3.,  0., -3.]])
D1 - D2:
tensor([[3., 0., 0.],
[0., 3., 0.],
[0., 0., 3.]])


mul(A, B), equivalent to A * B

The supported combinations are shown as follows.

| A \ B        | DiagMatrix | SparseMatrix | scalar |
|--------------|------------|--------------|--------|
| DiagMatrix   | Y          | N            | Y      |
| SparseMatrix | N          | N            | Y      |
| scalar       | Y          | Y            | N      |

[14]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
val = torch.tensor([1., 2., 3.])
A = dglsp.spmatrix(i, val, shape=(3, 3))
print("A:")
print(A.to_dense())

print("A * 3:")
print((A * 3).to_dense())
print("3 * A:")
print((3 * A).to_dense())

val = torch.tensor([-1., -2., -3.])
D1 = dglsp.diag(val)
print("D1:")
print(D1.to_dense())

val = torch.tensor([-4., -5., -6.])
D2 = dglsp.diag(val)
print("D2:")
print(D2.to_dense())

print("D1 * -2:")
print((D1 * -2).to_dense())
print("-2 * D1:")
print((-2 * D1).to_dense())

print("D1 * D2:")
print((D1 * D2).to_dense())

A:
tensor([[0., 0., 0.],
[1., 0., 2.],
[3., 0., 0.]])
A * 3:
tensor([[0., 0., 0.],
[3., 0., 6.],
[9., 0., 0.]])
3 * A:
tensor([[0., 0., 0.],
[3., 0., 6.],
[9., 0., 0.]])
D1:
tensor([[-1.,  0.,  0.],
[ 0., -2.,  0.],
[ 0.,  0., -3.]])
D2:
tensor([[-4.,  0.,  0.],
[ 0., -5.,  0.],
[ 0.,  0., -6.]])
D1 * -2:
tensor([[2., 0., 0.],
[0., 4., 0.],
[0., 0., 6.]])
-2 * D1:
tensor([[2., 0., 0.],
[0., 4., 0.],
[0., 0., 6.]])
D1 * D2:
tensor([[ 4.,  0.,  0.],
[ 0., 10.,  0.],
[ 0.,  0., 18.]])


div(A, B), equivalent to A / B

The supported combinations are shown as follows.

| A \ B        | DiagMatrix | SparseMatrix | scalar |
|--------------|------------|--------------|--------|
| DiagMatrix   | Y          | N            | Y      |
| SparseMatrix | N          | N            | Y      |
| scalar       | N          | N            | N      |

[15]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
val = torch.tensor([1., 2., 3.])
A = dglsp.spmatrix(i, val, shape=(3, 3))
print("A:")
print(A.to_dense())

print("A / 2:")
print((A / 2).to_dense())

val = torch.tensor([-1., -2., -3.])
D1 = dglsp.diag(val)
print("D1:")
print(D1.to_dense())

val = torch.tensor([-4., -5., -6.])
D2 = dglsp.diag(val)
print("D2:")
print(D2.to_dense())

print("D1 / D2:")
print((D1 / D2).to_dense())

print("D1 / 2:")
print((D1 / 2).to_dense())

A:
tensor([[0., 0., 0.],
[1., 0., 2.],
[3., 0., 0.]])
A / 2:
tensor([[0.0000, 0.0000, 0.0000],
[0.5000, 0.0000, 1.0000],
[1.5000, 0.0000, 0.0000]])
D1:
tensor([[-1.,  0.,  0.],
[ 0., -2.,  0.],
[ 0.,  0., -3.]])
D2:
tensor([[-4.,  0.,  0.],
[ 0., -5.,  0.],
[ 0.,  0., -6.]])
D1 / D2:
tensor([[0.2500, 0.0000, 0.0000],
[0.0000, 0.4000, 0.0000],
[0.0000, 0.0000, 0.5000]])
D1 / 2:
tensor([[-0.5000,  0.0000,  0.0000],
[ 0.0000, -1.0000,  0.0000],
[ 0.0000,  0.0000, -1.5000]])


power(A, B), equivalent to A ** B

The supported combinations are shown as follows.

| A \ B        | DiagMatrix | SparseMatrix | scalar |
|--------------|------------|--------------|--------|
| DiagMatrix   | N          | N            | Y      |
| SparseMatrix | N          | N            | Y      |
| scalar       | N          | N            | N      |

[16]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
val = torch.tensor([1., 2., 3.])
A = dglsp.spmatrix(i, val, shape=(3, 3))
print("A:")
print(A.to_dense())

print("A ** 3:")
print((A ** 3).to_dense())

val = torch.tensor([-1., -2., -3.])
D = dglsp.diag(val)
print("D:")
print(D.to_dense())

print("D ** 2:")
print((D ** 2).to_dense())

A:
tensor([[0., 0., 0.],
        [1., 0., 2.],
        [3., 0., 0.]])
A ** 3:
tensor([[ 0.,  0.,  0.],
        [ 1.,  0.,  8.],
        [27.,  0.,  0.]])
D:
tensor([[-1.,  0.,  0.],
        [ 0., -2.,  0.],
        [ 0.,  0., -3.]])
D ** 2:
tensor([[1., 0., 0.],
        [0., 4., 0.],
        [0., 0., 9.]])


### Reduce operations¶

All DGL sparse reduce operations consider only the non-zero elements. To distinguish them from dense PyTorch reduce operations, which include zero elements, we use the names smax, smin, and smean (the s stands for sparse).

[17]:

i = torch.tensor([[0, 1, 1, 2],
                  [1, 0, 2, 0]])
val = torch.tensor([1., 2., 3., 4.])
A = dglsp.spmatrix(i, val)
print(A.T.to_dense())
print("")

# O1, O2 will have the same value.
O1 = A.reduce(0, 'sum')
O2 = A.sum(0)
print("Reduce with reducer:sum along dim = 0:")
print(O1)
print("")

# O3, O4 will have the same value.
O3 = A.reduce(0, 'smax')
O4 = A.smax(0)
print("Reduce with reducer:smax along dim = 0:")
print(O3)
print("")

# O5, O6 will have the same value.
O5 = A.reduce(0, 'smin')
O6 = A.smin(0)
print("Reduce with reducer:smin along dim = 0:")
print(O5)
print("")

# O7, O8 will have the same value.
O7 = A.reduce(0, 'smean')
O8 = A.smean(0)
print("Reduce with reducer:smean along dim = 0:")
print(O7)
print("")

tensor([[0., 2., 4.],
        [1., 0., 0.],
        [0., 3., 0.]])

Reduce with reducer:sum along dim = 0:
tensor([6., 1., 3.])

Reduce with reducer:smax along dim = 0:
tensor([4., 1., 3.])

Reduce with reducer:smin along dim = 0:
tensor([2., 1., 3.])

Reduce with reducer:smean along dim = 0:
tensor([3., 1., 3.])
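The difference between smean and a dense mean can be reproduced in plain PyTorch: smean averages only the stored non-zero entries, while the dense mean divides by the full dimension size. A sketch using the dense form of A from above (it assumes no stored value is exactly zero):

```python
import torch

dense = torch.tensor([[0., 1., 0.],
                      [2., 0., 3.],
                      [4., 0., 0.]])

mask = dense != 0
# smean: sum of non-zeros divided by the count of non-zeros per column.
smean = dense.sum(0) / mask.sum(0)
# Dense mean: divides by the number of rows, zeros included.
dense_mean = dense.mean(0)

print(smean)       # like A.smean(0)
print(dense_mean)
```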




### Matrix transformations¶

Sparse Matrix

[18]:

i = torch.tensor([[0, 1, 1, 2],
                  [1, 0, 2, 0]])
val = torch.tensor([1., 2., 3., 4.])
A = dglsp.spmatrix(i, val)
print(A.to_dense())
print("")

print("Get transpose of sparse matrix.")
print(A.T.to_dense())
# Alias
# A.transpose()
# A.t()
print("")

print("Get a sparse matrix with the negation of the original nonzero values.")
print(A.neg().to_dense())
print("")

tensor([[0., 1., 0.],
[2., 0., 3.],
[4., 0., 0.]])

Get transpose of sparse matrix.
tensor([[0., 2., 4.],
[1., 0., 0.],
[0., 3., 0.]])

Get a sparse matrix with the negation of the original nonzero values.
tensor([[ 0., -1.,  0.],
[-2.,  0., -3.],
[-4.,  0.,  0.]])



Diagonal Matrix

[19]:

val = torch.tensor([1., 2., 3., 4.])
D = dglsp.diag(val)
print(D.to_dense())
print("")

print("Get inverse of diagonal matrix:")
print(D.inv().to_dense())
print("")

print("Get a diagonal matrix with the negation of the original nonzero values.")
print(D.neg().to_dense())
print("")

tensor([[1., 0., 0., 0.],
[0., 2., 0., 0.],
[0., 0., 3., 0.],
[0., 0., 0., 4.]])

Get inverse of diagonal matrix:
tensor([[1.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.5000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.3333, 0.0000],
[0.0000, 0.0000, 0.0000, 0.2500]])

Get a diagonal matrix with the negation of the original nonzero values.
tensor([[-1.,  0.,  0.,  0.],
[ 0., -2.,  0.,  0.],
[ 0.,  0., -3.,  0.],
[ 0.,  0.,  0., -4.]])



### Matrix multiplication¶

matmul(A, B), equivalent to A @ B

The supported combinations are shown as follows.

| A \ B        | Tensor | DiagMatrix | SparseMatrix |
|--------------|--------|------------|--------------|
| Tensor       | Y      | N          | N            |
| DiagMatrix   | Y      | Y          | Y            |
| SparseMatrix | Y      | Y          | Y            |

Union[DiagMatrix, SparseMatrix] @ Union[DiagMatrix, SparseMatrix] -> Union[SparseMatrix, DiagMatrix]:

For an $$L \times M$$ sparse matrix A and an $$M \times N$$ sparse matrix B, A @ B is an $$L \times N$$ sparse matrix.

[20]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
val = torch.tensor([1., 2., 3.])
A1 = dglsp.spmatrix(i, val, shape=(3, 3))
print("A1:")
print(A1.to_dense())

i = torch.tensor([[0, 1, 2],
                  [0, 2, 1]])
val = torch.tensor([4., 5., 6.])
A2 = dglsp.spmatrix(i, val, shape=(3, 3))
print("A2:")
print(A2.to_dense())

val = torch.tensor([-1., -2., -3.])
D1 = dglsp.diag(val)
print("D1:")
print(D1.to_dense())

val = torch.tensor([-4., -5., -6.])
D2 = dglsp.diag(val)
print("D2:")
print(D2.to_dense())

print("A1 @ A2:")
print((A1 @ A2).to_dense())

print("A1 @ D1:")
print((A1 @ D1).to_dense())

print("D1 @ A1:")
print((D1 @ A1).to_dense())

print("D1 @ D2:")
print((D1 @ D2).to_dense())

A1:
tensor([[0., 0., 0.],
[1., 0., 2.],
[3., 0., 0.]])
A2:
tensor([[4., 0., 0.],
[0., 0., 5.],
[0., 6., 0.]])
D1:
tensor([[-1.,  0.,  0.],
[ 0., -2.,  0.],
[ 0.,  0., -3.]])
D2:
tensor([[-4.,  0.,  0.],
[ 0., -5.,  0.],
[ 0.,  0., -6.]])
A1 @ A2:
tensor([[ 0.,  0.,  0.],
[ 4., 12.,  0.],
[12.,  0.,  0.]])
A1 @ D1:
tensor([[ 0.,  0.,  0.],
[-1.,  0., -6.],
[-3.,  0.,  0.]])
D1 @ A1:
tensor([[ 0.,  0.,  0.],
[-2.,  0., -4.],
[-9.,  0.,  0.]])
D1 @ D2:
tensor([[ 4.,  0.,  0.],
[ 0., 10.,  0.],
[ 0.,  0., 18.]])


Union[DiagMatrix, SparseMatrix] @ Tensor -> Tensor:

For an $$L \times M$$ sparse matrix A and an $$M \times N$$ dense matrix B, A @ B is an $$L \times N$$ dense matrix.

[21]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
val = torch.tensor([1., 2., 3.])
A = dglsp.spmatrix(i, val, shape=(3, 3))
print("A:")
print(A.to_dense())

val = torch.tensor([-1., -2., -3.])
D = dglsp.diag(val)
print("D:")
print(D.to_dense())

X = torch.tensor([[11., 22.], [33., 44.], [55., 66.]])
print("X:")
print(X)

print("A @ X:")
print(A @ X)

print("D @ X:")
print(D @ X)

A:
tensor([[0., 0., 0.],
[1., 0., 2.],
[3., 0., 0.]])
D:
tensor([[-1.,  0.,  0.],
[ 0., -2.,  0.],
[ 0.,  0., -3.]])
X:
tensor([[11., 22.],
[33., 44.],
[55., 66.]])
A @ X:
tensor([[  0.,   0.],
[121., 154.],
[ 33.,  66.]])
D @ X:
tensor([[ -11.,  -22.],
[ -66.,  -88.],
[-165., -198.]])
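Sparse-dense matrix multiplication is the core of GNN message passing: row i of A @ X is the weighted sum of the feature rows of node i's neighbors. A plain-PyTorch sketch of that interpretation, using the same A and X as the cell above (a sketch of the semantics, not DGL's kernel):

```python
import torch

row = torch.tensor([1, 1, 2])     # destination nodes
col = torch.tensor([0, 2, 0])     # source nodes (neighbors)
val = torch.tensor([1., 2., 3.])  # edge weights
X = torch.tensor([[11., 22.], [33., 44.], [55., 66.]])

# Gather each neighbor's features, scale by the edge weight, and
# accumulate into the destination row -- exactly A @ X.
out = torch.zeros(3, X.shape[1])
out.index_add_(0, row, val.unsqueeze(1) * X[col])

print(out)
```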


This operator also supports batched sparse-dense matrix multiplication. The sparse matrix A should have shape $$L \times M$$, where the non-zero values are vectors of length $$K$$. The dense matrix B should have shape $$M \times N \times K$$. The output is a dense matrix of shape $$L \times N \times K$$.

[22]:

i = torch.tensor([[1, 1, 2],
                  [0, 2, 0]])
val = torch.tensor([[1., 1.], [2., 2.], [3., 3.]])
A = dglsp.spmatrix(i, val, shape=(3, 3))
print("A:")
print(A.to_dense())

X = torch.tensor([[[1., 1.], [1., 2.]],
                  [[1., 3.], [1., 4.]],
                  [[1., 5.], [1., 6.]]])
print("X:")
print(X)

print("A @ X:")
print(A @ X)

A:
tensor([[[0., 0.],
[0., 0.],
[0., 0.]],

[[1., 1.],
[0., 0.],
[2., 2.]],

[[3., 3.],
[0., 0.],
[0., 0.]]])
X:
tensor([[[1., 1.],
[1., 2.]],

[[1., 3.],
[1., 4.]],

[[1., 5.],
[1., 6.]]])
A @ X:
tensor([[[ 0.,  0.],
[ 0.,  0.]],

[[ 3., 11.],
[ 3., 14.]],

[[ 3.,  3.],
[ 3.,  6.]]])


Sampled-Dense-Dense Matrix Multiplication (SDDMM)

sddmm matrix-multiplies two dense matrices X1 and X2, then elementwise-multiplies the result with the sparse matrix A at its non-zero locations. The non-batched form is designed for sparse matrices with scalar values.

$out = (X_1 @ X_2) * A$

For an $$L \times N$$ sparse matrix A, an $$L \times M$$ dense matrix X1, and an $$M \times N$$ dense matrix X2, sddmm(A, X1, X2) is an $$L \times N$$ sparse matrix.

[23]:

i = torch.tensor([[1, 1, 2],
                  [2, 3, 3]])
val = torch.tensor([1., 2., 3.])
A = dglsp.spmatrix(i, val, (3, 4))
print("A:")
print(A.to_dense())

X1 = torch.randn(3, 5)
X2 = torch.randn(5, 4)
print("X1:")
print(X1)
print("X2:")
print(X2)

O = dglsp.sddmm(A, X1, X2)
print("dglsp.sddmm(A, X1, X2):")
print(O.to_dense())

A:
tensor([[0., 0., 0., 0.],
[0., 0., 1., 2.],
[0., 0., 0., 3.]])
X1:
tensor([[ 0.8613,  0.3677, -2.0697, -0.8301,  0.9497],
[ 0.4094,  0.2969,  0.3905, -0.1310,  0.0034],
[-1.8636, -0.4828, -0.8771, -1.1055,  0.4834]])
X2:
tensor([[ 2.4568, -0.1499,  0.4491, -1.0450],
[-1.4964,  0.5278, -2.0461,  0.3066],
[ 0.9970,  1.6177,  1.8326, -1.2754],
[ 1.7020,  2.0766,  1.0314,  0.2037],
[ 0.4924, -0.4662, -0.1545,  0.2356]])
dglsp.sddmm(A, X1, X2):
tensor([[ 0.0000,  0.0000,  0.0000,  0.0000],
[ 0.0000,  0.0000,  0.1564, -1.7214],
[ 0.0000,  0.0000,  0.0000,  8.4200]])
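The sddmm semantics can be checked densely in plain PyTorch: compute the full product X1 @ X2, then keep (and scale) only the positions where A is non-zero. This sketch uses small fixed inputs instead of the random ones above, so the result is reproducible:

```python
import torch

A = torch.tensor([[0., 0., 0., 0.],
                  [0., 0., 1., 2.],
                  [0., 0., 0., 3.]])
X1 = torch.tensor([[1., 0.],
                   [0., 1.],
                   [1., 1.]])
X2 = torch.tensor([[1., 2., 3., 4.],
                   [5., 6., 7., 8.]])

# out = (X1 @ X2) * A: the dense product is "sampled" at A's sparsity.
out = (X1 @ X2) * A
print(out)
```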


This operator also supports batched sampled-dense-dense matrix multiplication. For an $$L \times N$$ sparse matrix A with non-zero vector values of length $$K$$, an $$L \times M \times K$$ dense matrix X1, and an $$M \times N \times K$$ dense matrix X2, sddmm(A, X1, X2) is an $$L \times N \times K$$ sparse matrix.

[24]:

i = torch.tensor([[1, 1, 2],
                  [2, 3, 3]])
val = torch.tensor([[1., 1.], [2., 2.], [3., 3.]])
A = dglsp.spmatrix(i, val, (3, 4))
print("A:")
print(A.to_dense())

X1 = torch.randn(3, 5, 2)
X2 = torch.randn(5, 4, 2)
print("X1:")
print(X1)
print("X2:")
print(X2)

O = dglsp.sddmm(A, X1, X2)
print("dglsp.sddmm(A, X1, X2):")
print(O.to_dense())

A:
tensor([[[0., 0.],
[0., 0.],
[0., 0.],
[0., 0.]],

[[0., 0.],
[0., 0.],
[1., 1.],
[2., 2.]],

[[0., 0.],
[0., 0.],
[0., 0.],
[3., 3.]]])
X1:
tensor([[[ 1.2156,  0.9689],
[ 0.9643, -0.3590],
[ 0.0172, -0.3493],
[-0.3248,  1.0995],
[ 0.2393, -0.2977]],

[[ 0.9998,  0.3992],
[ 1.5220, -0.1461],
[-1.8535,  1.3250],
[ 1.0471, -0.3416],
[ 0.5792,  0.8288]],

[[ 0.3636, -1.7511],
[-0.5430, -0.5601],
[ 1.3816,  1.2848],
[-0.8275, -0.8935],
[ 0.1410, -0.9218]]])
X2:
tensor([[[ 0.8038, -1.2177],
[ 0.0706,  0.0364],
[-0.0499,  1.4782],
[-3.0499, -1.1744]],

[[-0.9049, -0.4949],
[-1.1670,  2.1755],
[ 0.4248, -0.9575],
[-0.3729,  0.4935]],

[[-0.1230,  1.0482],
[-0.3544,  1.3369],
[ 0.5512,  0.1056],
[ 0.0814, -0.5637]],

[[ 0.5783, -0.2099],
[-0.8111,  0.7693],
[-1.7773,  0.7031],
[ 0.6722, -1.6109]],

[[-0.4480, -0.3679],
[-0.9259,  0.1875],
[-1.4558, -0.2550],
[-1.8856,  0.4491]]])
dglsp.sddmm(A, X1, X2):
tensor([[[ 0.0000,  0.0000],
[ 0.0000,  0.0000],
[ 0.0000,  0.0000],
[ 0.0000,  0.0000]],

[[ 0.0000,  0.0000],
[ 0.0000,  0.0000],
[-3.1292,  0.4184],
[-8.3124, -0.7307]],

[[ 0.0000,  0.0000],
[ 0.0000,  0.0000],
[ 0.0000,  0.0000],
[-4.8479,  6.2434]]])


## Non-linear activation functions¶

### Element-wise functions¶

Most activation functions are element-wise and fall into two categories:

Sparse-preserving functions, such as sin(), tanh(), sigmoid(), and relu(). You can apply them directly to the val tensor of the sparse matrix and then recreate a matrix with the same sparsity using val_like().

[25]:

i = torch.tensor([[0, 1, 1, 2],
                  [1, 0, 2, 0]])
val = torch.randn(4)
A = dglsp.spmatrix(i, val)
print(A.to_dense())

print("Apply tanh.")
A_new = dglsp.val_like(A, torch.tanh(A.val))
print(A_new.to_dense())

tensor([[ 0.0000,  0.6126,  0.0000],
[-1.1462,  0.0000,  0.6290],
[ 1.4216,  0.0000,  0.0000]])
Apply tanh.
tensor([[ 0.0000,  0.5460,  0.0000],
[-0.8165,  0.0000,  0.5573],
[ 0.8899,  0.0000,  0.0000]])


Non-sparse-preserving functions, such as exp() and cos(). Convert the sparse matrix to dense first, then apply the function.

[26]:

i = torch.tensor([[0, 1, 1, 2],
                  [1, 0, 2, 0]])
val = torch.randn(4)
A = dglsp.spmatrix(i, val)
print(A.to_dense())

print("Apply exp.")
A_new = A.to_dense().exp()
print(A_new)

tensor([[ 0.0000, -0.2663,  0.0000],
[-0.3200,  0.0000,  0.3663],
[ 0.7997,  0.0000,  0.0000]])
Apply exp.
tensor([[1.0000, 0.7662, 1.0000],
[0.7261, 1.0000, 1.4424],
[2.2249, 1.0000, 1.0000]])


### Softmax¶

Apply row-wise softmax to the nonzero entries of the sparse matrix.

[27]:

i = torch.tensor([[0, 1, 1, 2],
                  [1, 0, 2, 0]])
val = torch.tensor([1., 2., 3., 4.])
A = dglsp.spmatrix(i, val)

print(A.softmax())
print("In dense format:")
print(A.softmax().to_dense())
print("\n")

SparseMatrix(indices=tensor([[0, 1, 1, 2],
[1, 0, 2, 0]]),
values=tensor([1.0000, 0.2689, 0.7311, 1.0000]),
shape=(3, 3), nnz=4)
In dense format:
tensor([[0.0000, 1.0000, 0.0000],
[0.2689, 0.0000, 0.7311],
[1.0000, 0.0000, 0.0000]])
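Row-wise sparse softmax normalizes each row over its stored non-zeros only, which is why rows with a single non-zero become 1.0. A plain-PyTorch sketch of the semantics using a mask to exclude zeros (it assumes no stored value is exactly zero, and is not DGL's implementation):

```python
import torch

dense = torch.tensor([[0., 1., 0.],
                      [2., 0., 3.],
                      [4., 0., 0.]])

# Mask zero entries with -inf so they contribute nothing to softmax.
masked = dense.masked_fill(dense == 0, float('-inf'))
out = torch.softmax(masked, dim=1)
out = out.masked_fill(dense == 0, 0.)  # restore explicit zeros

print(out)
```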



## Exercise¶

Let’s test what you’ve learned.

Given a sparse symmetric adjacency matrix $$A$$, calculate its symmetrically normalized adjacency matrix:

$norm = \hat{D}^{-\frac{1}{2}}\hat{A}\hat{D}^{-\frac{1}{2}}$

where $$\hat{A} = A + I$$, $$I$$ is the identity matrix, and $$\hat{D}$$ is the diagonal node degree matrix of $$\hat{A}$$.

[28]:

i = torch.tensor([[0, 0, 1, 1, 2, 2, 3],
                  [1, 3, 2, 5, 3, 5, 4]])
asym_A = dglsp.spmatrix(i, shape=(6, 6))
# Step 1: create symmetrical adjacency matrix A from asym_A.
# A =

# Step 2: calculate A_hat from A.
# A_hat =

# Step 3: diagonal node degree matrix of A_hat
# D_hat =

# Step 4: calculate the norm from D_hat and A_hat.
# norm =
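If you want to verify your sparse solution, here is a dense-PyTorch sketch of the same computation. It deliberately avoids dgl.sparse, so it does not spoil the sparse-API exercise; compare its result against your norm.to_dense():

```python
import torch

i = torch.tensor([[0, 0, 1, 1, 2, 2, 3],
                  [1, 3, 2, 5, 3, 5, 4]])

# Dense asymmetric adjacency, then symmetrize and add self-loops.
A = torch.zeros(6, 6)
A[i[0], i[1]] = 1.0
A = A + A.T                             # Step 1: symmetrize
A_hat = A + torch.eye(6)                # Step 2: add self-loops
deg = A_hat.sum(1)                      # Step 3: node degrees of A_hat
D_inv_sqrt = torch.diag(deg ** -0.5)
norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # Step 4: normalize

print(norm)
```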