DegreeEncoder

class dgl.nn.pytorch.gt.DegreeEncoder(max_degree, embedding_dim, direction='both')[source]

Bases: Module

Degree Encoder, as introduced in Do Transformers Really Perform Badly for Graph Representation?

This module is a learnable lookup table that maps each node's degree to an embedding vector.

Parameters:
  • max_degree (int) – Upper bound of degrees to be encoded. Each degree is clamped to the range [0, max_degree].

  • embedding_dim (int) – Output dimension of embedding vectors.

  • direction (str, optional) – Direction of the degrees to encode, one of in, out, and both. both encodes both in-degrees and out-degrees and outputs the sum of the two embeddings. Default: both.

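Conceptually, the encoder is a clamped embedding lookup per direction: degrees above max_degree are mapped to max_degree, looked up in a learnable table with max_degree + 1 rows, and, for direction both, the in-degree and out-degree embeddings are summed. Below is a minimal, hypothetical sketch of that idea (the class name SimpleDegreeEncoder and its internals are illustrative assumptions, not DGL's actual implementation):

    import torch
    import torch.nn as nn

    class SimpleDegreeEncoder(nn.Module):
        """Illustrative sketch only; internals are assumptions."""

        def __init__(self, max_degree, embedding_dim, direction='both'):
            super().__init__()
            self.max_degree = max_degree
            self.direction = direction
            # One learnable table per direction; clamping means
            # max_degree + 1 rows suffice.
            self.encoder_in = nn.Embedding(max_degree + 1, embedding_dim)
            self.encoder_out = nn.Embedding(max_degree + 1, embedding_dim)

        def forward(self, degrees):
            degrees = degrees.clamp(min=0, max=self.max_degree)
            if self.direction == 'both':
                # degrees: (2, B, N); embed each direction and sum.
                return self.encoder_in(degrees[0]) + self.encoder_out(degrees[1])
            if self.direction == 'in':
                return self.encoder_in(degrees)
            return self.encoder_out(degrees)
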
Example

>>> import dgl
>>> from dgl.nn import DegreeEncoder
>>> import torch as th
>>> from torch.nn.utils.rnn import pad_sequence
>>> g1 = dgl.graph(([0,0,0,1,1,2,3,3], [1,2,3,0,3,0,0,1]))
>>> g2 = dgl.graph(([0,1], [1,0]))
>>> in_degree = pad_sequence([g1.in_degrees(), g2.in_degrees()], batch_first=True)
>>> out_degree = pad_sequence([g1.out_degrees(), g2.out_degrees()], batch_first=True)
>>> print(in_degree.shape)
torch.Size([2, 4])
>>> degree_encoder = DegreeEncoder(5, 16)
>>> degree_embedding = degree_encoder(th.stack((in_degree, out_degree)))
>>> print(degree_embedding.shape)
torch.Size([2, 4, 16])
forward(degrees)[source]
Parameters:

degrees (Tensor) – If direction is both, the zero-padded in-degrees and out-degrees of the batched graph stacked into a tensor of shape \((2, B, N)\). Otherwise, the zero-padded in-degrees or out-degrees of the batched graph, a tensor of shape \((B, N)\). Here \(B\) is the batch size of the batched graph and \(N\) is the maximum number of nodes.

Returns:

Degree embedding vectors of shape \((B, N, d)\), where \(d\) is embedding_dim.

Return type:

Tensor
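
When a single direction is used, no stacking is needed: pass the \((B, N)\) degree tensor directly. A usage sketch continuing the example above (reusing the in_degree tensor shown there):

>>> encoder_in = DegreeEncoder(5, 16, direction='in')
>>> emb_in = encoder_in(in_degree)
>>> print(emb_in.shape)
torch.Size([2, 4, 16])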