dgl.transforms
Transforms for graph structures and features
An abstract class for writing transforms.
Create a transform composed of multiple transforms in sequence.
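For example, several transforms can be chained, and a custom transform can be written by subclassing the abstract base. A minimal sketch, assuming the BaseTransform, Compose, RemoveSelfLoop, and AddSelfLoop class names in dgl.transforms and that a subclass only needs to implement __call__:

>>> import dgl
>>> from dgl import transforms as T
>>> class AddDegreeFeat(T.BaseTransform):
...     # A custom transform: implement __call__ on top of the abstract base.
...     def __call__(self, g):
...         g.ndata['deg'] = g.in_degrees().float()
...         return g
>>> # Compose applies the transforms in sequence, left to right.
>>> transform = T.Compose([T.RemoveSelfLoop(), T.AddSelfLoop(), AddDegreeFeat()])
>>> g = transform(dgl.graph(([0, 1, 2], [1, 2, 0])))
>>> g.num_edges(), g.ndata['deg'].shape
(6, torch.Size([3]))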
Add self-loops for each node in the graph and return a new graph.
Remove self-loops for each node in the graph and return a new graph.
Add a reverse edge (i, j) for each edge (j, i) in the input graph and return a new graph.
Convert a graph to a simple graph without parallel edges and return a new graph.
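A minimal sketch of these structural edits, assuming the ToSimple, AddReverse, and AddSelfLoop class names with their default constructors:

>>> import dgl
>>> from dgl import transforms as T
>>> g = dgl.graph(([0, 0, 1], [1, 1, 2]))  # 3 nodes, with a duplicated 0->1 edge
>>> g = T.ToSimple()(g)                    # merge parallel edges
>>> g.num_edges()
2
>>> g = T.AddReverse()(g)                  # add (i, j) for every (j, i)
>>> g.num_edges()
4
>>> g = T.AddSelfLoop()(g)                 # one self-loop per node
>>> g.num_edges()
7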
Return the line graph of the input graph.
Return the graph whose edges connect the k-hop neighbors of the original graph.
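A sketch of the line-graph and k-hop transforms, assuming the LineGraph and KHopGraph class names:

>>> import dgl
>>> from dgl import transforms as T
>>> g = dgl.graph(([0, 1, 2], [1, 2, 3]))  # a directed path 0->1->2->3
>>> lg = T.LineGraph()(g)                  # nodes of lg are the edges of g
>>> lg.num_nodes(), lg.num_edges()
(3, 2)
>>> g2 = T.KHopGraph(2)(g)                 # connect 2-hop neighbors (0->2, 1->3)
>>> g2.num_edges()
2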
Add new edges to an input graph based on given metapaths, as described in Heterogeneous Graph Attention Network.
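A sketch of metapath-based edge addition on a heterogeneous graph, assuming the AddMetaPaths class name and that the metapath dictionary maps a new relation name to a list of canonical edge types:

>>> import dgl
>>> from dgl import transforms as T
>>> hg = dgl.heterograph({
...     ('person', 'author', 'paper'): ([0, 0, 1], [1, 2, 2]),
...     ('paper', 'accepted', 'venue'): ([1], [0]),
... })
>>> # Connect a person to a venue whenever one of their papers was accepted there.
>>> transform = T.AddMetaPaths({'publishes': [('person', 'author', 'paper'),
...                                           ('paper', 'accepted', 'venue')]})
>>> hg = transform(hg)
>>> ('person', 'publishes', 'venue') in hg.canonical_etypes
True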
Apply symmetric adjacency normalization to an input graph and save the resulting edge weights, as described in Semi-Supervised Classification with Graph Convolutional Networks.
Apply personalized PageRank (PPR) to an input graph for diffusion, as introduced in The PageRank Citation Ranking: Bringing Order to the Web.
Apply the heat kernel to an input graph for diffusion, as introduced in Diffusion Kernels on Graphs and Other Discrete Structures.
Apply graph diffusion convolution (GDC) to an input graph, as introduced in Diffusion Improves Graph Learning.
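A sketch of the normalization and diffusion transforms, assuming the GCNNorm, PPR, HeatKernel, and GDC class names; the constructor arguments shown (alpha, t, the GDC coefficient list, eps) and the default edge-weight key 'w' are assumptions about the API:

>>> import dgl
>>> from dgl import transforms as T
>>> g = dgl.graph(([0, 1, 2, 3], [1, 2, 3, 0]))       # a directed 4-cycle
>>> g = T.GCNNorm()(g)                                # D^-1/2 A D^-1/2 normalization
>>> 'w' in g.edata                                    # normalized weights stored as edge data
True
>>> g2 = T.PPR(alpha=0.15)(dgl.rand_graph(20, 40))    # personalized PageRank diffusion
>>> g3 = T.HeatKernel(t=2.0)(dgl.rand_graph(20, 40))  # heat-kernel diffusion
>>> g4 = T.GDC([0.3, 0.2, 0.1], eps=0.01)(dgl.rand_graph(20, 40))  # GDC with given coefficients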
Randomly shuffle the nodes.
Randomly drop nodes, as described in Graph Contrastive Learning with Augmentations.
Randomly drop edges, as described in DropEdge: Towards Deep Graph Convolutional Networks on Node Classification and Graph Contrastive Learning with Augmentations.
Randomly add edges, as described in Graph Contrastive Learning with Augmentations.
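A sketch of the random augmentations, assuming the NodeShuffle, DropNode, DropEdge, and AddEdge class names; these are typically used to create augmented views for contrastive learning:

>>> import dgl
>>> from dgl import transforms as T
>>> g = dgl.rand_graph(100, 400)
>>> g1 = T.NodeShuffle()(g)       # randomly permute the node order
>>> g2 = T.DropNode(p=0.1)(g)     # drop roughly 10% of the nodes
>>> g3 = T.DropEdge(p=0.2)(g)     # drop roughly 20% of the edges
>>> g4 = T.AddEdge(ratio=0.2)(g)  # add 20% more random edges
>>> g4.num_edges()
480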
Random Walk Positional Encoding, as introduced in Graph Neural Networks with Learnable Structural and Positional Representations.
Laplacian Positional Encoding, as introduced in Benchmarking Graph Neural Networks.
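A sketch of attaching positional encodings as node features, assuming the RandomWalkPE and LaplacianPE class names and their k and feat_name arguments:

>>> import dgl
>>> from dgl import transforms as T
>>> g = T.AddReverse()(dgl.graph(([0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 0])))  # bidirected 6-cycle
>>> g = T.RandomWalkPE(k=4, feat_name='rw_pe')(g)    # landing probabilities of 1..4-step walks
>>> g.ndata['rw_pe'].shape
torch.Size([6, 4])
>>> g = T.LaplacianPE(k=3, feat_name='lap_pe')(g)    # 3 smallest non-trivial Laplacian eigenvectors
>>> g.ndata['lap_pe'].shape
torch.Size([6, 3])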
Randomly mask columns of the node and edge feature tensors, as described in Graph Contrastive Learning with Augmentations.
Row-normalize the given node and edge features.
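A sketch of the feature-level transforms, assuming the FeatMask and RowFeatNormalizer class names and a node_feat_names argument listing which features to touch (the argument name is an assumption):

>>> import dgl
>>> import torch
>>> from dgl import transforms as T
>>> g = dgl.rand_graph(10, 30)
>>> g.ndata['feat'] = torch.rand(10, 16)
>>> g = T.FeatMask(p=0.3, node_feat_names=['feat'])(g)    # zero out ~30% of the feature columns
>>> g = T.RowFeatNormalizer(node_feat_names=['feat'])(g)  # rescale each row of 'feat'
>>> g.ndata['feat'].shape
torch.Size([10, 16])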
The diffusion operator from SIGN: Scalable Inception Graph Neural Networks.
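A sketch of the SIGN diffusion operator, assuming the SIGNDiffusion class name and k, in_feat_name, and out_feat_name arguments; it precomputes k diffused copies of an input node feature so a SIGN-style model can consume them without message passing at training time:

>>> import dgl
>>> import torch
>>> from dgl import transforms as T
>>> g = dgl.rand_graph(10, 30)
>>> g.ndata['feat'] = torch.rand(10, 16)
>>> # The i-th diffused copy is expected under a node-data key derived from out_feat_name.
>>> g = T.SIGNDiffusion(k=3, in_feat_name='feat', out_feat_name='out_feat')(g)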
Transform the original graph into its heterogeneous Levi graph by converting edges to intermediate nodes; only homogeneous directed graphs are supported.
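A sketch of the Levi-graph conversion, assuming the ToLevi class name; every edge of the homogeneous input becomes an intermediate node of the heterogeneous output:

>>> import dgl
>>> from dgl import transforms as T
>>> g = dgl.graph(([0, 1, 2], [1, 2, 3]))  # 4 nodes, 3 edges
>>> lg = T.ToLevi()(g)
>>> lg.num_nodes()                         # original nodes plus one node per original edge
7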
SVD-based Positional Encoding, as introduced in Global Self-Attention as a Replacement for Graph Convolution.
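A sketch of the SVD positional encoding, assuming the SVDPE class name and its k and feat_name arguments; the encoding is built from the top-k singular vectors of the adjacency matrix:

>>> import dgl
>>> from dgl import transforms as T
>>> g = dgl.rand_graph(8, 24)
>>> g = T.SVDPE(k=3, feat_name='svd_pe')(g)
>>> 'svd_pe' in g.ndata
True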