DenseGraphConv

class dgl.nn.mxnet.conv.DenseGraphConv(in_feats, out_feats, norm='both', bias=True, activation=None)[source]

Bases: mxnet.gluon.block.Block

Graph Convolutional layer from Semi-Supervised Classification with Graph Convolutional Networks

We recommend using this module when applying graph convolution on dense graphs.

Parameters
  • in_feats (int) – Input feature size; i.e., the number of dimensions of \(h_j^{(l)}\).

  • out_feats (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).

  • norm (str, optional) – How to apply the normalizer. If ‘right’, divide the aggregated messages by each node’s in-degree, which is equivalent to averaging the received messages. If ‘none’, no normalization is applied. Default is ‘both’, where the \(c_{ij}\) from the paper is applied.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.

  • activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.

Notes

Zero in-degree nodes will lead to all-zero output. A common practice to avoid this is to add a self-loop for each node in the graph, which can be achieved by setting the diagonal of the adjacency matrix to be 1.

See also

GraphConv
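
Example

A minimal sketch of applying the layer to a small homogeneous graph; the adjacency and feature values below are illustrative, and the diagonal of adj is set to 1 to add self-loops as suggested in the Notes.

>>> import mxnet as mx
>>> from dgl.nn.mxnet.conv import DenseGraphConv
>>> # 4-node graph given as a dense adjacency matrix; rows are destination nodes
>>> adj = mx.nd.array([[0., 1., 0., 0.],
...                    [1., 0., 1., 1.],
...                    [0., 1., 0., 1.],
...                    [0., 1., 1., 0.]])
>>> adj = adj + mx.nd.eye(4)  # add self-loops to avoid all-zero output rows
>>> feat = mx.nd.random.uniform(shape=(4, 5))  # 5-dimensional input features
>>> conv = DenseGraphConv(5, 2)
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(adj, feat)
>>> res.shape
(4, 2)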

forward(adj, feat)[source]

Compute (Dense) Graph Convolution layer.

Parameters
  • adj (mxnet.NDArray) – The adjacency matrix of the graph to apply Graph Convolution on. When applied to a unidirectional bipartite graph, adj should be of shape \((N_{out}, N_{in})\); when applied to a homogeneous graph, adj should be of shape \((N, N)\). In both cases, a row represents a destination node while a column represents a source node.

  • feat (mxnet.NDArray) – The input feature of shape \((N, D_{in})\), where \(D_{in}\) is the size of the input feature.

Returns

The output feature of shape \((N, D_{out})\), where \(D_{out}\) is the size of the output feature.

Return type

mxnet.NDArray
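
When the layer is applied to a unidirectional bipartite graph, adj has shape \((N_{out}, N_{in})\) and feat holds the source-node features. The sketch below assumes 2 destination nodes, 3 source nodes, and illustrative values.

>>> # adjacency of shape (N_out, N_in): rows are destinations, columns are sources
>>> adj = mx.nd.array([[1., 0., 1.],
...                    [0., 1., 1.]])
>>> feat = mx.nd.random.uniform(shape=(3, 5))  # features of the 3 source nodes
>>> conv = DenseGraphConv(5, 2, norm='right')  # average the received messages
>>> conv.initialize(ctx=mx.cpu(0))
>>> out = conv(adj, feat)
>>> out.shape
(2, 2)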