SAGEConv

class dgl.nn.tensorflow.conv.SAGEConv(*args, **kwargs)[source]

Bases: tensorflow.python.keras.engine.base_layer.Layer
GraphSAGE layer from Inductive Representation Learning on Large Graphs
\[
\begin{aligned}
h_{\mathcal{N}(i)}^{(l+1)} &= \mathrm{aggregate}\left(\left\{h_{j}^{(l)}, \forall j \in \mathcal{N}(i)\right\}\right)\\
h_{i}^{(l+1)} &= \sigma\left(W \cdot \mathrm{concat}\left(h_{i}^{(l)}, h_{\mathcal{N}(i)}^{(l+1)}\right)\right)\\
h_{i}^{(l+1)} &= \mathrm{norm}\left(h_{i}^{(l+1)}\right)
\end{aligned}
\]
Parameters

in_feats (int, or pair of ints) – Input feature size; i.e., the number of dimensions of \(h_i^{(l)}\). SAGEConv can be applied on homogeneous graphs and unidirectional bipartite graphs. If the layer applies on a unidirectional bipartite graph, in_feats specifies the input feature size on both the source and destination nodes. If a scalar is given, the source and destination node feature sizes take the same value. If the aggregator type is gcn, the feature size of source and destination nodes is required to be the same.

out_feats (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).

aggregator_type (str) – Aggregator type to use (mean, gcn, pool, lstm); see the sketch after this list for what the mean aggregator computes.

feat_drop (float) – Dropout rate on features. Default: 0.

bias (bool) – If True, adds a learnable bias to the output. Default: True.

norm (callable activation function/layer or None, optional) – If not None, applies normalization to the updated node features.

activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.
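To make the update rule above concrete, here is a minimal plain-TensorFlow sketch of what the mean aggregator computes for a single node (the toy features, the neighbor set {1, 2}, and the weight W are made up for illustration; SAGEConv maintains W internally and runs this over the whole graph):

>>> import tensorflow as tf
>>> h = tf.constant([[1., 0.], [0., 1.], [1., 1.]])  # h^{(l)} for 3 toy nodes, in_feats=2
>>> W = tf.ones((4, 2))  # made-up weight mapping concat(2 + 2) -> out_feats=2
>>> h_neigh = tf.reduce_mean(tf.gather(h, [1, 2]), axis=0)  # aggregate over N(0) = {1, 2}
>>> h_new = tf.sigmoid(tf.matmul(tf.concat([h[0], h_neigh], axis=0)[None, :], W))
>>> h_new.shape
TensorShape([1, 2])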
Examples
>>> import dgl
>>> import numpy as np
>>> import tensorflow as tf
>>> from dgl.nn import SAGEConv
>>>
>>> # Case 1: Homogeneous graph
>>> with tf.device("CPU:0"):
...     g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3]))
...     g = dgl.add_self_loop(g)
...     feat = tf.ones((6, 10))
...     conv = SAGEConv(10, 2, 'pool')
...     res = conv(g, feat)
>>> res
<tf.Tensor: shape=(6, 2), dtype=float32, numpy=
array([[-3.6633523 , -0.90711546],
       [-3.6633523 , -0.90711546],
       [-3.6633523 , -0.90711546],
       [-3.6633523 , -0.90711546],
       [-3.6633523 , -0.90711546],
       [-3.6633523 , -0.90711546]], dtype=float32)>
>>> # Case 2: Unidirectional bipartite graph
>>> with tf.device("CPU:0"):
...     u = [0, 1, 0, 0, 1]
...     v = [0, 1, 2, 3, 2]
...     g = dgl.heterograph({('user', 'plays', 'game'): (u, v)})
...     u_fea = tf.convert_to_tensor(np.random.rand(2, 5).astype(np.float32))
...     v_fea = tf.convert_to_tensor(np.random.rand(4, 10).astype(np.float32))
...     conv = SAGEConv((5, 10), 2, 'mean')
...     res = conv(g, (u_fea, v_fea))
>>> res
<tf.Tensor: shape=(4, 2), dtype=float32, numpy=
array([[-0.59453356, -0.4055441 ],
       [-0.47459763, -0.717764  ],
       [ 0.3221837 , -0.29876417],
       [-0.63356155,  0.09390211]], dtype=float32)>
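The norm and activation arguments accept any callable that maps the updated node features to new features. As a small illustrative configuration (not part of the original example; the lambda and the particular choices here are hypothetical), L2-normalizing the output of a mean-aggregator layer after a ReLU activation:

>>> conv = SAGEConv(10, 2, 'mean',
...                 norm=lambda h: tf.math.l2_normalize(h, axis=-1),
...                 activation=tf.nn.relu)
>>> res = conv(g, feat)  # g, feat as in Case 1 above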