Link Prediction
This tutorial shows how to train a multi-layer GraphSAGE model for link prediction on CoraGraphDataset. The dataset contains 2708 nodes and 10556 edges.
By the end of this tutorial, you will be able to
Train a GNN model for link prediction on the target device with DGL's neighbor sampling components.
Install DGL package
[1]:
# Install required packages.
import os
import torch
os.environ['TORCH'] = torch.__version__
os.environ['DGLBACKEND'] = "pytorch"
# Install the CPU version by default. If you want to install the CUDA version,
# please refer to https://www.dgl.ai/pages/start.html and change the runtime
# type accordingly.
device = torch.device("cpu")
!pip install --pre dgl -f https://data.dgl.ai/wheels-test/repo.html
try:
    import dgl
    import dgl.graphbolt as gb

    installed = True
except ImportError as error:
    installed = False
    print(error)
print("DGL installed!" if installed else "DGL not found!")
Looking in links: https://data.dgl.ai/wheels-test/repo.html
DGL installed!
Loading Dataset
cora is already prepared as BuiltinDataset in GraphBolt.
[2]:
dataset = gb.BuiltinDataset("cora").load()
Downloading datasets/cora.zip from https://data.dgl.ai/dataset/graphbolt/cora.zip...
Extracting file to datasets
Start to preprocess the on-disk dataset.
Finish preprocessing the on-disk dataset.
The dataset consists of a graph, features, and tasks. You can get the training-validation-test sets from the tasks. Seed nodes and the corresponding labels are already stored in each of these sets. This dataset contains two tasks: one for node classification and the other for link prediction. We will use the link prediction task.
[3]:
graph = dataset.graph.to(device)
feature = dataset.feature.to(device)
train_set = dataset.tasks[1].train_set
test_set = dataset.tasks[1].test_set
task_name = dataset.tasks[1].metadata["name"]
print(f"Task: {task_name}.")
Task: link_prediction.
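If you want to double-check which index corresponds to link prediction, you can list the tasks by name. This is a minimal sketch that only relies on the metadata field used above, assuming every task exposes it:

# List the tasks bundled with the dataset and their names.
for i, task in enumerate(dataset.tasks):
    print(i, task.metadata["name"])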
Defining Neighbor Sampler and Data Loader in DGL
Unlike the full-graph link prediction tutorial, a common practice for training a GNN on large graphs is to iterate over the edges in minibatches, since computing the probability of all edges is usually infeasible. For each minibatch of edges, you compute the output representations of their incident nodes using neighbor sampling and the GNN, in the same fashion introduced in the node classification tutorial.
To perform link prediction, you need to specify a negative sampler. DGL provides built-in negative samplers such as dgl.graphbolt.UniformNegativeSampler. This tutorial uniformly draws 5 negative examples per positive example.
Except for the negative sampler, the rest of the code is identical to the node classification tutorial.
[4]:
from functools import partial

def create_train_dataloader():
    # Sample minibatches of 256 seed edges from the training set.
    datapipe = gb.ItemSampler(train_set, batch_size=256, shuffle=True)
    datapipe = datapipe.copy_to(device)
    # Draw 5 negative examples per positive edge.
    datapipe = datapipe.sample_uniform_negative(graph, 5)
    # Sample a 2-layer neighborhood with fanout 5 per layer.
    datapipe = datapipe.sample_neighbor(graph, [5, 5])
    # Exclude seed edges (and their reverses) from the sampled subgraphs.
    datapipe = datapipe.transform(
        partial(gb.exclude_seed_edges, include_reverse_edges=True)
    )
    datapipe = datapipe.fetch_feature(feature, node_feature_keys=["feat"])
    return gb.DataLoader(datapipe)
You can peek at one minibatch from the training data loader and see what it gives you.
[5]:
data = next(iter(create_train_dataloader()))
print(f"MiniBatch: {data}")
MiniBatch: MiniBatch(seeds=None,
seed_nodes=None,
sampled_subgraphs=[SampledSubgraphImpl(sampled_csc=CSCFormatBase(indptr=tensor([ 0, 1, 6, ..., 6927, 6929, 6931], dtype=torch.int32),
indices=tensor([ 178, 1101, 51, ..., 1268, 2216, 1268], dtype=torch.int32),
),
original_row_node_ids=tensor([1847, 109, 1927, ..., 121, 2252, 1252], dtype=torch.int32),
original_edge_ids=None,
original_column_node_ids=tensor([1847, 109, 1927, ..., 1356, 404, 1170], dtype=torch.int32),
),
SampledSubgraphImpl(sampled_csc=CSCFormatBase(indptr=tensor([ 0, 1, 6, ..., 3794, 3795, 3797], dtype=torch.int32),
indices=tensor([ 178, 1172, 1269, ..., 2215, 2216, 2217], dtype=torch.int32),
),
original_row_node_ids=tensor([1847, 109, 1927, ..., 1356, 404, 1170], dtype=torch.int32),
original_edge_ids=None,
original_column_node_ids=tensor([1847, 109, 1927, ..., 1025, 1613, 1476], dtype=torch.int32),
)],
positive_node_pairs=(tensor([ 0, 1, 0, 2, 3, 4, 5, 916, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 1035, 28, 29, 1084, 30, 31,
32, 33, 34, 901, 35, 36, 37, 38, 39, 40, 41, 42,
43, 44, 45, 41, 46, 47, 48, 49, 50, 51, 52, 53,
54, 43, 55, 1048, 981, 56, 1084, 57, 58, 59, 60, 1113,
1026, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71,
1048, 957, 72, 73, 74, 75, 76, 77, 78, 1026, 79, 80,
81, 80, 82, 83, 84, 1053, 22, 85, 86, 87, 88, 89,
90, 91, 92, 93, 94, 21, 95, 81, 96, 1053, 97, 98,
99, 100, 101, 874, 102, 103, 104, 105, 990, 106, 107, 108,
109, 110, 31, 111, 995, 112, 113, 114, 115, 73, 116, 117,
118, 119, 120, 121, 122, 121, 892, 123, 58, 1057, 1, 1189,
124, 125, 126, 127, 128, 14, 129, 130, 131, 132, 32, 133,
134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 873,
145, 146, 147, 148, 149, 1, 150, 151, 152, 146, 153, 154,
155, 156, 315, 157, 138, 158, 899, 159, 160, 1175, 161, 162,
1166, 163, 876, 946, 164, 165, 166, 167, 168, 138, 169, 326,
170, 171, 120, 329, 172, 1162, 173, 885, 1051, 1099, 174, 175,
326, 176, 177, 178, 1129, 313, 1114, 179, 1011, 180, 181, 156,
600, 1178, 182, 87, 183, 1114, 41, 184, 185, 182, 186, 187,
188, 156, 189, 190], dtype=torch.int32),
tensor([ 191, 40, 14, 192, 955, 95, 316, 151, 193, 194, 195, 971,
196, 197, 198, 199, 200, 1210, 201, 202, 203, 328, 204, 205,
206, 207, 208, 209, 210, 888, 211, 212, 73, 1249, 213, 50,
214, 215, 216, 921, 40, 217, 608, 218, 219, 220, 187, 129,
221, 222, 223, 224, 225, 1179, 921, 226, 1168, 227, 228, 41,
1143, 229, 43, 230, 231, 232, 337, 233, 234, 68, 235, 236,
237, 327, 238, 1114, 239, 240, 241, 242, 243, 244, 245, 1025,
1007, 246, 247, 248, 249, 250, 610, 80, 251, 1251, 1155, 186,
231, 109, 1214, 252, 174, 253, 1078, 995, 254, 255, 256, 257,
151, 138, 645, 258, 259, 260, 6, 982, 261, 262, 263, 255,
264, 345, 265, 266, 267, 268, 178, 269, 270, 271, 272, 273,
77, 352, 177, 274, 1200, 275, 276, 277, 278, 29, 602, 1114,
279, 280, 281, 282, 283, 32, 284, 285, 286, 287, 224, 288,
116, 1095, 392, 289, 1093, 140, 42, 290, 291, 292, 293, 143,
972, 294, 295, 873, 636, 296, 297, 298, 299, 300, 677, 322,
301, 302, 303, 52, 304, 305, 43, 306, 1030, 307, 351, 308,
309, 310, 311, 312, 1111, 122, 313, 314, 315, 316, 317, 318,
319, 320, 321, 322, 1152, 52, 323, 1092, 32, 40, 92, 324,
325, 326, 327, 979, 328, 329, 1083, 330, 331, 332, 333, 334,
335, 336, 337, 1171, 338, 339, 104, 340, 341, 342, 343, 999,
1140, 344, 345, 346, 347, 117, 348, 349, 350, 972, 80, 106,
351, 352, 138, 353], dtype=torch.int32)),
node_pairs_with_labels=((tensor([ 0, 1, 0, ..., 190, 190, 190], dtype=torch.int32), tensor([ 191, 40, 14, ..., 1266, 1267, 1268], dtype=torch.int32)),
tensor([1., 1., 1., ..., 0., 0., 0.])),
node_pairs=(tensor([1847, 109, 1847, 1927, 2507, 128, 157, 628, 1415, 517, 1987, 2182,
2450, 942, 2459, 2250, 88, 1061, 399, 1426, 2198, 388, 2619, 1103,
1401, 1708, 2115, 1050, 1273, 387, 1777, 2706, 1695, 102, 1546, 2034,
1358, 403, 211, 928, 699, 210, 1728, 1089, 2016, 2045, 306, 2630,
1986, 1087, 1549, 306, 2670, 118, 1996, 839, 2038, 519, 733, 656,
2140, 1986, 2007, 2423, 606, 2362, 102, 620, 39, 279, 1598, 2117,
73, 1645, 1264, 215, 1963, 787, 2316, 2153, 270, 2144, 1054, 2673,
2423, 1226, 1624, 2133, 252, 61, 40, 1536, 973, 73, 1550, 186,
2, 186, 2226, 513, 382, 1416, 1401, 1387, 477, 736, 1889, 2127,
1427, 2078, 790, 2644, 2516, 1103, 392, 2, 2662, 1416, 99, 2197,
436, 444, 1655, 1912, 94, 263, 1846, 970, 661, 1623, 203, 2230,
2702, 2518, 2034, 1368, 1190, 2109, 2376, 753, 644, 2133, 1229, 205,
2327, 377, 1810, 111, 665, 111, 409, 1957, 39, 507, 109, 298,
1506, 876, 1021, 1586, 2080, 88, 2699, 1932, 877, 1714, 1358, 849,
1102, 119, 1653, 1850, 1776, 2168, 498, 487, 1795, 2351, 1731, 32,
2156, 1481, 2147, 2035, 2001, 109, 1999, 1725, 2329, 1481, 2021, 1479,
1884, 1257, 1602, 105, 1776, 2122, 2620, 2564, 2072, 316, 590, 985,
921, 2059, 823, 490, 739, 1192, 2390, 701, 1752, 1776, 171, 1950,
1837, 875, 1810, 1616, 645, 447, 1374, 2359, 1838, 286, 1203, 1262,
1950, 2614, 417, 1013, 1542, 1925, 1701, 1703, 120, 336, 1395, 1257,
2553, 2429, 584, 736, 175, 1701, 306, 536, 1936, 584, 2580, 1771,
1540, 1257, 1499, 976], dtype=torch.int32),
tensor([ 415, 2045, 88, 1152, 1250, 392, 598, 1725, 370, 952, 2004, 1139,
1284, 1924, 760, 1260, 1882, 1196, 544, 480, 2055, 1899, 2618, 1760,
761, 2313, 2519, 1320, 2124, 231, 2143, 165, 2133, 1561, 1804, 2038,
1492, 464, 1394, 327, 2045, 1614, 961, 1088, 2010, 2046, 1771, 2699,
566, 89, 2286, 1779, 1194, 1538, 327, 485, 293, 1574, 678, 306,
927, 45, 1986, 304, 1666, 442, 1871, 923, 1965, 270, 1090, 1867,
1745, 795, 443, 1701, 1022, 325, 1343, 1224, 838, 137, 990, 2671,
2151, 807, 1788, 2131, 711, 2162, 1364, 186, 277, 1189, 2083, 2580,
1666, 2702, 2224, 657, 1203, 1926, 1153, 1190, 930, 1388, 2039, 1119,
1725, 1776, 1548, 1885, 694, 1888, 1415, 1454, 12, 1922, 123, 1388,
1978, 623, 2136, 1704, 195, 364, 1013, 16, 1879, 153, 1869, 938,
1536, 996, 417, 1911, 1184, 604, 1202, 1880, 41, 1695, 2442, 1701,
1778, 2291, 1819, 1762, 397, 1358, 407, 1441, 1522, 253, 1779, 607,
1229, 834, 2309, 173, 655, 498, 2630, 402, 1177, 1169, 1154, 2351,
2464, 646, 1585, 32, 1572, 2169, 1652, 1772, 1796, 577, 1717, 1973,
651, 2332, 2486, 733, 574, 1785, 1986, 1712, 2330, 2335, 1628, 873,
854, 1135, 6, 2651, 1735, 665, 1925, 1380, 1602, 598, 1954, 848,
2293, 718, 2221, 1973, 1958, 733, 489, 2062, 1358, 2045, 790, 310,
1330, 1950, 795, 1402, 1899, 1616, 748, 687, 95, 33, 1410, 564,
1605, 494, 1871, 1625, 338, 1042, 1846, 463, 514, 1944, 2266, 2679,
1470, 250, 623, 1006, 955, 205, 103, 1894, 1413, 2464, 186, 1623,
1628, 996, 1776, 1431], dtype=torch.int32)),
node_features={'feat': tensor([[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0000, 0.0000, 0.0400, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]])},
negative_srcs=None,
negative_node_pairs=(tensor([[ 0, 0, 0, 0, 0],
[ 1, 1, 1, 1, 1],
[ 0, 0, 0, 0, 0],
...,
[156, 156, 156, 156, 156],
[189, 189, 189, 189, 189],
[190, 190, 190, 190, 190]], dtype=torch.int32),
tensor([[ 318, 40, 354, 355, 356],
[ 357, 358, 331, 359, 901],
[ 360, 361, 956, 362, 363],
...,
[ 147, 1258, 1259, 1260, 1261],
[ 53, 912, 1262, 1263, 1264],
[ 138, 1265, 1266, 1267, 1268]], dtype=torch.int32)),
negative_dsts=tensor([[ 848, 2045, 2248, 2149, 2379],
[2525, 273, 95, 1041, 928],
[ 648, 268, 1738, 470, 148],
...,
[2147, 2179, 29, 1211, 2522],
[ 656, 940, 1729, 1629, 735],
[1776, 2481, 1025, 1613, 1476]], dtype=torch.int32),
labels=None,
input_nodes=tensor([1847, 109, 1927, ..., 121, 2252, 1252], dtype=torch.int32),
indexes=None,
edge_features=[{},
{}],
compacted_seeds=None,
compacted_node_pairs=(tensor([ 0, 1, 0, 2, 3, 4, 5, 916, 6, 7, 8, 9,
10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 1035, 28, 29, 1084, 30, 31,
32, 33, 34, 901, 35, 36, 37, 38, 39, 40, 41, 42,
43, 44, 45, 41, 46, 47, 48, 49, 50, 51, 52, 53,
54, 43, 55, 1048, 981, 56, 1084, 57, 58, 59, 60, 1113,
1026, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71,
1048, 957, 72, 73, 74, 75, 76, 77, 78, 1026, 79, 80,
81, 80, 82, 83, 84, 1053, 22, 85, 86, 87, 88, 89,
90, 91, 92, 93, 94, 21, 95, 81, 96, 1053, 97, 98,
99, 100, 101, 874, 102, 103, 104, 105, 990, 106, 107, 108,
109, 110, 31, 111, 995, 112, 113, 114, 115, 73, 116, 117,
118, 119, 120, 121, 122, 121, 892, 123, 58, 1057, 1, 1189,
124, 125, 126, 127, 128, 14, 129, 130, 131, 132, 32, 133,
134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 873,
145, 146, 147, 148, 149, 1, 150, 151, 152, 146, 153, 154,
155, 156, 315, 157, 138, 158, 899, 159, 160, 1175, 161, 162,
1166, 163, 876, 946, 164, 165, 166, 167, 168, 138, 169, 326,
170, 171, 120, 329, 172, 1162, 173, 885, 1051, 1099, 174, 175,
326, 176, 177, 178, 1129, 313, 1114, 179, 1011, 180, 181, 156,
600, 1178, 182, 87, 183, 1114, 41, 184, 185, 182, 186, 187,
188, 156, 189, 190], dtype=torch.int32),
tensor([ 191, 40, 14, 192, 955, 95, 316, 151, 193, 194, 195, 971,
196, 197, 198, 199, 200, 1210, 201, 202, 203, 328, 204, 205,
206, 207, 208, 209, 210, 888, 211, 212, 73, 1249, 213, 50,
214, 215, 216, 921, 40, 217, 608, 218, 219, 220, 187, 129,
221, 222, 223, 224, 225, 1179, 921, 226, 1168, 227, 228, 41,
1143, 229, 43, 230, 231, 232, 337, 233, 234, 68, 235, 236,
237, 327, 238, 1114, 239, 240, 241, 242, 243, 244, 245, 1025,
1007, 246, 247, 248, 249, 250, 610, 80, 251, 1251, 1155, 186,
231, 109, 1214, 252, 174, 253, 1078, 995, 254, 255, 256, 257,
151, 138, 645, 258, 259, 260, 6, 982, 261, 262, 263, 255,
264, 345, 265, 266, 267, 268, 178, 269, 270, 271, 272, 273,
77, 352, 177, 274, 1200, 275, 276, 277, 278, 29, 602, 1114,
279, 280, 281, 282, 283, 32, 284, 285, 286, 287, 224, 288,
116, 1095, 392, 289, 1093, 140, 42, 290, 291, 292, 293, 143,
972, 294, 295, 873, 636, 296, 297, 298, 299, 300, 677, 322,
301, 302, 303, 52, 304, 305, 43, 306, 1030, 307, 351, 308,
309, 310, 311, 312, 1111, 122, 313, 314, 315, 316, 317, 318,
319, 320, 321, 322, 1152, 52, 323, 1092, 32, 40, 92, 324,
325, 326, 327, 979, 328, 329, 1083, 330, 331, 332, 333, 334,
335, 336, 337, 1171, 338, 339, 104, 340, 341, 342, 343, 999,
1140, 344, 345, 346, 347, 117, 348, 349, 350, 972, 80, 106,
351, 352, 138, 353], dtype=torch.int32)),
compacted_negative_srcs=None,
compacted_negative_dsts=tensor([[ 318, 40, 354, 355, 356],
[ 357, 358, 331, 359, 901],
[ 360, 361, 956, 362, 363],
...,
[ 147, 1258, 1259, 1260, 1261],
[ 53, 912, 1262, 1263, 1264],
[ 138, 1265, 1266, 1267, 1268]], dtype=torch.int32),
blocks=[Block(num_src_nodes=2503, num_dst_nodes=2218, num_edges=6931),
Block(num_src_nodes=2218, num_dst_nodes=1269, num_edges=3797)],
)
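The fields used during training are the compacted node pairs with their labels, the sampled node features, and the message-passing blocks. A minimal sketch of pulling them out of the minibatch printed above:

# Positive and negative pairs (compacted IDs) with their 1/0 labels.
compacted_pairs, labels = data.node_pairs_with_labels
# Input features of all sampled nodes, aligned with data.input_nodes.
node_feature = data.node_features["feat"]
# One message flow graph (block) per GNN layer.
blocks = data.blocks
print(labels.shape, node_feature.shape, len(blocks))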
Defining Model for Node Representation
Let's consider training a 2-layer GraphSAGE with neighbor sampling. The model can be written as follows:
[6]:
import dgl.nn as dglnn
import torch.nn as nn
import torch.nn.functional as F


class SAGE(nn.Module):
    def __init__(self, in_size, hidden_size):
        super().__init__()
        self.layers = nn.ModuleList()
        self.layers.append(dglnn.SAGEConv(in_size, hidden_size, "mean"))
        self.layers.append(dglnn.SAGEConv(hidden_size, hidden_size, "mean"))
        self.hidden_size = hidden_size
        # MLP that scores a node pair from the elementwise product of
        # the two endpoint embeddings.
        self.predictor = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, blocks, x):
        hidden_x = x
        for layer_idx, (layer, block) in enumerate(zip(self.layers, blocks)):
            hidden_x = layer(block, hidden_x)
            is_last_layer = layer_idx == len(self.layers) - 1
            if not is_last_layer:
                hidden_x = F.relu(hidden_x)
        return hidden_x
Defining Training Loop
The following initializes the model and defines the optimizer.
[7]:
in_size = feature.size("node", None, "feat")[0]
model = SAGE(in_size, 128).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
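Before writing the full loop, note how a single candidate edge is scored: the predictor takes the elementwise product of the two endpoint embeddings and outputs one logit. A minimal sketch with illustrative tensors (emb, u, and v are made-up names, not part of the dataset):

# Hypothetical example: score one candidate edge (u, v) from node embeddings.
emb = torch.randn(10, 128)                  # pretend embeddings for 10 nodes
u, v = torch.tensor([3]), torch.tensor([7])
logit = model.predictor(emb[u] * emb[v]).squeeze()
prob = torch.sigmoid(logit)                 # probability that edge (u, v) exists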
The following is the training loop for link prediction and evaluation.
[8]:
from tqdm.auto import tqdm

for epoch in range(3):
    model.train()
    total_loss = 0
    for step, data in tqdm(enumerate(create_train_dataloader())):
        # Get node pairs with labels for loss calculation.
        compacted_pairs, labels = data.node_pairs_with_labels
        node_feature = data.node_features["feat"]
        # Convert sampled subgraphs to DGL blocks.
        blocks = data.blocks
        # Get the embeddings of the input nodes.
        y = model(blocks, node_feature)
        logits = model.predictor(
            y[compacted_pairs[0]] * y[compacted_pairs[1]]
        ).squeeze()
        # Compute loss.
        loss = F.binary_cross_entropy_with_logits(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    print(f"Epoch {epoch:03d} | Loss {total_loss / (step + 1):.3f}")
Epoch 000 | Loss 0.554
Epoch 001 | Loss 0.449
Epoch 002 | Loss 0.445
Evaluating Performance with Link Prediction
[9]:
model.eval()

datapipe = gb.ItemSampler(test_set, batch_size=256, shuffle=False)
datapipe = datapipe.copy_to(device)
# Since we need to use all neighborhoods for evaluation, we set the fanout
# to -1.
datapipe = datapipe.sample_neighbor(graph, [-1, -1])
datapipe = datapipe.fetch_feature(feature, node_feature_keys=["feat"])
eval_dataloader = gb.DataLoader(datapipe, num_workers=0)

logits = []
labels = []
for step, data in tqdm(enumerate(eval_dataloader)):
    # Get node pairs with labels for loss calculation.
    compacted_pairs, label = data.node_pairs_with_labels

    # The features of sampled nodes.
    x = data.node_features["feat"]

    # Forward.
    y = model(data.blocks, x)
    logit = (
        model.predictor(y[compacted_pairs[0]] * y[compacted_pairs[1]])
        .squeeze()
        .detach()
    )
    logits.append(logit)
    labels.append(label)

logits = torch.cat(logits, dim=0)
labels = torch.cat(labels, dim=0)

# Compute the AUROC score.
from sklearn.metrics import roc_auc_score

auc = roc_auc_score(labels.cpu(), logits.cpu())
print("Link Prediction AUC:", auc)
Link Prediction AUC: 0.6889198355832079
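AUROC is threshold-free; as an optional sanity check you can also threshold the logits at 0 to get binary predictions and a plain accuracy, reusing the logits and labels tensors from above:

# Optional: binary accuracy at a decision threshold of 0 (sigmoid(0) = 0.5).
preds = (logits > 0).float()
accuracy = (preds == labels).float().mean().item()
print(f"Link Prediction accuracy: {accuracy:.3f}")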
Conclusion
In this tutorial, you have learned how to train a multi-layer GraphSAGE for link prediction with neighbor sampling.