dgl.distributed.edge_split

dgl.distributed.edge_split(edges, partition_book=None, etype='_E', rank=None, force_even=True, edge_trainer_ids=None)[source]

Split edges and return a subset for the local rank.

This function splits the input edges based on the partition book and returns the subset of edges assigned to the local rank. It is typically used to divide the training workload among processes in distributed training.

The input edges can be stored as a boolean mask vector whose length equals the number of edges in the graph; a value of 1 (True) at a position indicates that the corresponding edge is part of the input set.
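For illustration, a mask that selects edges 0, 2, and 3 of a five-edge graph could be built as follows (a minimal sketch using a local PyTorch tensor; in practice the mask is often a DistTensor or a stored edge feature):

>>> import torch
>>> # Boolean mask over 5 edges; edges 0, 2 and 3 are in the input set.
>>> mask = torch.tensor([True, False, True, True, False])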

There are two strategies to split the edges. If force_even is False, the edges are split to maximize data locality: each process gets all the edges that belong to its own partition. If force_even is True (the default), the edges are split evenly so that each process gets almost the same number of edges.

When force_even is True, data locality is still largely preserved if the graph was partitioned with METIS and the node/edge IDs were shuffled: in that case, the majority of the edges returned to a process are the ones that belong to its partition. If the node/edge IDs are not shuffled, data locality is not guaranteed.
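The two strategies can be sketched as follows, assuming mask is a boolean edge mask and gpb is the graph's GraphPartitionBook (both names are placeholders):

>>> # Even split (the default): each process gets roughly the same number of edges.
>>> eids = dgl.distributed.edge_split(mask, gpb)
>>> # Locality-based split: each process gets the edges of its own partition.
>>> eids = dgl.distributed.edge_split(mask, gpb, force_even=False)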

Parameters:
  • edges (1D tensor or DistTensor) – A boolean mask vector that indicates the input edges.

  • partition_book (GraphPartitionBook, optional) – The graph partition book.

  • etype (str or (str, str, str), optional) – The edge type of the input edges.

  • rank (int, optional) – The rank of a process. If not given, the rank of the current process is used.

  • force_even (bool, optional) – If True, force the edges to be split evenly across processes.

  • edge_trainer_ids (1D tensor or DistTensor, optional) – If provided, assign the edges to trainers on the same machine according to the trainer ID assigned to each edge. Otherwise, assign them randomly.

Returns:

The vector of edge IDs that belong to the rank.

Return type:

1D-tensor
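
Example:

A minimal end-to-end sketch in the style of DGL's distributed training workflow; the file 'ip_config.txt', the graph name 'graph_name', and the 'train_mask' edge feature are assumptions about the setup rather than part of this API:

>>> import dgl
>>> dgl.distributed.initialize('ip_config.txt')
>>> g = dgl.distributed.DistGraph('graph_name')
>>> # Each trainer process receives a disjoint subset of the training edges.
>>> train_eids = dgl.distributed.edge_split(g.edata['train_mask'], g.get_partition_book())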