dgl.DGLGraph.to

DGLGraph.to(device, **kwargs)[source]

Move ndata, edata and the graph structure to the target device (CPU/GPU).

If the graph is already on the specified device, the function directly returns it. Otherwise, it returns a cloned graph on the specified device.
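The clone-or-return behavior can be sketched with a minimal stand-in class (this is an illustration of the semantics described above, not the DGL implementation):

```python
class FakeGraph:
    """Toy sketch of the clone-or-return semantics of DGLGraph.to()."""

    def __init__(self, device):
        self.device = device

    def to(self, device):
        if device == self.device:
            # Already on the target device: return the same object.
            return self
        # Otherwise: return a clone placed on the target device.
        return FakeGraph(device)


g = FakeGraph('cpu')
assert g.to('cpu') is g          # same object, no copy made
assert g.to('cuda:0') is not g   # a clone on the new device
```

In other words, `to()` is a no-op when the graph already lives on the requested device, which makes it safe to call unconditionally.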

Note that node and edge feature data are not moved to the specified device until they are accessed or materialize_data() is called.

Parameters:
  • device (Framework-specific device context object) – The context to move data to (e.g., torch.device).

  • kwargs (keyword arguments) – Keyword arguments passed to the framework's copy function.

Returns:

The graph on the specified device.

Return type:

DGLGraph

Examples

The following example uses the PyTorch backend and requires a CUDA-capable device.

>>> import dgl
>>> import torch
>>> g = dgl.graph((torch.tensor([1, 0]), torch.tensor([1, 2])))
>>> g.ndata['h'] = torch.ones(3, 1)
>>> g.edata['h'] = torch.zeros(2, 2)
>>> g1 = g.to(torch.device('cuda:0'))
>>> print(g1.device)
device(type='cuda', index=0)
>>> print(g1.ndata['h'].device)
device(type='cuda', index=0)
>>> print(g1.nodes().device)
device(type='cuda', index=0)

The original graph is still on CPU.

>>> print(g.device)
device(type='cpu')
>>> print(g.ndata['h'].device)
device(type='cpu')
>>> print(g.nodes().device)
device(type='cpu')
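To keep a script portable across machines with and without a GPU, a common pattern is to pick the device at runtime and fall back to CPU (a hedged sketch; the `g1 = g.to(device)` line assumes a graph `g` built as in the example above):

```python
import torch

# Choose a GPU if CUDA is available, otherwise stay on CPU, so the same
# code runs unchanged on GPU and CPU-only machines.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# g1 = g.to(device)  # move the graph to whichever device was selected
```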

Heterogeneous graphs behave the same way.