{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Understand Graph Attention Network\n\n**Authors:** [Hao Zhang](https://github.com/sufeidechabei/), [Mufei Li](https://github.com/mufeili), [Minjie Wang](https://jermainewang.github.io/), and [Zheng Zhang](https://shanghai.nyu.edu/academics/faculty/directory/zheng-zhang)\n\n
The tutorial aims at gaining insights into the paper, with code as a means\n of explanation. The implementation is thus NOT optimized for running\n efficiency. For a recommended implementation, please refer to the [official\n examples](https://github.com/dmlc/dgl/tree/master/examples).
This tutorial shows how to implement a GAT from scratch. DGL provides a more\n efficient builtin GAT layer module, :class:`~dgl.nn.GATConv`.
Below is the calculation of the micro-averaged F1 score:\n\n .. math::\n\n precision=\\frac{\\sum_{t=1}^{n}TP_{t}}{\\sum_{t=1}^{n}(TP_{t}+FP_{t})}\n\n recall=\\frac{\\sum_{t=1}^{n}TP_{t}}{\\sum_{t=1}^{n}(TP_{t}+FN_{t})}\n\n F1_{micro}=2\\frac{precision\\cdot recall}{precision+recall}\n\n * $TP_{t}$ is the number of nodes that have label $t$ and are predicted to have it\n * $FP_{t}$ is the number of nodes that do not have label $t$ but are predicted to have it\n * $FN_{t}$ is the number of nodes that have label $t$ but are predicted not to have it\n * $n$ is the number of labels, i.e. $121$ in our case.
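As a concrete check of the formulas above, here is a minimal NumPy sketch (the helper name `micro_f1` and the toy arrays are our own illustration, not part of the tutorial) that pools $TP$, $FP$, and $FN$ counts over all labels before computing precision, recall, and F1:

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1 for multi-label predictions.

    y_true, y_pred: binary arrays of shape (num_nodes, num_labels),
    where entry (i, t) is 1 iff node i has (or is predicted to have) label t.
    """
    # Pool counts over ALL labels first (this is what "micro" means).
    tp = np.logical_and(y_pred == 1, y_true == 1).sum()
    fp = np.logical_and(y_pred == 1, y_true == 0).sum()
    fn = np.logical_and(y_pred == 0, y_true == 1).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 2 nodes, 3 labels -> tp=2, fp=1, fn=1,
# so precision = recall = 2/3 and F1 = 2/3.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1]])
print(micro_f1(y_true, y_pred))  # 0.666...
```

Pooling counts before averaging (rather than averaging per-label F1 scores, which would be the macro variant) makes the metric robust when some of the 121 labels are rare. The result matches `sklearn.metrics.f1_score(..., average='micro')`.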