Xinyue Chen, Boxuan Li
In this project, we implemented parallel Graph Attention Networks (GATs), a variant of Graph Convolutional Networks (GCNs), using OpenMP, CUDA, and a graph vertex program, and benchmarked their performance on graphs of different sizes and connectivity. We only implemented the forward pass (inference) of the algorithm.

Graph neural networks (GNNs), a family of models for structured data, have attracted growing attention in recent years. Popular applications include node classification and graph classification. GNNs are also integrated into downstream tasks such as named entity recognition and visual question answering, to help capture the structured information inherent to those tasks.
The idea of GNNs is basically as follows. There is a feature tensor for each node, and each layer updates it by aggregating messages from the node's neighbors and applying a learned transformation.
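As a rough sketch of this per-layer update (notation ours: h_i^(l) is the feature of node i at layer l, N(i) its neighbors, W^(l) a learnable weight matrix, and sigma a nonlinearity):

```latex
h_i^{(l+1)} = \sigma\Big( \mathrm{AGGREGATE}\big( \{\, W^{(l)} h_j^{(l)} : j \in \mathcal{N}(i) \,\} \big) \Big)
```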
In GCNs, the aggregation is instantiated as average pooling over the incoming messages. GATs further incorporate an attention mechanism into the aggregation: specifically, we introduce a weight for each incoming message (from node j to node i), computed from the features of the two endpoint nodes, and aggregate the messages as a weighted sum.
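Concretely, in the standard single-head GAT formulation (Veličković et al.), with a shared weight matrix W, a learnable attention vector a, and || denoting concatenation, the attention weights and aggregation are:

```latex
e_{ij} = \mathrm{LeakyReLU}\!\left( a^{\top} \left[ W h_i \,\|\, W h_j \right] \right), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}(i)} \exp(e_{ik})}, \qquad
h_i' = \sigma\Big( \sum_{j \in \mathcal{N}(i)} \alpha_{ij} \, W h_j \Big)
```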
We could also go a step further and introduce a multi-head attention mechanism, where within each layer we use multiple independent attention heads and combine (e.g., concatenate) their outputs.
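With K heads (superscript k indexing the head), the per-head outputs are typically concatenated, or averaged at the final layer:

```latex
h_i' = \big\Vert_{k=1}^{K} \, \sigma\Big( \sum_{j \in \mathcal{N}(i)} \alpha_{ij}^{k} \, W^{k} h_j \Big)
```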
Module | Description |
---|---|
CPP | Sequential C++ version and OpenMP parallel version (see the sketch after this table) |
CUDA | CUDA parallel version |
Vertex Program | Java parallel version implemented in TinkerPop's vertex program framework |
ORACLE | PyTorch implementation of GAT |
DATA | Graph datasets |
MODELS | Parameters of trained GAT models |
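To illustrate how an OpenMP version of the forward pass can be parallelized, below is a minimal single-head sketch. It is an illustration under our own assumptions, not the project's actual code: the `CSRGraph` layout, the split of the attention vector into `a_self`/`a_neigh` halves, and the pre-transformed features `Wh = W * h` are all choices made for this example.

```cpp
// Minimal single-head GAT layer forward pass (illustrative sketch only).
// Assumes a CSR adjacency (row_ptr/col_idx listing in-neighbors), pre-transformed
// features Wh = W * h (n x f_out, row-major), and attention vector a = [a_self ; a_neigh].
#include <algorithm>
#include <cmath>
#include <vector>
#include <omp.h>

struct CSRGraph {
    int n;                        // number of nodes
    std::vector<int> row_ptr;     // size n + 1
    std::vector<int> col_idx;     // in-neighbors of each node
};

// LeakyReLU used for the attention logits
static inline float leaky_relu(float x, float slope = 0.2f) {
    return x > 0.0f ? x : slope * x;
}

// out[i] = ELU( sum_j alpha_ij * Wh[j] ), parallelized over destination nodes i
void gat_forward(const CSRGraph& g, const std::vector<float>& Wh, int f_out,
                 const std::vector<float>& a_self, const std::vector<float>& a_neigh,
                 std::vector<float>& out) {
    out.assign((size_t)g.n * f_out, 0.0f);

    // Precompute per-node halves of the attention logit: e_ij = LeakyReLU(s[i] + d[j])
    std::vector<float> s(g.n), d(g.n);
    #pragma omp parallel for
    for (int v = 0; v < g.n; ++v) {
        float sv = 0.0f, dv = 0.0f;
        for (int f = 0; f < f_out; ++f) {
            sv += a_self[f]  * Wh[(size_t)v * f_out + f];
            dv += a_neigh[f] * Wh[(size_t)v * f_out + f];
        }
        s[v] = sv; d[v] = dv;
    }

    // Each thread owns a disjoint set of destination nodes, so no synchronization is needed.
    #pragma omp parallel for schedule(dynamic, 64)
    for (int i = 0; i < g.n; ++i) {
        int beg = g.row_ptr[i], end = g.row_ptr[i + 1];
        if (beg == end) continue;

        // Softmax over incoming edges (subtract the max logit for numerical stability)
        float mx = -1e30f;
        for (int e = beg; e < end; ++e)
            mx = std::max(mx, leaky_relu(s[i] + d[g.col_idx[e]]));
        float denom = 0.0f;
        for (int e = beg; e < end; ++e)
            denom += std::exp(leaky_relu(s[i] + d[g.col_idx[e]]) - mx);

        // Weighted sum of transformed neighbor features
        for (int e = beg; e < end; ++e) {
            int j = g.col_idx[e];
            float alpha = std::exp(leaky_relu(s[i] + d[j]) - mx) / denom;
            for (int f = 0; f < f_out; ++f)
                out[(size_t)i * f_out + f] += alpha * Wh[(size_t)j * f_out + f];
        }
        for (int f = 0; f < f_out; ++f) {    // ELU nonlinearity on the aggregated result
            float x = out[(size_t)i * f_out + f];
            out[(size_t)i * f_out + f] = x > 0.0f ? x : std::expm1(x);
        }
    }
}
```

Parallelizing over destination nodes keeps each output row owned by a single thread, which avoids atomics; the trade-off is load imbalance on skewed degree distributions, which the dynamic schedule partially mitigates.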
Please see the presentation and final report.