Neural Networks
Linear layer with Xavier (Glorot) weight initialization.
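As a generic sketch of what Xavier (Glorot) uniform initialization computes (not this library's implementation), weights are drawn from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)):

```python
import numpy as np

def xavier_uniform(fan_in: int, fan_out: int, rng=None) -> np.ndarray:
    """Sample a (fan_in, fan_out) weight matrix from U(-a, a),
    where a = sqrt(6 / (fan_in + fan_out))."""
    rng = rng or np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Example: weights for a 64 -> 32 linear layer.
W = xavier_uniform(64, 32)
```

This scaling keeps activation variance roughly constant across layers at initialization.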
Base class for creating Message Passing Neural Networks (MPNNs) [1].
Implements a generic Graph Network block as defined in [1].
A simple non-trainable message passing layer.
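A non-trainable message passing step can be sketched as a plain adjacency-times-features product (a minimal illustration, not this library's implementation): each node aggregates the features of its neighbors with no learned parameters.

```python
import numpy as np

def message_pass(adj: np.ndarray, x: np.ndarray) -> np.ndarray:
    """One round of non-trainable message passing: each node's new
    feature is the sum of its neighbors' features (sum aggregation)."""
    return adj @ x

# Path graph 0 - 1 - 2, one-hot node features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.eye(3)
out = message_pass(adj, x)
# Node 1 aggregates the features of its neighbors 0 and 2.
```

Swapping the sum for a degree-normalized adjacency gives mean aggregation instead.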
Graph Attention Network convolution layer.
Applies a GCN convolution over input node features.
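The GCN propagation rule from Kipf & Welling can be sketched densely as D^{-1/2} (A + I) D^{-1/2} X W (a generic numpy illustration under the assumption of an undirected graph with self-loops added; real implementations use sparse operations):

```python
import numpy as np

def gcn_conv(adj: np.ndarray, x: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Dense GCN layer: symmetrically normalize A + I, then propagate."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = a_hat.sum(axis=1)                     # degrees of A + I
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))    # D^{-1/2}
    return d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ weight

# Two connected nodes, identity features and weights.
adj = np.array([[0., 1.], [1., 0.]])
out = gcn_conv(adj, np.eye(2), np.eye(2))
```

The symmetric normalization prevents high-degree nodes from dominating the aggregated features.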
Graph Isomorphism Network convolution layer from the "How Powerful are Graph Neural Networks?" paper.
GraphSAGE convolution layer from the "Inductive Representation Learning on Large Graphs" paper.
Generalized relational convolution layer from the "Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction" paper.
Applies batch normalization over a batch of features as described in the "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" paper.
Applies instance normalization over each individual example in a batch of node features as described in the "Instance Normalization: The Missing Ingredient for Fast Stylization" paper.
Applies layer normalization over each individual example in a batch of features as described in the "Layer Normalization" paper.
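The core computation shared by these normalization layers can be sketched for the layer-norm case (a minimal numpy illustration without the learnable scale and shift parameters that full implementations add): each example's features are shifted to zero mean and scaled to unit variance.

```python
import numpy as np

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each row (one example's feature vector) to
    zero mean and unit variance across the feature dimension."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])
out = layer_norm(x)
```

Batch normalization instead computes the statistics per feature across the batch, and instance normalization per example over the node dimension; only the reduction axes differ.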
Sums all node features to obtain a global graph-level representation.
Takes the feature-wise maximum value along all node features to obtain a global graph-level representation.
Takes the feature-wise mean value along all node features to obtain a global graph-level representation.
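For a single graph, these three global pooling operations reduce the node-feature matrix along the node axis; a minimal numpy sketch (batched variants additionally segment nodes by graph membership):

```python
import numpy as np

# Features for a 3-node graph, 2 features per node.
x = np.array([[1.0, 2.0],
              [3.0, 0.0],
              [5.0, 4.0]])

sum_pool = x.sum(axis=0)    # global add pooling  -> [9., 6.]
max_pool = x.max(axis=0)    # global max pooling  -> [5., 4.]
mean_pool = x.mean(axis=0)  # global mean pooling -> [3., 2.]
```

Sum pooling preserves graph-size information, while mean and max pooling are invariant to the number of nodes, which is why the choice affects expressiveness on tasks where size matters.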