In my research, many problems involve networks of different types, e.g. social networks, online-trading networks, crowd-sourcing platforms, etc. I was happy to find a powerful new tool for this kind of work: the graph convolutional network, which applies deep learning to graph-structured data.
Graph convolutional network (GCN)
There is currently no official definition of a GCN, but most graph neural network models share a somewhat universal architecture. Thomas Kipf refers to models that apply convolutions to graph-like structures as graph convolutional networks.
For these models, the goal is to learn a function of signals/features on a graph \(G=(V,E)\) which takes as input:
A feature description \(x_i\) for every node \(i\), summarized in an \(N \times D\) feature matrix \(X\) (\(N\): number of nodes, \(D\): number of input features).
A representative description of the graph structure in matrix form, typically an adjacency matrix \(A\), and produces a node-level output \(Z\) (an \(N \times F\) feature matrix, where \(F\) is the number of output features per node). Graph-level outputs can be modelled by introducing some form of pooling operation.
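To make the shapes concrete, here is a minimal sketch of a single GCN layer in NumPy, using the common symmetrically-normalized propagation rule \(H' = \sigma(\hat{D}^{-1/2}(A+I)\hat{D}^{-1/2} H W)\); the toy graph and the function name `gcn_layer` are my own illustrative choices, not from the original post.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of the self-loop graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy example: N=4 nodes on a path, D=2 input features, F=3 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 2)   # N x D feature matrix
W = np.random.randn(2, 3)   # D x F weight matrix
Z = gcn_layer(A, X, W)
print(Z.shape)  # (4, 3): an N x F node-level output
```

Each output row mixes a node's own features with those of its neighbours, which is what makes the operation a graph analogue of convolution.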
Zachary’s Karate Club
Zachary’s karate club is a commonly used social network. Its nodes represent members of a karate club and the edges their mutual relations. While Zachary was studying the karate club, a conflict arose between the administrator and the instructor which resulted in the club splitting in two.
Our task is to represent Zachary's karate club using a GCN. We will transform the graph features represented in \(X\) (\(N \times D\)) into a new output that preserves the structure of the original graph. This setup is known as a graph autoencoder.
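The forward pass of such an embedding can be sketched as follows. This is a rough, untrained sketch under my own assumptions: I use `networkx` (assumed available) for the karate club graph, identity features as a one-hot stand-in for the missing node features, and random, untrained weights, so the 2-d embedding only loosely reflects graph structure.

```python
import numpy as np
import networkx as nx  # assumed available; ships the karate club graph

def gcn_layer(A_norm, H, W):
    # One propagation step with a tanh nonlinearity.
    return np.tanh(A_norm @ H @ W)

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
N = A.shape[0]  # 34 members

# Symmetric normalization with self-loops: D^-1/2 (A+I) D^-1/2
A_hat = A + np.eye(N)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# The dataset has no node features, so use the identity (one-hot per node).
X = np.eye(N)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(N, 8))  # hidden width 8 is an arbitrary choice
W2 = rng.normal(size=(8, 2))  # embed each node into 2 dimensions

Z = gcn_layer(A_norm, gcn_layer(A_norm, X, W1), W2)
print(Z.shape)  # (34, 2): a 2-d embedding per club member
```

Even without training, neighbouring nodes tend to land near each other in the embedding, because each layer averages over neighbourhoods.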
The ipynb below illustrates graph embedding. Please note that the dataset contains no node features, which makes our model much simpler than models of real-world networks such as transaction networks or citation networks.