GCNs
Graph convolutional networks (GCNs) are neural networks designed to operate on graph-structured data. They extend the convolution operation to graphs by learning node representations through iterative neighborhood aggregation. In a typical GCN layer, each node updates its feature vector by combining the features of its neighbors, weighted by a normalized adjacency structure. A common formulation uses the symmetrically normalized adjacency matrix: H^{l+1} = sigma( D^{-1/2} Ã D^{-1/2} H^l W^l ), where Ã = A + I adds self-loops, D is the degree matrix of Ã, H^l is the matrix of node features at layer l, W^l is a learnable weight matrix, and sigma is a nonlinearity. The input H^0 contains the initial node features.
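The layer update above can be sketched in a few lines of NumPy. This is a minimal, dense illustration on a hypothetical 3-node graph with random features; practical implementations use sparse operations and a deep learning framework.

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One GCN layer: H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, f_in) node features,
    W: (f_in, f_out) weight matrix. A sketch for small dense graphs.
    """
    n = A.shape[0]
    A_tilde = A + np.eye(n)                  # add self-loops
    d = A_tilde.sum(axis=1)                  # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    return activation(A_hat @ H @ W)

# Toy 3-node path graph with 2-d input features and 4-d output features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H0 = rng.normal(size=(3, 2))   # initial node features H^0
W0 = rng.normal(size=(2, 4))   # learnable weights W^0
H1 = gcn_layer(A, H0, W0)
print(H1.shape)  # (3, 4)
```

Each output row mixes a node's own features with those of its neighbors, which is exactly the neighborhood aggregation the formula expresses.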
Two main families exist: spectral GCNs, based on graph signal processing with filters defined in the Fourier domain of the graph Laplacian, and spatial GCNs, which aggregate features directly over a node's local neighborhood. The layer above can be viewed as a first-order approximation of a spectral filter, so it bridges both perspectives.
Applications include semi-supervised node classification on citation networks, graph classification, link prediction, and molecular property prediction. GCNs are trained end-to-end with backpropagation, using a task-specific loss that, in the semi-supervised setting, is computed only on the labeled nodes.
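The semi-supervised setup can be illustrated as follows: a two-layer GCN scores every node, but the cross-entropy loss uses only the labeled subset. The graph, features, weights, and label assignment here are hypothetical, and no training step is performed; the point is how the loss mask works.

```python
import numpy as np

def normalize(A):
    """Symmetrically normalized adjacency with self-loops."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    return np.diag(d**-0.5) @ A_tilde @ np.diag(d**-0.5)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = normalize(A)
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))        # node features
W1 = rng.normal(size=(3, 8))       # layer-1 weights (untrained)
W2 = rng.normal(size=(8, 2))       # layer-2 weights, 2 classes

H = np.maximum(A_hat @ X @ W1, 0)  # ReLU hidden layer
P = softmax(A_hat @ H @ W2)        # class probabilities for every node

labeled = [0, 3]                   # only two nodes carry labels
y = {0: 0, 3: 1}
loss = -np.mean([np.log(P[i, y[i]]) for i in labeled])
print(P.shape, loss > 0)
```

Unlabeled nodes still influence the result: their features flow into the labeled nodes' representations through the normalized adjacency.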
Limitations include over-smoothing, in which node representations become nearly indistinguishable as layers are stacked; sensitivity to the quality of the input graph, since missing or spurious edges propagate errors; and computational challenges on very large graphs, which motivate neighbor sampling and mini-batch training.
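Over-smoothing is easy to observe numerically. Repeatedly applying the normalized propagation A_hat = D^{-1/2}(A + I)D^{-1/2}, with the weights and nonlinearity stripped away for clarity, drives the feature matrix toward rank one, so all node representations become proportional to the same vector. A toy 4-node graph, chosen for illustration:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_tilde = A + np.eye(4)                       # self-loops
d = A_tilde.sum(axis=1)
A_hat = np.diag(d**-0.5) @ A_tilde @ np.diag(d**-0.5)

H = np.eye(4)          # one-hot features: nodes start maximally distinct
for _ in range(200):   # propagate many times, no weights or nonlinearity
    H = A_hat @ H

# H collapses toward a rank-1 matrix: the second singular value vanishes,
# meaning the nodes are no longer distinguishable from their features.
s = np.linalg.svd(H, compute_uv=False)
print(s[1] < 1e-6)  # True
```

Real GCNs interleave weights and nonlinearities, which slows but does not eliminate this collapse; it is one reason most GCNs in practice use only two to four layers.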