GCNs

Graph convolutional networks (GCNs) are neural networks designed to operate on graph-structured data. They extend the convolution operation to graphs by learning node representations through iterative neighborhood aggregation. In a typical GCN layer, each node updates its feature vector by combining the features of its neighbors, weighted by a normalized adjacency structure. A common formulation uses the symmetrically normalized adjacency matrix: H^{l+1} = sigma( D̃^{-1/2} Ã D̃^{-1/2} H^l W^l ), where Ã = A + I adds self-loops, D̃ is the degree matrix of Ã, H^l is the matrix of node features at layer l, W^l is a learnable weight matrix, and sigma is a nonlinearity. The input H^0 contains the initial node features.
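
As a concrete illustration, here is a minimal NumPy sketch of a single propagation step for a dense adjacency matrix; the function name gcn_layer and the choice of tanh as the nonlinearity are illustrative, not taken from any particular library.

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One GCN propagation step: H' = sigma(D_tilde^{-1/2} A_tilde D_tilde^{-1/2} H W)."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                             # A_tilde = A + I (self-loops)
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))     # diagonal of D_tilde^{-1/2}
    A_hat = A_tilde * np.outer(d_inv_sqrt, d_inv_sqrt)  # D_tilde^{-1/2} A_tilde D_tilde^{-1/2}
    return activation(A_hat @ H @ W)

# Example: a 4-node path graph, mapping 3-dimensional features to 2 dimensions.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H1 = gcn_layer(A, rng.random((4, 3)), rng.random((3, 2)))  # shape (4, 2)
```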

Two main families exist: spectral GCNs, based on graph signal processing with filters defined in the Fourier domain, and spatial GCNs, which perform neighborhood aggregation directly on the graph. The Kipf and Welling (2017) simplification of the spectral approach popularized the first-order approximation and is widely used.
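
For context, the first-order approximation can be sketched as follows; this reconstruction follows the standard derivation, writing L = I - D^{-1/2} A D^{-1/2} for the normalized graph Laplacian and assuming lambda_max is approximately 2.

```latex
% First-order Chebyshev approximation of a spectral filter on a signal x,
% with L = I - D^{-1/2} A D^{-1/2} and \lambda_{\max} \approx 2:
g_\theta \star x \approx \theta_0 x + \theta_1 (L - I)\,x
                 = \theta_0 x - \theta_1 D^{-1/2} A D^{-1/2} x
% Tying the parameters, \theta = \theta_0 = -\theta_1:
g_\theta \star x \approx \theta \left( I + D^{-1/2} A D^{-1/2} \right) x
% The renormalization trick then substitutes
% I + D^{-1/2} A D^{-1/2} \;\to\; \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2},
% with \tilde{A} = A + I, giving the layer rule quoted above.
```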

Applications include semi-supervised node classification on networks, graph classification, link prediction, and molecular property prediction. GCNs can scale to larger graphs with sampling and sparse computations, and have inspired many extensions such as Graph Attention Networks (GATs), GraphSAGE, and various normalization and residual strategies.
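
As one illustration of the sparse-computation point, here is a sketch using scipy.sparse, given a SciPy sparse adjacency matrix A; the helper name normalized_adjacency is hypothetical, not a library API.

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A):
    """Return D_tilde^{-1/2} A_tilde D_tilde^{-1/2} as a sparse CSR matrix."""
    n = A.shape[0]
    A_tilde = (A + sp.eye(n)).tocsr()               # add self-loops
    deg = np.asarray(A_tilde.sum(axis=1)).ravel()   # degrees of A_tilde
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    return (d_inv_sqrt @ A_tilde @ d_inv_sqrt).tocsr()

# With A_hat precomputed once, each layer is two sparse-dense products,
# roughly O(|E| * d) time instead of O(n^2 * d):
#   H_next = activation(A_hat @ H @ W)
```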

Limitations include over-smoothing with many layers, sensitivity to graph quality, and computational challenges on very large or dynamic graphs.
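
To make the over-smoothing point concrete, here is a toy sketch (the random graph and features are invented for illustration): repeated propagation with the normalized adjacency drives degree-normalized node features toward a common vector, so deep stacks of propagation steps make nodes hard to distinguish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
A = A + A.T                                            # symmetric toy adjacency
A_tilde = A + np.eye(n)
deg = A_tilde.sum(axis=1)
A_hat = A_tilde * np.outer(deg ** -0.5, deg ** -0.5)   # normalized adjacency

H = rng.random((n, 4))
for k in (1, 2, 4, 8, 16, 32):
    H_k = np.linalg.matrix_power(A_hat, k) @ H
    # The spread of degree-normalized features across nodes shrinks toward
    # zero as k grows (assuming the sampled graph is connected).
    print(k, (H_k / np.sqrt(deg)[:, None]).std(axis=0).mean())
```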

Overall, GCNs provide a practical framework for learning from graph-structured data by combining feature information with graph topology.