GNNs

Graph neural networks (GNNs) are neural networks designed to operate on graph-structured data, where entities are represented as nodes and relationships as edges, possibly with node and edge features. GNNs learn node embeddings by propagating and aggregating information from neighboring nodes across multiple layers, yielding representations that capture both local neighborhood structure and longer-range graph context. The design is typically insensitive to node ordering: neighbor aggregation uses permutation-invariant functions combined with shared nonlinear transformations, so a node's representation does not depend on how its neighbors are enumerated.

A common formulation is message passing, where at each layer a node gathers messages from its neighbors, aggregates them via a permutation-invariant function such as sum, mean, or attention-weighted sum, and updates its representation with a learnable function. Variants include Graph Convolutional Networks (GCN), Graph Attention Networks (GAT), GraphSAGE, and the broader family of message passing neural networks (MPNNs).
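
In the MPNN view, a single layer can be written schematically as

    h_v^{(k+1)} = \phi\Big( h_v^{(k)},\ \bigoplus_{u \in \mathcal{N}(v)} \psi\big(h_v^{(k)}, h_u^{(k)}\big) \Big)

where \bigoplus is a permutation-invariant aggregator (sum, mean, max, or an attention-weighted sum) and \phi, \psi are learnable functions. The sketch below is a minimal illustration of one such layer in plain numpy, assuming mean aggregation and a linear-plus-ReLU update; the function and weight names are illustrative, not taken from any particular library.

```python
import numpy as np

def message_passing_layer(H, adj, W_self, W_neigh):
    """One GNN layer: mean-aggregate neighbor features, then apply a
    learnable linear update with ReLU. H: (n, d) node features,
    adj: (n, n) binary adjacency, W_self/W_neigh: (d, d') weights."""
    deg = adj.sum(axis=1, keepdims=True)   # node degrees
    deg = np.maximum(deg, 1)               # guard isolated nodes against divide-by-zero
    neigh_mean = (adj @ H) / deg           # permutation-invariant mean over neighbors
    return np.maximum(H @ W_self + neigh_mean @ W_neigh, 0.0)  # ReLU update

# Tiny example: a path graph on 3 nodes with 4-dim features.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
W_self = rng.normal(size=(4, 8))
W_neigh = rng.normal(size=(4, 8))
H1 = message_passing_layer(H, adj, W_self, W_neigh)
print(H1.shape)  # (3, 8)
```

Because the mean does not depend on the order in which neighbors are enumerated, permuting node ids (and the adjacency matrix consistently) permutes the output rows in the same way.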

GNNs are applied to node-level tasks (classification or regression), edge-level tasks (link prediction or edge classification), and graph-level tasks (graph classification or regression). They can be trained in transductive settings, where the graph is fixed during training, or inductive settings, where the model generalizes to unseen nodes or graphs. Classic benchmarks for node classification include Cora, Citeseer, and PubMed, with larger-scale graphs used in industry and research applications.
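
As a rough sketch of how one set of final-layer node embeddings H (rows indexed by node, as above) can feed all three task levels, consider the helpers below; the names are hypothetical, not a fixed API, and the scoring choices (dot product for links, mean pooling for graphs) are just common defaults.

```python
import numpy as np

def node_logits(H, W_cls):
    """Node-level: a per-node linear classifier over the embeddings."""
    return H @ W_cls

def edge_score(H, u, v):
    """Edge-level: score a candidate link (u, v) by the dot product of
    the two node embeddings, a common choice for link prediction."""
    return float(H[u] @ H[v])

def graph_embedding(H):
    """Graph-level: a permutation-invariant readout (here, mean pooling)
    turns node embeddings into a single graph representation."""
    return H.mean(axis=0)
```

A graph-level classifier then operates on graph_embedding(H) just as node_logits operates on individual rows; in inductive settings, the same learned weights are applied to embeddings computed on unseen nodes or graphs.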

Practical considerations include scalability, as full-batch training on large graphs is expensive; techniques such as neighbor sampling, mini-batching, and subgraph training address this. Challenges include oversmoothing in deep architectures, sensitivity to noisy or adversarial graph structure, and the need for models that handle heterogeneous or dynamic graphs. Extensions cover dynamic, temporal, and multimodal graphs, with ongoing work on interpretability and theoretical understanding.
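
As one illustration of the neighbor-sampling idea (a GraphSAGE-style fixed fan-out; the function below is a hypothetical sketch, not a library call), mini-batch training can cap the number of neighbors aggregated per node, bounding per-layer cost by the batch size times the fan-out rather than the full graph:

```python
import numpy as np

def sample_neighbors(neighbors, batch, fanout, rng):
    """For each node in the batch, keep at most `fanout` randomly chosen
    neighbors, so one layer of aggregation touches O(len(batch) * fanout)
    edges. neighbors: dict mapping node id -> list of neighbor ids."""
    sampled = {}
    for v in batch:
        nbrs = neighbors.get(v, [])
        if len(nbrs) > fanout:
            nbrs = list(rng.choice(nbrs, size=fanout, replace=False))
        sampled[v] = nbrs
    return sampled

rng = np.random.default_rng(0)
neighbors = {0: [1, 2, 3, 4], 1: [0], 2: [0, 3]}
print(sample_neighbors(neighbors, batch=[0, 2], fanout=2, rng=rng))
```

Stacking k such sampled layers yields the k-hop computation subgraph for a mini-batch, which is the basis of inductive training on graphs too large for full-batch message passing.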
