SNNs

Spiking neural networks (SNNs) are a class of artificial neural networks in which neurons communicate through discrete events called spikes. Time, and in particular the precise timing of spikes, plays a central role in computation, unlike in conventional rate-based neural networks, which rely on average activations. SNNs draw inspiration from biological neural systems, where neurons emit spikes only when their membrane potential crosses a threshold.

In SNNs, neurons are typically modeled with leaky integrate-and-fire (LIF) or more complex conductance-based models. Inputs are converted into spike trains, and information is encoded either by spike rate (rate coding) or by the precise timing of spikes (temporal coding). In a network, spikes propagate along synapses that implement synaptic weights, and learning rules adjust those weights based on spike patterns.
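A minimal leaky integrate-and-fire simulation can sketch the dynamics described above; the neuron parameters (`tau`, `v_thresh`, and the reset rule) are illustrative choices, not values prescribed by the text:

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: 1-D array of input current per time step.
    Returns (voltages, spike_times), with spike times as step indices.
    """
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane decays toward rest
        # while integrating the input current.
        v += dt / tau * (v_rest - v) + dt * i_in
        if v >= v_thresh:      # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset        # reset after firing
        voltages.append(v)
    return np.array(voltages), spikes
```

Driven with a constant suprathreshold current, the neuron fires periodically, which is the behavior the threshold-and-reset description above implies.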

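Rate coding, one of the encoding schemes mentioned above, can be sketched by drawing Bernoulli spikes with probability proportional to the input value; `rate_encode` and its `max_rate` parameter are hypothetical names used for illustration:

```python
import numpy as np

def rate_encode(values, n_steps=100, max_rate=0.5, rng=None):
    """Encode real-valued inputs in [0, 1] as Poisson-like spike trains.

    Each value becomes a binary train of length n_steps whose per-step
    firing probability is proportional to the value (rate coding).
    """
    rng = np.random.default_rng(rng)
    values = np.asarray(values, dtype=float)
    p = np.clip(values, 0.0, 1.0) * max_rate   # spike probability per step
    return (rng.random((n_steps,) + values.shape) < p).astype(np.uint8)
```

Temporal coding would instead map a value to the latency of a single spike; this sketch covers only the rate-based variant.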
Training SNNs poses challenges because the spike function is non-differentiable, which complicates gradient-based optimization. Researchers address this with surrogate gradient methods, backpropagation through time, or hybrid approaches that combine spike-timing-dependent plasticity (STDP) with supervised learning signals. Unsupervised STDP can extract features, while supervised SNNs use spike-based error propagation or conversion from trained conventional networks.
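The surrogate gradient idea can be sketched as a mismatched forward/backward pair: the forward pass keeps the non-differentiable step, while the backward pass substitutes a smooth approximation of its derivative. The fast-sigmoid-style surrogate and the `beta` sharpness parameter below are illustrative assumptions, not a prescribed choice:

```python
import numpy as np

def spike_forward(v, v_thresh=1.0):
    # Forward pass: non-differentiable Heaviside step on the membrane
    # potential; emits 1 where v crosses threshold, 0 elsewhere.
    return (v >= v_thresh).astype(float)

def spike_surrogate_grad(v, v_thresh=1.0, beta=10.0):
    # Backward pass: replace the step's zero-almost-everywhere
    # derivative with a smooth surrogate that peaks at threshold
    # (derivative of a fast sigmoid), so gradients can flow.
    return beta / (2.0 * (1.0 + beta * np.abs(v - v_thresh)) ** 2)
```

In an autograd framework the two functions would be registered as the forward and custom-backward rule of a single operation; here they are shown separately for clarity.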

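Pair-based STDP, the plasticity rule mentioned above, can be sketched as an exponentially decaying weight update whose sign depends on the pre/post spike-time order; the learning rates and time constants below are conventional illustrative values, not ones from the text:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike
    precedes the postsynaptic one, depress otherwise."""
    dt = t_post - t_pre
    if dt > 0:       # pre before post -> potentiation
        dw = a_plus * np.exp(-dt / tau_plus)
    else:            # post before (or with) pre -> depression
        dw = -a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))
```

Applied unsupervised over many spike pairs, updates of this shape strengthen synapses whose inputs reliably predict output spikes, which is how STDP can extract features.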
SNNs have gained prominence alongside neuromorphic hardware designed for event-driven computation and low power consumption, such as Intel's Loihi and IBM's TrueNorth. These platforms exploit sparsity and asynchronous processing to achieve energy efficiency, making them suitable for real-time sensing and embedded applications.
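The event-driven style these platforms exploit can be illustrated in software: synaptic input is accumulated only for neurons that actually spiked, so the cost scales with spike count rather than network size. This is a conceptual sketch with hypothetical names, not hardware code:

```python
import numpy as np

def propagate_events(weights, spiking_ids):
    """Event-driven synaptic update: accumulate postsynaptic input
    only from neurons that spiked, instead of a dense
    matrix-vector product over the whole population."""
    post_input = np.zeros(weights.shape[1])
    for i in spiking_ids:   # touch only active rows (sparsity)
        post_input += weights[i]
    return post_input
```

With sparse activity the loop visits only a few rows per time step, which is the software analogue of the energy savings claimed for event-driven hardware.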

Applications include the processing of temporal and event-based data, robotics, and audio and vision tasks using event cameras.

Challenges remain in training efficiency, standard benchmarking, and integrating SNNs with conventional deep learning workflows. Ongoing research seeks to close the gap between biological plausibility, computational efficiency, and practical performance.