PSO

Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique inspired by the collective behavior of birds flocking or fish schooling. It was introduced by James Kennedy and Russell C. Eberhart in 1995 as a simple, gradient-free method for searching nonlinear objective functions. PSO has both continuous and discrete variants and is widely used for real-valued optimization tasks, as well as for feature selection and combinatorial problems in its discrete forms.

In the standard continuous PSO, a swarm of particles explores a search space. Each particle has a position x_i and a velocity v_i. Each particle remembers its best position encountered so far, pbest_i, and the swarm keeps track of the best global position found, gbest. Iteratively, velocities are updated by v_i = w v_i + c1 r1 (pbest_i − x_i) + c2 r2 (gbest − x_i), and positions are updated by x_i = x_i + v_i. Here w is the inertia weight, c1 and c2 are cognitive and social coefficients, and r1, r2 are random numbers in [0,1]. Variants include local-best PSO (lbest) and approaches using a constriction factor to improve stability.

Binary PSO and other discrete forms map velocity to probabilities to handle categorical decisions, making PSO suitable for feature selection and combinatorial optimization. Numerous enhancements exist, including bare-bones PSO, quantum-behaved PSO, inertia-weight schedules, velocity clamping, and multi-objective variants.

Applications span engineering design, neural network training, control systems, scheduling, and other optimization problems. PSO is valued for its simplicity, few tunable parameters, and gradient-free search, but it can suffer from premature convergence and stagnation in local optima, particularly in high-dimensional or complex landscapes.
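
The velocity and position updates described above can be sketched in plain Python. This is a minimal illustration, not a reference implementation: the swarm size, iteration count, coefficient values (w = 0.7, c1 = c2 = 1.5), search bounds, and the sphere test function are all illustrative assumptions.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with global-best (gbest) PSO."""
    # Initialize random positions and zero velocities.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                   # personal best positions
    pbest_val = [f(xi) for xi in x]               # personal best values
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # v_i = w v_i + c1 r1 (pbest_i - x_i) + c2 r2 (gbest - x_i)
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]                # x_i = x_i + v_i
            val = f(x[i])
            if val < pbest_val[i]:               # update personal best
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:              # update global best
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function sum(x_d^2); the optimum is 0 at the origin.
best_x, best_val = pso(lambda xs: sum(t * t for t in xs), dim=3)
```

On a smooth unimodal function like the sphere, this gbest topology converges quickly; on multimodal landscapes an lbest neighborhood or velocity clamping is often preferred to delay the premature convergence noted above.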
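
The binary variant can be sketched the same way: the velocity update is unchanged, but each bit is resampled with probability sigmoid(v), following the Kennedy and Eberhart binary PSO. The one-max objective (maximize the number of 1 bits) and all parameter values here are illustrative assumptions.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def binary_pso(f, n_bits, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Maximize f over bit strings with binary PSO."""
    x = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    v = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = random.random(), random.random()
                # Velocity update is identical to the continuous form.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Position update: sample the bit with probability sigmoid(v).
                x[i][d] = 1 if random.random() < sigmoid(v[i][d]) else 0
            val = f(x[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

# Example: one-max — maximize the count of 1 bits (optimum: all ones).
best_bits, best_score = binary_pso(sum, n_bits=16)
```

Since a bit flips whenever the sigmoid-sampled coin disagrees with its current value, large |v| values freeze bits toward 0 or 1; this is why the discrete form handles categorical on/off decisions such as feature selection.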