SRGAN

SRGAN, short for Super-Resolution Generative Adversarial Network, is a deep learning model developed for single-image super-resolution. Introduced by Ledig and colleagues in 2017, SRGAN aims to convert a low-resolution image into a higher-resolution version that preserves structure while exhibiting realistic texture details, addressing limitations of traditional pixel-wise loss methods.

At its core, SRGAN comprises two networks trained in opposition: a generator that upsamples a low-resolution input to a high-resolution image, and a discriminator that attempts to distinguish real high-resolution images from generated ones. The generator is built from residual blocks and employs an upsampling mechanism based on sub-pixel convolution (PixelShuffle) to increase resolution efficiently. The discriminator acts as a binary classifier, guiding the generator toward more natural textures.

The training objective combines a perceptual content loss with an adversarial loss. The content loss measures differences in high-level feature representations extracted from a pre-trained convolutional network such as VGG, while the adversarial loss encourages the generator to produce photo-realistic textures that fool the discriminator. This combination improves perceptual quality, particularly for 4x upscaling tasks, compared with traditional mean-squared-error-based methods.

SRGAN has influenced subsequent work in perceptual super-resolution and popularized the use of GANs for this task. It highlighted the trade-off between perceptual realism and traditional fidelity metrics like PSNR, prompting further research into perceptual losses, robust upsampling, and higher-fidelity texture synthesis.
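The sub-pixel (PixelShuffle) step mentioned above rearranges a feature map of shape (C·r², H, W) into (C, H·r, W·r), trading channels for spatial resolution. The following is a minimal plain-Python sketch of that rearrangement for illustration; the function name and list-based tensors are ours, not from the SRGAN paper, which applies this operation to real feature maps inside the network:

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) nested-list tensor into (C, H*r, W*r).

    Mirrors the sub-pixel convolution step SRGAN's generator uses to
    upscale efficiently: each group of r*r channels is interleaved into
    an r-by-r spatial block of one output channel.
    """
    cr2 = len(x)
    h, w = len(x[0]), len(x[0][0])
    assert cr2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = cr2 // (r * r)
    out = [[[0] * (w * r) for _ in range(h * r)] for _ in range(c)]
    for ch in range(c):
        for i in range(r):
            for j in range(r):
                # Channel ch*r*r + i*r + j supplies the (i, j) offset
                # inside every r-by-r output block of channel ch.
                src = x[ch * r * r + i * r + j]
                for y in range(h):
                    for z in range(w):
                        out[ch][y * r + i][z * r + j] = src[y][z]
    return out
```

For example, four 1x1 channels with r = 2 become a single 2x2 channel: `pixel_shuffle([[[1]], [[2]], [[3]], [[4]]], 2)` yields `[[[1, 2], [3, 4]]]`. Because the rearrangement is a pure memory reshuffle, it adds no learned parameters, which is why it is cheaper than transposed convolution for the same upscaling factor.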
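The combined objective described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: it assumes the VGG feature maps have already been extracted and flattened to plain lists, and the function names are our own. The 10⁻³ weighting of the adversarial term follows the original SRGAN formulation:

```python
import math

def content_loss(feat_sr, feat_hr):
    """MSE between feature representations of the super-resolved and
    ground-truth images (e.g. VGG activations), flattened to lists here."""
    n = len(feat_sr)
    return sum((a - b) ** 2 for a, b in zip(feat_sr, feat_hr)) / n

def adversarial_loss(d_fake):
    """-log D(G(LR)): small when the discriminator scores the generated
    image close to 1, i.e. mistakes it for a real high-resolution image."""
    return -math.log(d_fake)

def perceptual_loss(feat_sr, feat_hr, d_fake, adv_weight=1e-3):
    """Perceptual loss as a weighted sum: content term plus a lightly
    weighted adversarial term, as in the SRGAN objective."""
    return content_loss(feat_sr, feat_hr) + adv_weight * adversarial_loss(d_fake)
```

The small adversarial weight keeps the content term dominant, so the generator stays anchored to the ground-truth image while the adversarial term nudges it toward textures the discriminator accepts as real.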