ResNet-152

ResNet-152 is a deep convolutional neural network in the Residual Network (ResNet) family. It was introduced in 2015 by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun to enable the training of very deep networks through residual learning, which uses skip connections to improve gradient flow during backpropagation.
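The skip-connection idea can be sketched in a few lines. This is a toy illustration in NumPy, not the paper's convolutional formulation: it uses a small fully connected block, and the weights here are made-up values chosen only to show that a block with near-zero residual weights approximates the identity mapping.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: learn F(x), output relu(F(x) + x).

    The "+ x" is the skip connection: gradients can flow through it
    unchanged, which is what makes very deep stacks trainable.
    """
    f = relu(x @ w1) @ w2   # the learned residual F(x)
    return relu(f + x)      # skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
# Near-zero weights => F(x) ~ 0, so the block is close to the identity.
w1 = rng.standard_normal((8, 8)) * 0.01
w2 = rng.standard_normal((8, 8)) * 0.01

out = residual_block(x, w1, w2)
print(np.allclose(out, relu(x), atol=0.05))
```

With the residual near zero the block passes its input almost unchanged, so adding more blocks cannot easily make the network worse than a shallower one.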

Architecturally, ResNet-152 uses a bottleneck design and stacks 152 layers. The network begins with a 7×7 convolution with stride 2, followed by a 3×3 max pooling. It then consists of four stages of residual blocks: conv2_x with 3 blocks, conv3_x with 8 blocks, conv4_x with 36 blocks, and conv5_x with 3 blocks. Each bottleneck block contains a 1×1 convolution that reduces dimensionality, a 3×3 convolution, and a 1×1 convolution that restores dimensionality, with a shortcut connection added to the block's output. When downsampling occurs, the shortcut path may use a 1×1 projection to match the dimensionality. The network ends with global average pooling and a fully connected layer for 1000-class ImageNet classification.

ResNet-152 contains roughly 60 million parameters and is computationally intensive, but it delivers strong accuracy and is widely used as a backbone for image classification and transfer learning tasks. It is part of a family that includes ResNet-50 and ResNet-101, with ResNet-152 representing one of the deepest commonly used variants.
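The stage configuration above accounts for the "152" in the name. Each bottleneck block holds three weighted convolutions, and the stem convolution and final fully connected layer add one weighted layer each; a quick arithmetic check:

```python
# Stage configuration of ResNet-152, as described above.
stages = {"conv2_x": 3, "conv3_x": 8, "conv4_x": 36, "conv5_x": 3}
convs_per_bottleneck = 3  # 1x1 reduce, 3x3, 1x1 restore

blocks = sum(stages.values())                            # 50 bottleneck blocks
weighted_layers = 1 + blocks * convs_per_bottleneck + 1  # stem conv + blocks + fc
print(blocks, weighted_layers)  # → 50 152
```

The same counting scheme yields 50 and 101 weighted layers for the ResNet-50 and ResNet-101 configurations.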