ResNet18

ResNet18 is a residual neural network architecture introduced as part of the ResNet family in 2015 by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. As the shallowest commonly used member of that family, it demonstrates that even relatively shallow networks benefit from residual learning: skip connections mitigate vanishing gradients and make deeper models effectively trainable.

The network consists of 18 weighted layers and uses a basic residual block. Each block contains two consecutive 3x3 convolutional layers, with batch normalization and a ReLU activation after each convolution. An identity skip connection adds the block's input to its output; if the input and output dimensions differ, the skip path uses a 1x1 convolution to match them.
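
As a concrete illustration, here is a minimal sketch of this basic block, assuming PyTorch as the framework (the choice of library, the class name BasicBlock, and the argument names are ours, not fixed by the text):

    import torch.nn as nn

    class BasicBlock(nn.Module):
        """Basic residual block: two 3x3 convolutions plus a skip connection."""

        def __init__(self, in_channels, out_channels, stride=1):
            super().__init__()
            # Two consecutive 3x3 convolutions, each followed by batch norm.
            self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride,
                                   padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(out_channels)
            self.conv2 = nn.Conv2d(out_channels, out_channels, 3,
                                   padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(out_channels)
            self.relu = nn.ReLU(inplace=True)
            # 1x1 convolution on the skip path when dimensions differ.
            self.shortcut = None
            if stride != 1 or in_channels != out_channels:
                self.shortcut = nn.Sequential(
                    nn.Conv2d(in_channels, out_channels, 1, stride=stride,
                              bias=False),
                    nn.BatchNorm2d(out_channels),
                )

        def forward(self, x):
            identity = x if self.shortcut is None else self.shortcut(x)
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            # The second ReLU is applied after the addition, as in the
            # original design.
            return self.relu(out + identity)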
The architecture begins with a 7x7 convolution with 64 filters and stride 2, followed by a 3x3 max pooling layer. Four stages follow, each containing two residual blocks, with channel sizes of 64, 128, 256, and 512; downsampling occurs in the first block of each stage except the first. The network ends with global average pooling and a fully connected layer for 1000-class classification.
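
Continuing the sketch under the same assumptions (PyTorch, and the BasicBlock class defined above; the class and attribute names are illustrative), the whole network can be assembled as:

    import torch
    import torch.nn as nn

    class ResNet18(nn.Module):
        def __init__(self, num_classes=1000):
            super().__init__()
            # Stem: 7x7 stride-2 convolution with 64 filters, then 3x3 max pooling.
            self.stem = nn.Sequential(
                nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),
                nn.BatchNorm2d(64),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(3, stride=2, padding=1),
            )
            # Four stages of two basic blocks each; the first block of every
            # stage after the first downsamples with stride 2.
            self.stage1 = nn.Sequential(BasicBlock(64, 64), BasicBlock(64, 64))
            self.stage2 = nn.Sequential(BasicBlock(64, 128, stride=2), BasicBlock(128, 128))
            self.stage3 = nn.Sequential(BasicBlock(128, 256, stride=2), BasicBlock(256, 256))
            self.stage4 = nn.Sequential(BasicBlock(256, 512, stride=2), BasicBlock(512, 512))
            self.pool = nn.AdaptiveAvgPool2d(1)     # global average pooling
            self.fc = nn.Linear(512, num_classes)   # classifier head

        def forward(self, x):
            x = self.stem(x)
            x = self.stage4(self.stage3(self.stage2(self.stage1(x))))
            return self.fc(torch.flatten(self.pool(x), 1))

Counting the parameters of this sketch, sum(p.numel() for p in ResNet18().parameters()), gives roughly 11.7 million, consistent with the figure quoted below.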
ResNet18 has about 11.7 million parameters. It is commonly trained on ImageNet and widely used as a backbone for transfer learning in computer vision tasks.
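
A minimal transfer-learning setup along these lines, using torchvision's pretrained resnet18 (the weights enum requires torchvision 0.13 or newer; the 10-class head is an arbitrary example):

    import torch.nn as nn
    from torchvision import models

    # Load ImageNet-pretrained weights (older torchvision versions use
    # pretrained=True instead of the weights enum).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Freeze the pretrained backbone and replace the classifier head.
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 10)  # e.g. a 10-class task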
The design emphasizes simple, uniform residual blocks to enable efficient gradient flow and faster convergence compared with plain deep networks. ResNet18 remains a popular baseline due to its balance of accuracy and computational efficiency, and is often contrasted with deeper ResNet variants that use bottleneck blocks.