MobileNets

MobileNets is a family of efficient convolutional neural networks designed for mobile and embedded vision applications. Introduced by Google researchers in 2017, the architecture emphasizes reduced computation and smaller model sizes to enable on-device inference without requiring high-end hardware.

The core idea of MobileNets is the use of depthwise separable convolutions, which split a standard convolution into a depthwise spatial convolution followed by a pointwise 1x1 convolution. This decomposition significantly lowers the number of multiply-accumulate operations and parameters compared with traditional CNNs, making the networks faster and lighter while maintaining competitive accuracy.

MobileNets has gone through several major revisions. MobileNetV1 introduced the concepts of width multiplier (alpha) and resolution multiplier to trade off accuracy against speed and memory usage. MobileNetV2 added inverted residuals and linear bottlenecks, promoting feature reuse and efficiency with skip connections when dimensions match. MobileNetV3, created with neural architecture search and platform-aware design, produced two models (Large and Small) optimized for mobile CPUs; it also incorporates squeeze-and-excite modules and new activation functions to improve performance.

In practice, MobileNets serve as backbones for a variety of computer vision tasks beyond image classification, including object detection, semantic segmentation, and on-device facial recognition. They are commonly used in mobile and edge devices where computational resources, memory, and power consumption are constrained.

Overall, MobileNets have influenced the development of lightweight neural networks by prioritizing efficiency through architectural choices like depthwise separable convolutions, residual connections, and NAS-based optimizations, sustaining a balance between accuracy, latency, and model size for on-device AI.
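
The savings from the depthwise separable factorization described above can be made concrete with a back-of-the-envelope count. The sketch below compares parameters and multiply-accumulate (MAC) counts for a standard 3x3 convolution against the depthwise-plus-pointwise pair; the layer sizes used are illustrative, not taken from any particular MobileNet configuration.

```python
# Sketch: parameter and MAC counts for a standard 3x3 convolution
# vs. the depthwise separable factorization MobileNets use.
# The layer sizes in the example run are illustrative assumptions.

def standard_conv_cost(c_in, c_out, k, h, w):
    """Params and MACs for a standard k x k convolution (stride 1, 'same' padding)."""
    params = k * k * c_in * c_out
    macs = params * h * w  # the full kernel is applied at every output position
    return params, macs

def depthwise_separable_cost(c_in, c_out, k, h, w):
    """Params and MACs for a depthwise k x k conv followed by a pointwise 1x1 conv."""
    dw_params = k * k * c_in   # one k x k filter per input channel
    pw_params = c_in * c_out   # 1x1 convolution that mixes channels
    params = dw_params + pw_params
    macs = params * h * w
    return params, macs

if __name__ == "__main__":
    c_in, c_out, k, h, w = 64, 128, 3, 56, 56
    p_std, m_std = standard_conv_cost(c_in, c_out, k, h, w)
    p_sep, m_sep = depthwise_separable_cost(c_in, c_out, k, h, w)
    print(f"standard:  {p_std:>8} params, {m_std:>12} MACs")
    print(f"separable: {p_sep:>8} params, {m_sep:>12} MACs")
    # The reduction factor approaches 1/c_out + 1/k^2, i.e. roughly 8-9x for k=3.
    print(f"reduction: {m_std / m_sep:.1f}x")
```

For a 3x3 kernel the reduction is bounded by the 1/k² term, which is why MobileNet layers come out roughly eight to nine times cheaper than their standard-convolution counterparts.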
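
MobileNetV1's two knobs act multiplicatively on layer cost: the width multiplier alpha thins every layer's channel counts, and the resolution multiplier shrinks the spatial dimensions. A minimal sketch, assuming an illustrative depthwise separable layer (the sizes and the `rho` name for the resolution multiplier are assumptions for the example, though the scaling formula matches the idea described above):

```python
# Sketch: effect of MobileNetV1's width multiplier (alpha) and resolution
# multiplier (called rho here) on the MAC count of one depthwise separable
# layer. Base layer sizes are illustrative assumptions.

def scaled_layer_macs(c_in, c_out, k, h, w, alpha=1.0, rho=1.0):
    """MACs of a depthwise separable layer with channel counts scaled
    by alpha and input resolution scaled by rho."""
    c_in_s, c_out_s = int(alpha * c_in), int(alpha * c_out)
    h_s, w_s = int(rho * h), int(rho * w)
    dw = k * k * c_in_s * h_s * w_s    # depthwise k x k convolution
    pw = c_in_s * c_out_s * h_s * w_s  # pointwise 1x1 convolution
    return dw + pw

base = scaled_layer_macs(64, 128, 3, 56, 56)
slim = scaled_layer_macs(64, 128, 3, 56, 56, alpha=0.5)  # half the channels
small = scaled_layer_macs(64, 128, 3, 56, 56, rho=0.5)   # half the resolution
# rho cuts cost quadratically through the spatial area; alpha cuts the
# pointwise term quadratically (it scales both channel counts) and the
# depthwise term linearly.
```

Both multipliers give a smooth accuracy-versus-cost trade-off, which is why published MobileNet checkpoints are indexed by values such as alpha = 1.0, 0.75, 0.5, and 0.25.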
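
The shape of MobileNetV2's inverted residual block can also be sketched without any deep-learning framework: a 1x1 expansion, a depthwise 3x3, then a linear 1x1 projection, with the skip connection applied only when stride is 1 and the input and output channel counts match. The helper name and the traced-plan representation below are illustrative; the expansion factor t = 6 is the default reported in the V2 paper.

```python
# Sketch of MobileNetV2's inverted residual block structure. This traces
# the layer sequence and channel counts rather than computing anything;
# the function name and return format are assumptions for this example.

def inverted_residual_plan(c_in, c_out, stride, t=6):
    """Return the (name, in_channels, out_channels) layer sequence and
    whether a residual skip connection is used."""
    c_mid = t * c_in  # expand into a higher-dimensional space
    layers = [
        ("expand_1x1 + ReLU6",    c_in,  c_mid),
        ("depthwise_3x3 + ReLU6", c_mid, c_mid),  # per-channel spatial filtering
        ("project_1x1 (linear)",  c_mid, c_out),  # linear bottleneck: no ReLU
    ]
    use_skip = (stride == 1 and c_in == c_out)
    return layers, use_skip

layers, skip = inverted_residual_plan(c_in=24, c_out=24, stride=1)
assert skip       # dimensions match at stride 1, so the input is added back
layers, skip = inverted_residual_plan(c_in=24, c_out=32, stride=2)
assert not skip   # downsampling block: no residual connection
```

The "inverted" naming reflects that the skip connection links the narrow bottleneck tensors rather than the wide expanded ones, and the final projection is kept linear to avoid destroying information in the low-dimensional space.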