VMAF

VMAF, or Video Multimethod Assessment Fusion, is a full-reference perceptual video quality metric designed to predict human judgments of video quality. It aims to provide a single score that correlates with subjective opinions by fusing multiple quality indicators through a regression model trained on MOS (mean opinion score) data.

The metric combines outputs from several elementary quality measurements, chiefly Visual Information Fidelity (VIF) and the Detail Loss Metric (DLM), which gauges how well structural detail is preserved, along with temporal information that captures motion and flicker. A machine-learning fusion model, a support vector machine regressor in the default models, integrates these features into a final VMAF score, typically scaled from 0 to 100, where higher values indicate better quality.
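
As an illustration of the fusion step, the Python sketch below trains a small support vector regressor on made-up per-clip feature vectors and subjective scores, then pools per-frame predictions into a clip-level value on the 0 to 100 scale. The feature values, training data, and use of scikit-learn are illustrative assumptions, not the released VMAF model or its training set.

    # Illustrative sketch of VMAF-style feature fusion; not the released model.
    import numpy as np
    from sklearn.svm import SVR

    # Hypothetical training data: per-clip feature vectors [VIF, DLM, motion]
    # paired with mean opinion scores (MOS) collected from viewers.
    train_features = np.array([
        [0.95, 0.98, 12.0],   # lightly compressed clip
        [0.80, 0.90, 15.0],
        [0.55, 0.70, 30.0],
        [0.30, 0.45, 35.0],   # heavily compressed clip
    ])
    train_mos = np.array([92.0, 80.0, 55.0, 30.0])  # 0-100 subjective scores

    # Fit a support vector regressor to act as the fusion model.
    fusion = SVR(kernel="rbf", C=10.0).fit(train_features, train_mos)

    # Per-frame features for a new distorted clip (again, made-up numbers).
    frame_features = np.array([
        [0.78, 0.88, 14.0],
        [0.74, 0.85, 20.0],
        [0.70, 0.83, 22.0],
    ])
    frame_scores = fusion.predict(frame_features)

    # Pool the per-frame predictions (mean) and clamp to the 0-100 scale.
    clip_score = float(np.clip(frame_scores.mean(), 0.0, 100.0))
    print(f"Illustrative VMAF-like score: {clip_score:.1f}")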

VMAF was developed by Netflix with contributions from academic researchers, notably at the University of Southern California, and was released as an open-source project in 2016. The reference implementation is available on GitHub and supports common computing platforms, with implementations in C and Python that run on Linux, Windows, and macOS. The score is designed to reflect perceived quality across a range of content types, bitrates, resolutions, and color depths.
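
A common way to try the metric without building anything is FFmpeg's libvmaf filter, as in the Python sketch below. It assumes an FFmpeg binary compiled with libvmaf support and hypothetical file names; option spellings and log output vary across FFmpeg and libvmaf versions, so treat this as a sketch rather than a canonical invocation.

    # Sketch: score a distorted encode against its reference with FFmpeg's
    # libvmaf filter. Assumes an FFmpeg binary built with --enable-libvmaf.
    import subprocess

    reference = "reference.mp4"   # hypothetical pristine source
    distorted = "distorted.mp4"   # hypothetical encode to evaluate

    # The distorted clip is the first input and the reference the second;
    # "-f null -" discards decoded frames so only the filter's log remains.
    cmd = [
        "ffmpeg", "-hide_banner",
        "-i", distorted,
        "-i", reference,
        "-lavfi", "libvmaf",
        "-f", "null", "-",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)

    # The filter reports a summary line such as "VMAF score: 93.4" in its log.
    for line in result.stderr.splitlines():
        if "VMAF score" in line:
            print(line.strip())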

Applications of VMAF include benchmarking and comparing video encoders, guiding bitrate ladders and encoding parameters in streaming pipelines, and monitoring quality across content libraries. It has been widely adopted in the industry for codec evaluation and quality control due to its emphasis on perceptual relevance and its ability to align with subjective viewing experience.
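
As a concrete example of ladder tuning, the sketch below keeps the cheapest rung of a bitrate ladder whose VMAF clears a target value. The compute_vmaf callable and the 93-point target are hypothetical placeholders for whatever measurement pipeline and quality threshold a given service actually uses.

    # Sketch: pick the lowest-bitrate rung whose VMAF meets a quality target.
    # compute_vmaf is a hypothetical callable (e.g. wrapping the FFmpeg call
    # above); the default target of 93.0 is an arbitrary example value.

    def choose_rung(reference, encodes, compute_vmaf, target=93.0):
        """encodes: list of (bitrate_kbps, path) pairs sorted by ascending bitrate."""
        last = None
        for bitrate_kbps, path in encodes:
            score = compute_vmaf(reference, path)
            last = (bitrate_kbps, score)
            if score >= target:
                return last          # cheapest rung that meets the target
        return last                  # nothing met the target; report the top rung

    # Example usage with made-up rungs:
    # ladder = [(1500, "enc_1500k.mp4"), (3000, "enc_3000k.mp4"), (6000, "enc_6000k.mp4")]
    # bitrate, score = choose_rung("reference.mp4", ladder, compute_vmaf)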

Users should note that VMAF relies on MOS-based training and may require calibration for specific displays, color spaces, or content characteristics to maintain predictive accuracy.