IOPS

IOPS, or Input/Output Operations Per Second, is a common storage performance metric that measures how many individual read or write operations a storage device or system can complete in one second. IOPS is most meaningful for workloads consisting of small, random I/O requests, such as database operations. It is typically reported as separate read IOPS and write IOPS, and sometimes as a mixed I/O value with a specified read/write ratio.

Measurement and influencing factors: The reported IOPS depends on workload characteristics, including block size, queue depth, and the mix of reads and writes. Synthetic tests often use small block sizes (for example 4 KB) and vary queue depth to simulate concurrent access. Other factors include latency, caching, device type (HDD, SSD, NVMe), interconnects, and RAID configuration. Benchmarking tools such as fio or Iometer are commonly used to generate workloads and measure IOPS.
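
As a rough illustration of what such tools measure, the sketch below issues random 4 KB reads against a test file and counts completions per second. It is a minimal, single-threaded toy under assumed conditions (a pre-existing test file with a hypothetical path and duration), not a substitute for fio or Iometer; because the reads are buffered, the page cache can inflate the result.

```python
import os
import random
import time

PATH = "testfile.bin"   # hypothetical pre-existing test file
BLOCK_SIZE = 4096       # 4 KB, a common synthetic block size
DURATION = 10.0         # seconds to run

fd = os.open(PATH, os.O_RDONLY)          # os.pread below is Unix-only
blocks = os.fstat(fd).st_size // BLOCK_SIZE

ops = 0
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    # Random block-aligned offset, one small read per operation.
    offset = random.randrange(blocks) * BLOCK_SIZE
    os.pread(fd, BLOCK_SIZE, offset)
    ops += 1
os.close(fd)

print(f"random-read IOPS at queue depth 1: {ops / DURATION:.0f}")
```

Real benchmarks raise the queue depth with asynchronous or multi-threaded I/O and bypass the cache (for example with direct I/O) so that the device, not memory, is being exercised.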

Interpretation and caveats: IOPS alone do not capture sustained performance or data throughput. A system can deliver high IOPS with tiny data transfers but low bandwidth, or high bandwidth with modest IOPS. Real workloads depend on latency, queue depth, and read/write mix. For capacity planning and performance SLAs, IOPS should be evaluated together with latency, throughput, and the expected workload profile.
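
Two simple relations make the trade-off concrete: data throughput equals IOPS times block size, and by Little's law the average number of outstanding I/Os equals IOPS times average latency. A small sketch with assumed example figures:

```python
def throughput_mb_s(iops: float, block_size_bytes: int) -> float:
    """Data throughput implied by an IOPS figure at a given block size."""
    return iops * block_size_bytes / 1e6

def avg_queue_depth(iops: float, avg_latency_s: float) -> float:
    """Little's law: average outstanding I/Os = arrival rate * latency."""
    return iops * avg_latency_s

# Assumed example figures, not measurements.
print(throughput_mb_s(100_000, 4096))     # 100k IOPS at 4 KB   -> ~410 MB/s
print(throughput_mb_s(2_000, 1_048_576))  # 2k IOPS at 1 MB     -> ~2100 MB/s
print(avg_queue_depth(100_000, 0.0003))   # 100k IOPS at 300 µs -> 30 in flight
```

The same device can therefore lead a comparison on IOPS or on bandwidth depending on block size, which is why both figures should be read against the expected workload profile.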

Typical ranges: Hard disk drives generally deliver hundreds of IOPS, varying with rotational speed and seek time. SSDs provide thousands to hundreds of thousands of IOPS for small, random operations, and enterprise NVMe SSDs can reach into the low millions under favorable conditions. Actual numbers vary widely with hardware and workload.
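
The hard-drive figure follows from drive mechanics: a random operation costs roughly one average seek plus half a rotation, so IOPS can be estimated as the reciprocal of that service time. A sketch using typical datasheet-style values (assumptions, not measurements):

```python
def hdd_random_iops(avg_seek_ms: float, rpm: int) -> float:
    """Estimate random IOPS from seek time and rotational speed.

    Average rotational latency is half a revolution.
    """
    rotational_latency_ms = 60_000 / rpm / 2
    return 1_000 / (avg_seek_ms + rotational_latency_ms)

# Typical published values, assumed for illustration.
print(f"{hdd_random_iops(8.5, 7_200):.0f}")   # ~79 IOPS for a 7200 RPM drive
print(f"{hdd_random_iops(3.5, 15_000):.0f}")  # ~182 IOPS for a 15,000 RPM drive
```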

Usage: IOPS are used for benchmarking, capacity planning, and performance SLAs, and for comparing storage systems. When evaluating options, tests should reflect realistic workloads and report IOPS alongside latency and throughput.
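
One common capacity-planning step is translating a front-end workload into back-end device IOPS, because parity RAID multiplies writes; the usual write-penalty figures are 2 for RAID 10 and 4 for RAID 5. The workload numbers below are assumed for illustration:

```python
def backend_iops(total_iops: float, read_fraction: float,
                 write_penalty: int) -> float:
    """Back-end IOPS needed to serve a front-end workload on RAID.

    Each front-end write costs `write_penalty` device I/Os (e.g. RAID 5
    turns one logical write into two reads plus two writes).
    """
    reads = total_iops * read_fraction
    writes = total_iops * (1 - read_fraction)
    return reads + writes * write_penalty

# Assumed workload: 10,000 IOPS at a 70/30 read/write mix on RAID 5.
need = backend_iops(10_000, 0.70, write_penalty=4)
print(f"back-end IOPS required: {need:.0f}")  # 7000 reads + 12000 writes = 19000
```

Dividing the result by a per-device IOPS estimate gives a minimum device count, which should then be checked against the latency and throughput targets discussed above.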
