DPUs

DPUs, or data processing units, are purpose-built processors designed to offload data-plane tasks from host CPUs in data centers. They handle network packet processing, cryptographic operations, and storage I/O, freeing CPUs to run applications with higher throughput and lower latency. DPUs are typically deployed as PCIe cards or as system-on-a-chip components embedded in NICs or programmable accelerators.

Architecturally, DPUs combine one or more general-purpose cores (often ARM or RISC-V), dedicated data-path engines for packet processing, DMA and memory interfaces, and hardware accelerators for cryptography and compression. They run software stacks such as DPDK and SPDK, and support programmable pipelines with languages like P4 and runtimes compatible with eBPF.

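As a concrete illustration of the data-path logic such programmable pipelines express, the sketch below is a minimal XDP/eBPF packet filter in C. It follows standard libbpf conventions rather than any particular DPU vendor's SDK, and the program name and drop-all-UDP policy are purely illustrative; on a DPU, equivalent match-action logic would be compiled for and attached to the card's own cores or pipeline stages.

    /* Illustrative XDP/eBPF filter (not tied to any specific DPU SDK):
     * drop IPv4 UDP packets before they reach the host network stack. */
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/in.h>
    #include <linux/ip.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int drop_ipv4_udp(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        /* Bounds-check the Ethernet header before touching it. */
        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;
        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        /* Bounds-check the IPv4 header. */
        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;

        /* Example policy: drop all UDP traffic, pass everything else. */
        if (ip->protocol == IPPROTO_UDP)
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";
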
DPUs differ from CPUs and GPUs in focus: they are optimized for offloading data-plane tasks rather than general-purpose computation or graphics, enabling better CPU utilization, security isolation, and lower latency for networked services.

Common use cases include virtualized network functions, firewall and VPN offload, TLS or IPsec offload, storage protocol acceleration (such as NVMe over Fabrics), and policy enforcement in multi-tenant cloud environments.

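To make the TLS offload case concrete, the sketch below shows how a host application hands TLS record encryption to the kernel via kTLS, following the Linux kernel's kTLS documentation; on NICs and DPUs that advertise TLS offload, the kernel can then push the AES-GCM work into the device. The function name and the key, IV, and sequence-number buffers are placeholders supplied by the application after its TLS handshake, and the SOL_TLS fallback definition is only needed with older C library headers.

    /* Sketch of kTLS transmit setup, after a TLS 1.2 handshake has been
     * completed in user space (key material below is caller-supplied).
     * On hardware with TLS offload, record encryption can move to the
     * NIC/DPU. */
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <linux/tls.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282   /* fallback for older C library headers */
    #endif

    int enable_ktls_tx(int sock,
                       const unsigned char key[TLS_CIPHER_AES_GCM_128_KEY_SIZE],
                       const unsigned char iv[TLS_CIPHER_AES_GCM_128_IV_SIZE],
                       const unsigned char salt[TLS_CIPHER_AES_GCM_128_SALT_SIZE],
                       const unsigned char rec_seq[TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE])
    {
        /* Attach the TLS upper-layer protocol to the connected TCP socket. */
        if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;

        /* Hand the negotiated AES-GCM session material to the kernel. */
        struct tls12_crypto_info_aes_gcm_128 crypto = {0};
        crypto.info.version = TLS_1_2_VERSION;
        crypto.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(crypto.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(crypto.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(crypto.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(crypto.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        /* After this succeeds, plain send()/write() on the socket emits
         * encrypted TLS records; offload-capable devices do the crypto. */
        return setsockopt(sock, SOL_TLS, TLS_TX, &crypto, sizeof(crypto));
    }

On hardware without offload the same setup still works, with the kernel performing the encryption in software, which is one reason kTLS is a common integration point for DPU-based TLS acceleration.
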
DPUs gained prominence in the 2010s as data-center traffic and security requirements grew. Leading vendors include Nvidia with the BlueField family and Marvell with OCTEON DPUs, among others. The software ecosystem continues to evolve with DPDK, SPDK, eBPF, and P4 support, and some vendors market DPUs as successors to SmartNICs or as part of broader infrastructure processing platforms.