HGX

HGX is a family of GPU accelerator platforms developed by NVIDIA for data centers. It comprises open, modular baseboards and associated reference designs intended as building blocks for AI, HPC, and high-density server deployments. Variants such as HGX-1 and HGX-2 have been used by NVIDIA and its ecosystem to build scalable multi-GPU configurations within a single chassis.

Architecture and features: HGX designs are centered on providing a high-density, high-bandwidth foundation for GPU acceleration. The baseboards are designed to host multiple NVIDIA GPUs, along with CPUs, memory, storage, and networking components, to form a complete server solution. Interconnects across GPUs in HGX designs emphasize rapid data transfer, using mechanisms such as NVLink and, in certain generations, NVSwitch to enable efficient multi-GPU communication. The platforms are engineered for effective cooling and power delivery to support sustained workloads in data centers. HGX reference designs have formed the internal platform for several NVIDIA-based systems, including DGX appliances, and have been adopted by various OEMs and cloud providers to accelerate deployment of AI training and inference workloads.

History and usage: NVIDIA introduced HGX designs to standardize the construction of AI and HPC servers.

Impact: The HGX family helped standardize high-density GPU server platforms, enabling faster time-to-market for AI servers and scalable, enterprise-grade accelerator deployments in data centers. It reflects NVIDIA's approach to modular, interoperable hardware that supports evolving GPU generations and workloads.