Petascale

Petascale describes computing systems capable of performing at least one petaFLOPS, that is, 10^15 floating-point operations per second. In practice, petascale denotes both theoretical peak performance and sustained real-world performance on large-scale workloads. The term arose with the growth of high-performance computing (HPC) in the late 2000s, as researchers pursued simulations that required extensive parallelism, memory bandwidth, and advanced interconnects.

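As a back-of-the-envelope illustration of where such a figure comes from, the short C sketch below multiplies a set of invented machine parameters (node count, cores per node, clock rate, and floating-point operations per cycle) into a theoretical peak of about 1.6 petaFLOPS. The numbers are hypothetical and do not describe any real system; sustained performance on actual workloads is typically only a fraction of this peak.

    /* Illustrative peak-performance estimate. All machine parameters
     * below are hypothetical, chosen only to show how a theoretical
     * peak above one petaFLOPS can arise. */
    #include <stdio.h>

    int main(void) {
        double nodes           = 5000.0;  /* hypothetical node count       */
        double cores_per_node  = 16.0;    /* hypothetical cores per node   */
        double clock_ghz       = 2.5;     /* hypothetical clock rate (GHz) */
        double flops_per_cycle = 8.0;     /* hypothetical FLOPs per cycle  */

        double peak = nodes * cores_per_node * clock_ghz * 1e9 * flops_per_cycle;
        printf("Theoretical peak: %.2f petaFLOPS\n", peak / 1e15);

        /* Sustained performance on real workloads is typically a fraction
         * of this peak, which is why both figures are quoted. */
        return 0;
    }
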
Petascale computing underpins advanced scientific applications, including climate and weather modeling, computational chemistry and physics, materials science, and aerospace simulations. Achieving petascale performance requires massively parallel architectures, often combining thousands to millions of processing cores with high-bandwidth memory and fast networks. Programming models such as MPI and OpenMP, and accelerator technologies like GPUs or specialized co-processors, are commonly employed to exploit the scale.

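To make the programming-model point concrete, the sketch below combines MPI for process-level parallelism with OpenMP for thread-level parallelism in the common hybrid pattern: each rank sums a slice of a global range using threads, and the partial sums are combined with MPI_Reduce. The file name, problem size, and compile command in the comments are illustrative assumptions, and real petascale applications are of course far more elaborate.

    /* Minimal hybrid MPI + OpenMP sketch. Compile with an MPI wrapper,
     * e.g.  mpicc -fopenmp hybrid_sum.c -o hybrid_sum  (illustrative). */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n = 100000000L;          /* arbitrary global problem size */
        long chunk = n / size;
        long lo = rank * chunk;
        long hi = (rank == size - 1) ? n : lo + chunk;

        /* Thread-level parallelism within each process via OpenMP. */
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (long i = lo; i < hi; i++) {
            local += 1.0 / (double)(i + 1);
        }

        /* Process-level parallelism across nodes via MPI. */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("harmonic sum over %ld terms = %f\n", n, global);
        }

        MPI_Finalize();
        return 0;
    }
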
Historically, Roadrunner at Los Alamos National Laboratory became the first computer to achieve sustained petaflop performance in 2008, signaling the transition to petascale HPC. Soon after, other systems at national laboratories and in academia surpassed the one-petaflop mark. The petascale era prompted ongoing work to improve energy efficiency, fault tolerance, and software scalability.

Since reaching petascale, the HPC field has progressed toward exascale computing, with a focus on even larger scales, better power efficiency, and more heterogeneous architectures. Petascale systems remain important benchmarks and production platforms for modeling, simulation, and data-intensive science, and they continue to influence HPC infrastructure and software ecosystems.
