PopTorch

PopTorch is a library by Graphcore that enables PyTorch models to run on Graphcore IPU hardware. It provides PyTorch-compatible wrappers and utilities that compile and execute neural networks on IPUs via the Poplar toolchain, allowing developers to leverage IPU performance without rewriting code.

Core components include trainingModel and inferenceModel wrappers that convert a PyTorch module into an IPU-optimized version.
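
A minimal sketch of the wrapping step, assuming a toy torch.nn.Linear model and illustrative shapes (neither comes from this page); trainingModel additionally takes an optimizer and expects the loss to be computed and returned inside forward, as in the fuller example under the usage pattern below.

```python
import torch
import poptorch

model = torch.nn.Linear(16, 2)   # any torch.nn.Module; this one is just for illustration
opts = poptorch.Options()

# Wrap the module for forward-only execution on the IPU.
inference_model = poptorch.inferenceModel(model, options=opts)

# The graph is compiled for the IPU on the first call, then reused.
out = inference_model(torch.randn(4, 16))
```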

PopTorch integrates with standard PyTorch APIs, including DataLoader and optimizers, preserving familiar workflows while targeting IPUs.
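
For example, poptorch.DataLoader and poptorch.optim are drop-in counterparts of the standard classes; the dataset and hyperparameters below are illustrative assumptions.

```python
import torch
import poptorch

# Toy in-memory dataset standing in for real data.
dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 16), torch.randint(0, 2, (256,)))

opts = poptorch.Options()

# Mirrors torch.utils.data.DataLoader, but sizes host batches to match the
# IPU options (device iterations, replication, gradient accumulation).
loader = poptorch.DataLoader(opts, dataset, batch_size=32, shuffle=True)

# IPU-aware variant of torch.optim.SGD with the same basic interface.
model = torch.nn.Linear(16, 2)
optimizer = poptorch.optim.SGD(model.parameters(), lr=0.01)
```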

Overview: PopTorch is part of Graphcore's software stack and interfaces with the Poplar compiler to enable efficient IPU execution. It aims to simplify porting PyTorch workloads to IPUs and to promote IPU-accelerated machine learning in research and production settings.

An Options object exposes IPU-specific settings such as device iterations, micro-batch size, and replication and pipeline configurations. PopTorch manages data transfer and memory to optimize performance and minimize host-device communication. It supports both training and inference and works with common loss functions and metrics.
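
As a sketch, the settings named above map onto Options calls like these (the specific values are illustrative):

```python
import poptorch

opts = poptorch.Options()
opts.deviceIterations(16)               # iterations run per host-to-IPU interaction
opts.replicationFactor(2)               # data-parallel replicas of the model
opts.Training.gradientAccumulation(4)   # micro-batches accumulated per weight update

# The micro-batch size itself is set on the DataLoader / input tensors rather
# than on Options; Options controls how those micro-batches are scheduled.
```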

Usage pattern: define a PyTorch model, create a PopTorch Options object with the desired IPU settings, wrap the model with trainingModel or inferenceModel, and run training or inference using conventional PyTorch data loading and loops. The program executes on Graphcore IPUs, either on local IPU hardware or via Graphcore cloud offerings.
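
Putting the steps together, a minimal end-to-end sketch might look like this (the model, data, and hyperparameters are illustrative assumptions, not taken from this page):

```python
import torch
import poptorch

class TinyClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(16, 2)
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x, labels=None):
        out = self.fc(x)
        if labels is None:
            return out
        # Returning the loss lets the training wrapper optimize it on the IPU.
        return out, self.loss(out, labels)

dataset = torch.utils.data.TensorDataset(
    torch.randn(256, 16), torch.randint(0, 2, (256,)))

opts = poptorch.Options()
opts.deviceIterations(4)

model = TinyClassifier()
optimizer = poptorch.optim.SGD(model.parameters(), lr=0.01)
train_loader = poptorch.DataLoader(opts, dataset, batch_size=8, shuffle=True)
training_model = poptorch.trainingModel(model, options=opts, optimizer=optimizer)

# Conventional-looking loop; compilation for the IPU happens on the first call.
for epoch in range(3):
    for x, y in train_loader:
        _, loss = training_model(x, y)

training_model.copyWeightsToHost()   # sync trained weights back to the host module

# Forward-only execution with the same weights.
inference_model = poptorch.inferenceModel(model.eval(), options=poptorch.Options())
preds = inference_model(torch.randn(8, 16)).argmax(dim=-1)
```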

PopTorch is maintained by Graphcore with community and enterprise support.