padjust

Padjust is a general term used to describe procedures that adjust p-values when multiple statistical tests are performed. The aim is to reduce the likelihood of false positives by accounting for the increased chance of observing significant results purely by chance. An adjusted p-value can be compared directly against the chosen significance level (e.g. 0.05), because the correction for the number of tests and their dependence structure is built into the adjustment itself.

Common methods for padjust include both family-wise error rate (FWER) controls and false discovery rate (FDR) controls. FWER methods, such as the Bonferroni correction and Holm's step-down procedure, aim to limit the probability of making one or more type I errors among all tests. Other FWER methods include Hochberg's step-up procedure and Hommel's method, which can offer more power under certain conditions. FDR methods aim to control the expected proportion of false discoveries among the rejected hypotheses; the Benjamini-Hochberg (BH) procedure is the most widely used, with the Benjamini-Yekutieli (BY) adjustment providing a more conservative option under arbitrary dependence among tests.

Implementation and usage vary by software. In the R programming environment, the p.adjust function performs p-value adjustments using methods such as "holm", "hochberg", "hommel", "bonferroni", "BH", and "BY". In Python, similar functionality is provided by libraries like statsmodels, whose multipletests function implements multiple testing corrections.

Applications of padjust are common in genomics, proteomics, neuroimaging, and other fields involving large-scale hypothesis testing.
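As an illustration of how these corrections operate, the following is a minimal pure-Python sketch of three of the adjustment methods described above (Bonferroni, Holm, and BH), written to mirror the conventions of R's p.adjust: adjusted p-values are capped at 1 and returned in the input order. The function name p_adjust and its interface are illustrative, not taken from any particular library.

```python
def p_adjust(pvals, method="BH"):
    """Return adjusted p-values for a list of raw p-values.

    Illustrative reimplementation of three common corrections,
    following the conventions of R's p.adjust.
    """
    n = len(pvals)
    if method == "bonferroni":
        # Multiply each p-value by the number of tests, capping at 1.
        return [min(1.0, p * n) for p in pvals]
    if method == "holm":
        # Step-down: walk p-values in ascending order, multiplying each
        # by the number of remaining hypotheses, and enforce monotonicity
        # with a running maximum.
        order = sorted(range(n), key=lambda i: pvals[i])
        adjusted = [0.0] * n
        running_max = 0.0
        for rank, i in enumerate(order):
            running_max = max(running_max, (n - rank) * pvals[i])
            adjusted[i] = min(1.0, running_max)
        return adjusted
    if method == "BH":
        # Step-up: walk p-values in descending order, scaling each by
        # n / rank, and enforce monotonicity with a running minimum.
        order = sorted(range(n), key=lambda i: pvals[i], reverse=True)
        adjusted = [0.0] * n
        running_min = 1.0
        for k, i in enumerate(order):
            rank = n - k  # position in the ascending ordering
            running_min = min(running_min, pvals[i] * n / rank)
            adjusted[i] = running_min
        return adjusted
    raise ValueError(f"unknown method: {method}")
```

For example, p_adjust([0.01, 0.02, 0.03, 0.04], "BH") yields 0.04 for every hypothesis, while the same input under "holm" yields [0.04, 0.06, 0.06, 0.06], matching R's p.adjust on those inputs.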
When applying padjust, researchers consider the dependence structure of tests and the desired balance between discovery rate and type I error risk. Proper reporting typically includes both raw and adjusted p-values to convey the impact of multiple testing correction.
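The trade-off between discovery rate and type I error risk can be made concrete with a small comparison. The sketch below (hypothetical p-values, standard library only) counts how many hypotheses each procedure rejects at the same nominal level: Bonferroni controls the FWER and rejects fewer, while BH controls the FDR and rejects more.

```python
def count_rejections(pvals, alpha=0.05):
    """Compare rejection counts under Bonferroni (FWER) and BH (FDR)."""
    n = len(pvals)
    # Bonferroni: reject when p <= alpha / n.
    bonferroni = sum(1 for p in pvals if p <= alpha / n)
    # Benjamini-Hochberg: find the largest k such that the k-th smallest
    # p-value satisfies p_(k) <= (k / n) * alpha; reject the k smallest.
    bh = 0
    for k, p in enumerate(sorted(pvals), start=1):
        if p <= k / n * alpha:
            bh = k
    return bonferroni, bh

# Hypothetical p-values from five independent tests.
pvals = [0.001, 0.002, 0.02, 0.03, 0.2]
print(count_rejections(pvals))  # → (2, 4)
```

With these inputs, BH rejects twice as many hypotheses as Bonferroni at the same nominal level, which is the power advantage FDR control typically buys at the cost of admitting some expected proportion of false discoveries.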