Logrank test

The logrank test is a nonparametric statistical test used to compare the survival distributions of two or more groups using time-to-event data. It is commonly applied in clinical trials to determine whether treatments have different effects on survival, while accounting for censored observations.
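
For a concrete sense of the inputs, here is a minimal sketch of a two-sample comparison using the lifelines Python package, assuming its logrank_test interface; the durations and event indicators below are invented purely for illustration.

```python
# Minimal sketch: two-sample logrank test via the lifelines package
# (pip install lifelines). Durations are follow-up times; the event
# indicator is 1 if the event was observed and 0 if the observation
# was censored. The data values are invented for illustration.
from lifelines.statistics import logrank_test

durations_a = [6, 7, 10, 15, 19, 25]   # follow-up times, group A
events_a    = [1, 0, 1, 1, 0, 1]       # 1 = event observed, 0 = censored

durations_b = [5, 6, 8, 11, 13, 20]    # follow-up times, group B
events_b    = [1, 1, 1, 1, 0, 1]

result = logrank_test(durations_a, durations_b,
                      event_observed_A=events_a,
                      event_observed_B=events_b)

print(result.test_statistic)  # chi-squared statistic (1 degree of freedom)
print(result.p_value)
```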

The test operates on data organized by observed event times. At each event time t, for each group i, n_i(t) denotes the number at risk just before t, and d(t) the total number of events at t. The total at risk is N(t) = sum_i n_i(t). Under the null hypothesis that all groups share the same survival curve, the expected number of events in group i at t is E_i(t) = d(t) * [n_i(t) / N(t)], and the observed events are O_i(t). The logrank statistic aggregates the differences O_i(t) − E_i(t) over all event times and groups.

A common formulation uses the overall statistic X^2 = sum_{i=1}^g (O_i − E_i)^2 / Var_i, where g is the number of groups, O_i = sum_t O_i(t) and E_i = sum_t E_i(t) are the totals over all event times, and Var_i is the variance of the difference for group i. Under the null hypothesis, X^2 follows a chi-squared distribution with g − 1 degrees of freedom. For two groups, this reduces to the familiar one-degree-of-freedom test.
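
As an illustration of these quantities, the sketch below computes the two-group statistic from scratch, accumulating O_1, E_1, and the usual hypergeometric variance at each event time; the function name and data are invented, and scipy is assumed only for the chi-squared p-value.

```python
# From-scratch sketch of the two-group logrank statistic, following the
# quantities defined above. Data values are invented for illustration.
import numpy as np
from scipy.stats import chi2

def logrank_two_group(time, event, group):
    """time: follow-up times; event: 1 = event observed, 0 = censored;
    group: 0/1 group labels. Returns (chi-squared statistic, p-value)."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    group = np.asarray(group, dtype=int)

    O1 = E1 = V1 = 0.0
    for t in np.unique(time[event == 1]):          # distinct event times
        at_risk = time >= t                        # at risk just before t
        n1 = np.sum(at_risk & (group == 1))        # n_1(t): at risk in group 1
        N = np.sum(at_risk)                        # N(t): total at risk
        d = np.sum((time == t) & (event == 1))     # d(t): total events at t
        d1 = np.sum((time == t) & (event == 1) & (group == 1))  # O_1(t)

        O1 += d1
        E1 += d * n1 / N                           # E_1(t) = d(t) * n_1(t) / N(t)
        if N > 1:                                  # hypergeometric variance at t
            V1 += d * (n1 / N) * (1 - n1 / N) * (N - d) / (N - 1)

    chi_sq = (O1 - E1) ** 2 / V1
    return chi_sq, chi2.sf(chi_sq, df=1)

# Toy data: pooled follow-up times with a 0/1 group label (invented values).
times  = [6, 7, 10, 15, 19, 25, 5, 6, 8, 11, 13, 20]
events = [1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1]
groups = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
stat, p = logrank_two_group(times, events, groups)
print(f"chi-squared = {stat:.3f}, p = {p:.3f}")
```
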
Assumptions include non-informative censoring and independence between groups. The logrank test is most powerful against alternatives consistent with proportional hazards, though it does not require a specific parametric survival model. It remains a standard tool for comparing survival curves in medical research and other fields dealing with time-to-event data.

Variants and related tests, such as the Wilcoxon (Breslow) and Tarone-Ware tests, weight early or late events differently, offering robustness to certain alternative patterns.
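
As a sketch of how the weighting changes the calculation, assuming the usual conventions w(t) = 1 for logrank, w(t) = N(t) for Wilcoxon (Breslow), and w(t) = sqrt(N(t)) for Tarone-Ware, the two-group statistic becomes [sum_t w(t) * (O_1(t) − E_1(t))]^2 / sum_t w(t)^2 * V_1(t). The helper below mirrors the earlier from-scratch function; names are invented for illustration.

```python
# Sketch of the weighted family (two-group case): the logrank, Wilcoxon
# (Breslow), and Tarone-Ware tests differ only in the weight w(t)
# applied at each event time.
import numpy as np
from scipy.stats import chi2

WEIGHTS = {
    "logrank":     lambda N: 1.0,          # w(t) = 1
    "wilcoxon":    lambda N: float(N),     # w(t) = N(t), emphasizes early events
    "tarone-ware": lambda N: np.sqrt(N),   # w(t) = sqrt(N(t)), intermediate
}

def weighted_logrank(time, event, group, kind="logrank"):
    """Two-group weighted logrank family; kind selects the weight function."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    group = np.asarray(group, dtype=int)
    w_of = WEIGHTS[kind]

    num = var = 0.0
    for t in np.unique(time[event == 1]):          # distinct event times
        at_risk = time >= t
        n1 = np.sum(at_risk & (group == 1))        # n_1(t)
        N = np.sum(at_risk)                        # N(t)
        d = np.sum((time == t) & (event == 1))     # d(t)
        d1 = np.sum((time == t) & (event == 1) & (group == 1))  # O_1(t)

        w = w_of(N)
        num += w * (d1 - d * n1 / N)               # w(t) * (O_1(t) - E_1(t))
        if N > 1:
            var += w**2 * d * (n1 / N) * (1 - n1 / N) * (N - d) / (N - 1)

    chi_sq = num**2 / var
    return chi_sq, chi2.sf(chi_sq, df=1)

# With kind="logrank" this reproduces the unweighted statistic shown earlier,
# e.g. weighted_logrank(times, events, groups, kind="wilcoxon").
```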