Markovketen

Markovketen, commonly known as a Markov chain, is a mathematical model of a stochastic process that describes a sequence of random states in which the probability of each next state depends only on the current state and not on past states. The model consists of a set of states and rules that determine the likelihood of moving from one state to another.

In discrete-time Markov chains, time advances in steps and the chain is described by a transition matrix P, where P_ij denotes the probability of transitioning from state i to state j in one step. The process satisfies the Markov property: the future state depends only on the present state, not on the past history. The initial distribution over states can be specified, after which the distribution over states evolves as the chain progresses. Continuous-time Markov chains use a rate matrix Q to describe the instantaneous transition rates between states.
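
As an illustration of the role of P, the sketch below (a hypothetical three-state chain with made-up probabilities, using NumPy) evolves an initial distribution by repeated multiplication with the transition matrix and simulates one sample path in which each next state is drawn only from the row of P belonging to the current state.

```python
# Minimal sketch of a discrete-time Markov chain; the 3-state matrix is invented
# for illustration. P[i, j] is the probability of moving from state i to state j,
# so each row sums to 1.
import numpy as np

P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

# Initial distribution over the three states (start in state 0 with certainty).
p = np.array([1.0, 0.0, 0.0])

# The distribution after t steps is p P^t; here it is evolved step by step.
for t in range(1, 6):
    p = p @ P
    print(f"step {t}: {np.round(p, 3)}")

# One simulated sample path: the next state depends only on the current state
# (the Markov property).
rng = np.random.default_rng(0)
state = 0
path = [state]
for _ in range(10):
    state = rng.choice(3, p=P[state])
    path.append(state)
print("sample path:", path)
```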

Key concepts include irreducibility, aperiodicity, and stationarity. If a finite Markov chain is irreducible and aperiodic, it has a unique stationary distribution π satisfying π = πP. Under mild conditions, the distribution of the chain converges to this stationary distribution regardless of the starting state. For continuous-time chains, a stationary distribution corresponds to a balance of inbound and outbound rates.
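
A minimal sketch of how such a stationary distribution can be computed in practice, again for an invented three-state matrix: repeatedly multiplying any starting distribution by P approximates π, which can then be checked against π = πP.

```python
# Sketch: approximate the stationary distribution of a (made-up) irreducible,
# aperiodic chain by power iteration, i.e. by computing p, pP, pP^2, ...
import numpy as np

P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

# Start from the uniform distribution; any starting distribution works here.
pi = np.ones(3) / 3
for _ in range(1000):
    pi = pi @ P

print("stationary distribution:", np.round(pi, 4))
print("pi = pi P holds approximately:", np.allclose(pi, pi @ P))

# For a continuous-time chain with rate matrix Q (rows summing to zero),
# the analogous balance condition is pi Q = 0.
```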

Common examples include simple weather models, queueing systems, and text or sequence models in natural language processing, where the next item depends only on the current item or a small set of states.
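
As a concrete version of the weather example, the sketch below uses two states, sunny and rainy, with invented transition probabilities, and simulates a short run of days in which tomorrow's weather depends only on today's.

```python
# Sketch of a two-state weather chain with made-up probabilities.
import numpy as np

states = ["sunny", "rainy"]
# P[i, j]: probability that tomorrow is states[j] given today is states[i].
P = np.array([
    [0.8, 0.2],  # sunny today -> mostly sunny tomorrow
    [0.4, 0.6],  # rainy today -> somewhat more likely to stay rainy
])

rng = np.random.default_rng(42)
today = 0  # start on a sunny day
days = [states[today]]
for _ in range(7):
    today = rng.choice(2, p=P[today])
    days.append(states[today])

print("simulated days:", days)
```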

Markov chains are widely used in statistics, computer science, finance, and operations research due to their tractable structure and well-developed theory, though they rely on the memoryless assumption that may not hold in all real-world situations.