Automata

Automata (singular: automaton) are abstract computational devices used to model and analyze computation and formal languages. An automaton consists of a finite set of states, an input alphabet, a transition function, a start state, and a set of accepting states. Depending on whether the transition function assigns each state and input symbol a single next state or a set of possible next states, an automaton is classified as deterministic or nondeterministic.
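
To make the definition concrete, the following minimal sketch (illustrative names, not from any particular library) encodes these five components as plain Python data for a DFA over the alphabet {0, 1} that accepts exactly the binary strings containing an even number of 1s, and simulates it on an input string.

    # The five components of a DFA, as plain data (illustrative example:
    # accept binary strings containing an even number of 1s).
    states = {"even", "odd"}
    alphabet = {"0", "1"}
    transitions = {  # (current state, input symbol) -> next state
        ("even", "0"): "even",
        ("even", "1"): "odd",
        ("odd", "0"): "odd",
        ("odd", "1"): "even",
    }
    start_state = "even"
    accepting_states = {"even"}

    def accepts(word: str) -> bool:
        """Run the DFA: follow exactly one transition per input symbol."""
        state = start_state
        for symbol in word:
            if symbol not in alphabet:
                raise ValueError(f"symbol {symbol!r} not in the alphabet")
            state = transitions[(state, symbol)]
        return state in accepting_states

    print(accepts("1001"))  # True: two 1s
    print(accepts("1011"))  # False: three 1s

Because the transition dictionary maps each (state, symbol) pair to exactly one next state, this automaton is deterministic; a nondeterministic version would map each pair to a set of possible next states.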

Several canonical models are studied. Finite automata (FAs) process finite strings and recognize the regular languages; deterministic finite automata (DFAs) and nondeterministic finite automata (NFAs) are equivalent in expressive power, though NFAs can be exponentially more compact. Pushdown automata (PDAs) extend finite automata with a stack and recognize the context-free languages. Linear-bounded automata and Turing machines are more powerful still: linear-bounded automata recognize the context-sensitive languages, and Turing machines recognize the recursively enumerable languages. Cellular automata are parallel, discrete systems with vast state spaces that evolve in discrete time steps according to local rules.
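
The DFA/NFA equivalence is usually shown by the subset construction, in which each DFA state represents the set of NFA states that could currently be active. The sketch below (the NFA and all names are hypothetical, chosen for illustration) converts a small NFA accepting strings that end in "01" into an equivalent DFA.

    # Hypothetical NFA over {0, 1} accepting strings that end in "01".
    # Nondeterministic: a (state, symbol) pair maps to a SET of next states.
    nfa_trans = {
        ("q0", "0"): {"q0", "q1"},
        ("q0", "1"): {"q0"},
        ("q1", "1"): {"q2"},
    }
    nfa_start, nfa_accepting = "q0", {"q2"}

    def subset_construction(trans, start, accepting, alphabet=("0", "1")):
        """Build an equivalent DFA whose states are frozensets of NFA states."""
        start_set = frozenset({start})
        dfa_trans, seen, todo = {}, {start_set}, [start_set]
        while todo:
            current = todo.pop()
            for symbol in alphabet:
                # The DFA successor is the union of all NFA moves.
                successor = frozenset(s for q in current
                                      for s in trans.get((q, symbol), ()))
                dfa_trans[(current, symbol)] = successor
                if successor not in seen:
                    seen.add(successor)
                    todo.append(successor)
        dfa_accepting = {s for s in seen if s & accepting}
        return dfa_trans, start_set, dfa_accepting

    dfa_trans, dfa_start, dfa_accepting = subset_construction(
        nfa_trans, nfa_start, nfa_accepting)

    def dfa_accepts(word):
        state = dfa_start
        for symbol in word:
            state = dfa_trans[(state, symbol)]
        return state in dfa_accepting

    print(dfa_accepts("1101"))  # True: ends in "01"
    print(dfa_accepts("0110"))  # False

In the worst case the construction produces 2^n DFA states for an n-state NFA, which is why NFAs can be exponentially more compact.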

Automata provide the machine side of formal language theory. A language is regular if it can be recognized by a finite automaton, and context-free if it can be recognized by a pushdown automaton; more complex languages require more powerful machines such as Turing machines. These correspondences form part of the Chomsky hierarchy. The equivalence of DFAs and NFAs is a foundational result, and regular expressions describe exactly the same class of languages as finite automata.
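
As an illustration of the extra power a stack provides, here is a minimal sketch of a deterministic pushdown automaton for the classic context-free language { 0^n 1^n : n >= 1 }, which no finite automaton can recognize; the two-phase control and the stack symbol "A" are illustrative choices.

    def pda_accepts(word: str) -> bool:
        """PDA sketch for { 0^n 1^n : n >= 1 }: push one stack symbol per
        leading 0, then pop one per 1; accept if the stack empties exactly
        when the input ends."""
        stack = []
        phase = "push"
        for symbol in word:
            if phase == "push":
                if symbol == "0":
                    stack.append("A")          # one stack symbol per 0 read
                elif symbol == "1" and stack:
                    stack.pop()                # first 1: start matching
                    phase = "pop"
                else:
                    return False
            else:  # phase == "pop"
                if symbol == "1" and stack:
                    stack.pop()
                else:
                    return False               # a 0 after a 1, or too many 1s
        return phase == "pop" and not stack    # every 0 matched by a 1

    print(pda_accepts("000111"))  # True
    print(pda_accepts("0101"))    # False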

Applications include lexical analysis and compilation, text search algorithms, protocol verification, model checking, and the design of digital circuits. Automata theory also informs areas such as formal verification and computational complexity.
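
To show how this plays out in text search, the sketch below builds a string-matching DFA for a fixed pattern so that a scanner touches each character of the text exactly once; the construction is deliberately naive rather than an optimized algorithm such as Knuth-Morris-Pratt, and all names are illustrative.

    def build_matcher(pattern: str, alphabet: str):
        """DFA where state k means: the last k characters read match the
        first k characters of the pattern."""
        m = len(pattern)
        delta = [{} for _ in range(m + 1)]
        for state in range(m + 1):
            for ch in alphabet:
                s = pattern[:state] + ch
                # Next state: longest pattern prefix that is a suffix of s.
                k = min(len(s), m)
                while k > 0 and s[-k:] != pattern[:k]:
                    k -= 1
                delta[state][ch] = k
        return delta

    def find_all(pattern: str, text: str, alphabet: str = "ab"):
        """Scan the text once, yielding start positions of matches."""
        delta = build_matcher(pattern, alphabet)
        state = 0
        for i, ch in enumerate(text):
            state = delta[state][ch]
            if state == len(pattern):
                yield i - len(pattern) + 1

    print(list(find_all("abab", "aababab")))  # [1, 3]

Lexer generators apply the same idea at scale, compiling a language's token rules into a single DFA that is run over the source text.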

History: the theory originated in the 1950s with work by Stephen Kleene and others, building on Alan Turing's earlier abstract machine as a general model of computation, and was later organized by Noam Chomsky's hierarchy of language classes. Today automata remain a core tool in theoretical computer science and related disciplines.
