BigONotation

Big-O notation, often written as O(...), is a mathematical concept used in computer science to describe an upper bound on the growth rate of an algorithm's running time or space usage as the input size n grows. It characterizes asymptotic behavior, focusing on how resource consumption scales rather than on exact values for finite inputs. A function T(n) is O(g(n)) if there exist constants c > 0 and n0 ≥ 1 such that T(n) ≤ c · g(n) for all n ≥ n0. Here, T(n) might represent time or memory, and g(n) is a simple growth function such as 1, log n, n, n log n, n^2, etc.

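To make the definition concrete, here is a small worked example; the polynomial T(n) = 3n^2 + 5n + 2 and the witnesses c = 4 and n0 = 6 are illustrative choices, not values taken from the text above.

```latex
% Illustrative worked instance of the definition:
% take T(n) = 3n^2 + 5n + 2 and g(n) = n^2, with c = 4 and n_0 = 6.
\[
  3n^2 + 5n + 2 \;\le\; 4n^2
  \quad\Longleftrightarrow\quad
  n^2 - 5n - 2 \;\ge\; 0,
\]
% which holds for every n \ge 6, so T(n) \le c \cdot g(n) for all n \ge n_0
% and therefore T(n) is O(n^2).
```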

Common examples include O(1) for constant time, O(log n) for logarithmic time, O(n) for linear time, O(n log n) for linearithmic time, and O(n^2) for quadratic time. These forms describe how running time grows in the worst case as input size increases. Big-O is an upper bound; it does not imply that all inputs take that long, only that the running time will not grow faster than c·g(n) beyond some n0.
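
For intuition, the sketch below gives one toy Python function per growth class listed above; the function names and arguments are invented for illustration and do not come from any particular library.

```python
# Minimal sketch: one toy function per common growth class.
# Names and arguments are illustrative, not part of any real API.
from bisect import bisect_left


def constant_first(items):
    # O(1): a single index operation, independent of len(items).
    return items[0]


def binary_search(sorted_items, target):
    # O(log n): each step halves the remaining search range.
    i = bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target


def linear_sum(items):
    # O(n): touches every element exactly once.
    total = 0
    for x in items:
        total += x
    return total


def sort_copy(items):
    # O(n log n): comparison-based sorting (Python's built-in Timsort).
    return sorted(items)


def count_inversions_naive(items):
    # O(n^2): nested loops examine every pair of positions.
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] > items[j]:
                count += 1
    return count
```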

Other related notations provide tighter or complementary information. Theta notation, Θ(g(n)), denotes a tight bound where T(n) grows at the rate of g(n) up to constant factors. Omega notation, Ω(g(n)), gives a lower bound. Together they can describe the precise asymptotic behavior of T(n).
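
A short worked illustration of a tight bound (with an invented cost function): the inequalities below sandwich T(n) = 2n^2 + 3n between constant multiples of n^2, giving both the Ω(n^2) and O(n^2) bounds and hence Θ(n^2).

```latex
% Illustrative tight bound: T(n) = 2n^2 + 3n is \Theta(n^2).
% For all n \ge 3,
\[
  1 \cdot n^2 \;\le\; 2n^2 + 3n \;\le\; 3 \cdot n^2 ,
\]
% so T(n) is \Omega(n^2) with c_1 = 1 and O(n^2) with c_2 = 3, taking n_0 = 3.
```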

Big-O is a tool for comparing algorithms and understanding scalability. It abstracts away constant factors and lower-order terms, but it has limitations: it may not reflect performance for small inputs or practical hardware effects.