BigO

Big-O notation, often written as O(f(n)), is a mathematical concept used in computer science to describe an upper bound on the growth rate of a function. In practice, it characterizes how the running time or space requirements of an algorithm grow as the size of the input, n, increases. It is part of asymptotic analysis and is used to compare algorithms independently of hardware and implementation details.
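
Written out formally (a standard textbook definition, stated here in LaTeX notation; it is not spelled out on this page), "g grows no faster than f" means:

    g(n) = O(f(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ g(n) \le c \cdot f(n) \ \text{for all}\ n \ge n_0.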

As an upper bound, Big-O describes growth that a function will not exceed (up to a constant factor, for sufficiently large n); the bound is most often quoted for worst-case behavior and need not be tight. For example, an O(1) algorithm takes constant time regardless of n; O(log n) grows logarithmically; O(n) grows linearly; O(n^2) grows quadratically. Common examples: binary search runs in O(log n), linear search in O(n), and many comparison-based sorts have average complexity O(n log n) and worst-case O(n^2) (e.g., QuickSort with consistently unbalanced pivots).
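
To make the search examples concrete, the following is a minimal Python sketch (the function names and step counters are illustrative additions, not part of this page); counting comparisons makes the O(n) versus O(log n) gap visible directly.

    # Linear search inspects elements one by one: O(n) comparisons in the worst case.
    def linear_search(items, target):
        steps = 0
        for index, value in enumerate(items):
            steps += 1
            if value == target:
                return index, steps
        return -1, steps

    # Binary search halves the sorted range each iteration: O(log n) comparisons.
    def binary_search(sorted_items, target):
        lo, hi, steps = 0, len(sorted_items) - 1, 0
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid, steps
            if sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, steps

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Worst case for linear search (target at the end): ~1,000,000 steps vs ~20 steps.
        print(linear_search(data, 999_999)[1])
        print(binary_search(data, 999_999)[1])

Doubling n roughly doubles the linear-search count but adds only about one step to the binary-search count, which is exactly the behavior the notation summarizes.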

To compute Big-O, one looks at the dominant term as n becomes large and ignores constant factors and lower-order terms. For instance, 3n^2 + 2n + 8 is O(n^2). The base of a logarithm is also disregarded, since log_a(n) = O(log_b(n)) for any bases a, b > 1.
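
As a worked version of the 3n^2 + 2n + 8 example (the constants c and n_0 below are one valid choice among many, written in LaTeX notation):

    3n^2 + 2n + 8 \le 3n^2 + 2n^2 + 8n^2 = 13n^2 \quad \text{for all } n \ge 1

so c = 13 and n_0 = 1 witness 3n^2 + 2n + 8 = O(n^2). The logarithm rule rests on the change-of-base identity:

    \log_a n = \frac{\log_b n}{\log_b a}

which differs from log_b n only by the constant factor 1 / log_b a.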

Big-O is one of several asymptotic notations, alongside Big-Omega (a lower bound) and Big-Theta (a tight bound). It supports high-level reasoning about scalability, but it does not capture constant factors, typical inputs, or practical performance on real machines.
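
For reference, the companion notations named above have the standard definitions (written in LaTeX notation; they are not spelled out on this page):

    g(n) = \Omega(f(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ g(n) \ge c \cdot f(n) \ \text{for all}\ n \ge n_0
    g(n) = \Theta(f(n)) \iff g(n) = O(f(n)) \ \text{and}\ g(n) = \Omega(f(n))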
