Vektorrepresentation

Vektorrepresentation, or vector representation, is the expression of an object as an element of a vector space using a coordinate vector or a set of features. In mathematics, a vector is an ordered n-tuple that can be added to other vectors and scaled by scalars; the specific coordinates depend on the chosen basis for the space.

A vector can be written as v = c_1 b_1 + ... + c_n b_n, where B = {b_1, ..., b_n} is a basis and c_1, ..., c_n are the coordinates. The same vector has different coordinates in different bases, related by a linear transformation represented by a change-of-basis matrix. This highlights that a vector's numerical representation is relative to the chosen basis and scale.
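
As an illustration of the formula above, the following NumPy sketch builds a vector from its coordinates in a chosen basis and then recovers those coordinates by solving against the basis matrix; the basis vectors and coordinate values are arbitrary choices made for this example.

```python
import numpy as np

# Columns of B are the basis vectors b_1 = (1, 0) and b_2 = (1, 1).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
c = np.array([2.0, 3.0])       # coordinates of v with respect to B

# v = c_1 b_1 + c_2 b_2, computed as a matrix-vector product.
v = B @ c                      # coordinates of v in the standard basis

# Recover the B-coordinates from the standard ones by solving B c = v.
c_recovered = np.linalg.solve(B, v)
print(v)            # [5. 3.]
print(c_recovered)  # [2. 3.]
```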

In computer science and data analysis, objects are often represented as vectors of numbers, called feature vectors or embeddings. Common representations include dense vectors and sparse vectors (for example, one-hot encodings). Learned embeddings, such as word, sentence, or graph embeddings, capture semantic or structural information in a continuous vector space.
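
The difference between sparse one-hot vectors and dense embeddings can be made concrete with a small sketch; the toy vocabulary and the embedding values below are invented purely for illustration.

```python
import numpy as np

vocab = ["cat", "dog", "car"]          # toy vocabulary

def one_hot(word: str) -> np.ndarray:
    """Sparse representation: a single 1 at the word's index, zeros elsewhere."""
    vec = np.zeros(len(vocab))
    vec[vocab.index(word)] = 1.0
    return vec

# Dense representation: an embedding table; real systems learn these values.
embedding_table = np.array([[ 0.2, -0.5,  0.1],   # "cat"
                            [ 0.3, -0.4,  0.0],   # "dog"
                            [-0.8,  0.9,  0.6]])  # "car"

print(one_hot("dog"))                        # [0. 1. 0.]  (sparse)
print(embedding_table[vocab.index("dog")])   # dense vector of real numbers
```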

Vector representations enable standard operations: addition, scalar multiplication, inner products, norms, and distance measures, as well as the action of linear maps via matrices. Changing basis corresponds to multiplying by a matrix, and metric properties can depend on the chosen inner product.
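
The following NumPy sketch runs through these operations on arbitrary example vectors; the matrix A stands in for a generic linear map, and M defines one possible alternative inner product (both are illustrative choices, not canonical ones).

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

w = u + 2.0 * v                 # vector addition and scalar multiplication
dot = u @ v                     # standard inner product: 11.0
norm_u = np.linalg.norm(u)      # Euclidean norm: 3.0
dist = np.linalg.norm(u - v)    # Euclidean distance between u and v

# A linear map acts on coordinate vectors by matrix multiplication.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
Au = A @ u                      # image of u under the map: [-2.  1.  2.]

# A weighted inner product <u, v>_M = u^T M v (M symmetric positive definite)
# gives different lengths and angles, so metric properties depend on this choice.
M = np.diag([1.0, 4.0, 0.25])
dot_M = u @ M @ v               # 5.0
```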

Applications span similarity search, clustering, regression, and dimensionality reduction. The choice of representation influences algorithm performance and interpretability. For example, a point in three-dimensional space has coordinates (2, -1, 5) in the standard basis, but a different basis yields different coordinates that describe the same point. In machine learning, vector representations are fundamental for processing and learning from data.
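
As a sketch of the first application listed above, a minimal cosine-similarity search over a handful of stored vectors could look as follows; the stored vectors, the query, and the helper name cosine are all made up for this example.

```python
import numpy as np

# Toy "database" of embeddings and a query vector.
database = np.array([[0.9, 0.1, 0.0],
                     [0.0, 1.0, 0.2],
                     [0.7, 0.3, 0.1]])
query = np.array([0.8, 0.2, 0.0])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: inner product of the two vectors after L2 normalisation."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(row, query) for row in database]
best = int(np.argmax(scores))
print(best, scores[best])   # index and score of the most similar stored vector
```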