Context vectors
Context vectors are numerical representations of the linguistic context in which a word or phrase occurs. In distributional semantics, the meaning of a term is inferred from its surrounding words, following the distributional hypothesis that words appearing in similar contexts tend to have similar meanings. A context vector typically contains counts or weighted frequencies of neighboring words within a fixed window around the target term, or it may encode embeddings derived from neural models.
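A count-based context vector can be sketched as follows. This is a minimal illustration, not a reference implementation: it collects raw counts of neighboring words within a fixed window around each target term, as described above. The function name `context_vectors` and the toy corpus are chosen here for demonstration.

```python
from collections import Counter, defaultdict

def context_vectors(tokens, window=2):
    """Count neighboring words within a fixed window around each target term."""
    vectors = defaultdict(Counter)
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # skip the target position itself
                vectors[target][tokens[j]] += 1
    return vectors

tokens = "the cat sat on the mat the dog sat on the rug".split()
vecs = context_vectors(tokens, window=2)
print(vecs["sat"])  # → Counter({'the': 4, 'on': 2, 'cat': 1, 'dog': 1})
```

Each target word thus maps to a sparse vector over its observed context words; larger corpora and wider windows yield richer but noisier vectors.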
Count-based methods build a high-dimensional sparse vector from raw co-occurrence statistics, often normalized by techniques such as pointwise mutual information (PMI) or TF-IDF weighting, which downweight frequent but uninformative context words.
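The PMI normalization mentioned above can be sketched as follows. This assumes co-occurrence counts are stored as a dictionary mapping each word to a `Counter` of its context words; the function computes positive PMI (PPMI), the common variant that clips negative scores to zero.

```python
import math
from collections import Counter

def ppmi(cooc):
    """Positive PMI weights from a {word: Counter(context)} co-occurrence table."""
    total = sum(sum(c.values()) for c in cooc.values())          # N, all pairs
    word_tot = {w: sum(c.values()) for w, c in cooc.items()}     # count(w)
    ctx_tot = Counter()                                          # count(c)
    for c in cooc.values():
        ctx_tot.update(c)
    weighted = {}
    for w, c in cooc.items():
        weighted[w] = {
            # PMI(w, c) = log2( P(w, c) / (P(w) * P(c)) ), clipped at 0
            ctx: max(0.0, math.log2((n * total) / (word_tot[w] * ctx_tot[ctx])))
            for ctx, n in c.items()
        }
    return weighted

cooc = {"a": Counter({"x": 2, "y": 2}), "b": Counter({"x": 4})}
print(ppmi(cooc))
```

Clipping at zero keeps the vectors sparse: only contexts observed more often than chance would predict receive a nonzero weight.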
Context vectors are used for a range of NLP tasks: they can be compared to compute lexical similarity, support clustering of words into semantic classes, and serve as features for tasks such as word sense disambiguation and information retrieval.
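Lexical similarity between two context vectors is typically measured with cosine similarity. The sketch below assumes sparse vectors represented as plain dictionaries of weights; the example words and weights are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as {feature: weight} dicts."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

cat = {"pet": 3.0, "fur": 2.0, "meow": 4.0}
dog = {"pet": 3.0, "fur": 2.0, "bark": 4.0}
print(round(cosine(cat, dog), 3))  # → 0.448
```

Because cosine depends only on the angle between vectors, it is insensitive to overall frequency differences between words, which is usually what similarity comparisons want.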
While powerful, context vectors have limitations. Sparse count vectors suffer from high dimensionality and data sparsity, whereas dense neural embeddings are more compact but harder to interpret, since their individual dimensions do not correspond to identifiable context words.