Surprisal models
Surprisal models (German: SurprisalModelle) are a family of computational and theoretical models used in psycholinguistics and cognitive science to link language input to processing difficulty through surprisal, the information content of a linguistic unit given its context. Surprisal is defined as the negative logarithm of the probability of a word conditioned on its preceding context. In these models, the processing cost of a word is assumed to increase with its surprisal, predicting longer reading times or slower processing when a word is less expected.
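The definition above can be sketched directly in code. This minimal example computes surprisal in bits from a conditional probability; the probability value is a made-up illustration, not drawn from any particular language model.

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(prob)

# A word with conditional probability 0.25 carries 2 bits of surprisal;
# halving the probability adds one bit, so rarer words cost more.
print(surprisal(0.25))  # → 2.0
print(surprisal(0.125))  # → 3.0
```

Using base-2 logarithms gives surprisal in bits; the natural logarithm (nats) is equally common, and the choice only rescales the values by a constant factor.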
Origin and variants: The approach draws from information theory and probabilistic language modeling, with influential work by John Hale (2001), who proposed surprisal as a measure of incremental parsing difficulty, and Roger Levy (2008), who developed an expectation-based theory of sentence processing around it.
Estimation and application: P(word|context) is estimated from large corpora or trained language models, allowing surprisal to be computed for every word in a text and compared against behavioral measures such as self-paced reading times or eye-tracking data.
Limitations: The predictive performance of surprisal models depends on the quality of the language model and the corpus used for estimation. Surprisal also captures only expectation-based difficulty, so it does not by itself account for other sources of processing cost, such as memory and integration demands.