Expectation-based Semantics in Language Comprehension
The processing difficulty of each word we encounter in a sentence is
affected by both our prior linguistic experience and our general knowledge
about the world. Computational models of incremental language processing
have, however, been limited in accounting for the influence of world
knowledge. We develop an incremental model of language comprehension that
integrates linguistic experience and world knowledge at the level of
utterance interpretation. To this end, our model constructs, on a
word-by-word basis, rich, distributed representations that capture
utterance meaning in terms of propositional co-occurrence across formal
model structures. These representations implement a Distributional Formal
Semantics and are inherently compositional and probabilistic, capturing
both entailment and probabilistic inference.
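To make this concrete, here is a minimal sketch of such representations (in Python; the toy propositions, probabilities, and function names are invented for illustration and are not the authors' implementation). A proposition is encoded as a binary vector over a sampled set of formal model structures, and probability, composition, inference, and entailment all fall out of the vector encoding:

    import numpy as np

    # A proposition is a binary vector over M sampled model structures:
    # component i is 1 iff the proposition holds in structure i.
    rng = np.random.default_rng(0)
    M = 10_000

    # Toy world knowledge: order(food) is likely in structures where
    # enter(restaurant) holds, and unlikely otherwise.
    enter = rng.random(M) < 0.5
    order = np.where(enter, rng.random(M) < 0.8, rng.random(M) < 0.1)

    def prob(v):
        # Probability: the proportion of structures in which v holds.
        return v.mean()

    def conj(a, b):
        # Composition: conjunction is pointwise intersection of vectors.
        return a & b

    def cond_prob(a, b):
        # Probabilistic inference: P(a | b) across model structures.
        return prob(conj(a, b)) / prob(b)

    def entails(a, b):
        # Entailment: a entails b iff b holds wherever a holds.
        return bool(np.all(~a | b))

    print(prob(order))               # prior: P(order(food))
    print(cond_prob(order, enter))   # P(order(food) | enter(restaurant))
    print(entails(conj(enter, order), enter))  # True: conjunctions entail their conjuncts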
To quantify linguistic processing effort in the model, we adopt Surprisal
Theory, which asserts that the processing difficulty incurred by a word is
proportional to its surprisal: the less expected a word is in context, the
more effort its processing requires.
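In its standard word-level form, which comes from the surprisal literature and is not specific to our model, the surprisal of word w_t is its negative log-probability given the preceding words:

    S(w_t) = -\log P(w_t \mid w_1, \ldots, w_{t-1})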
In contrast with typical language model implementations of surprisal, our
model instantiates it as a comprehension-centric metric that reflects the
likelihood of the unfolding utterance meaning as established after
processing each word.
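Assuming the comprehension state after word t is itself a meaning vector v_t of the kind sketched above (an illustrative assumption; the exact formalization is not spelled out here), comprehension-centric surprisal can be sketched as the negative log-probability of the updated utterance meaning given the meaning established so far:

    import numpy as np

    def surprisal(v_t, v_prev):
        # Comprehension-centric surprisal (sketch): -log P(v_t | v_prev),
        # i.e. how unexpected the updated utterance meaning is given the
        # meaning established before the current word.
        return -np.log((v_t & v_prev).mean() / v_prev.mean())

    # With the toy vectors from the sketch above:
    # surprisal(enter & order, enter) is low, because order(food) is
    # expected once enter(restaurant) holds in this toy world.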
I will present simulations that illustrate how the model captures
processing effects associated with various semantic phenomena, such as
presupposition, quantification, and reference resolution, and how
linguistic experience and world knowledge combine in determining online
expectations. Finally, I will discuss the implications of our approach for
neurocognitive theories and models of language comprehension.