Computational Pragmatics

Computational pragmatics investigates how context, speaker intentions, and conversational principles shape meaning beyond literal semantics, developing formal and statistical models of language use in context.

M_pragmatic = f(M_semantic, Context, Intentions)

Pragmatics studies the aspects of meaning that arise from language use in context rather than from the linguistic expressions themselves. Computational pragmatics aims to formalize and implement these context-dependent meaning processes, addressing phenomena such as speech acts, implicature, presupposition, reference resolution, and conversational grounding. This subfield bridges theoretical linguistics and artificial intelligence, seeking to endow natural language processing systems with the ability to interpret and generate language as humans do — not merely decoding literal content, but reasoning about speaker intentions, shared knowledge, and communicative goals.

Gricean Foundations

Rational Speech Act Model

Literal listener: P_L₀(w | u) ∝ δ_{⟦u⟧(w)} · P(w)

Pragmatic speaker: P_S₁(u | w) ∝ exp(α · log P_L₀(w | u) − C(u))

Pragmatic listener: P_L₁(w | u) ∝ P_S₁(u | w) · P(w)

where α is the speaker optimality parameter and C(u) is the utterance cost

Paul Grice's (1975) theory of conversational implicature provides the philosophical foundation for much of computational pragmatics. Grice proposed that conversation is governed by a Cooperative Principle and associated maxims (Quantity, Quality, Relation, Manner). Speakers sometimes flout these maxims to convey meanings beyond what they literally say, and listeners use the assumption of cooperation to infer these intended meanings. The Rational Speech Act (RSA) framework, developed by Frank and Goodman (2012), formalizes Gricean reasoning as recursive Bayesian inference between speakers and listeners, providing a computational implementation of pragmatic interpretation.
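The RSA recursion can be sketched in the kind of reference game Frank and Goodman (2012) studied: a listener must identify one of several objects from a one-word description. The object inventory, utterance set, and α value below are illustrative assumptions, not the paper's exact stimuli.

```python
from math import exp, log

# Illustrative reference game: three objects, four one-word utterances.
OBJECTS = ["blue_square", "blue_circle", "green_square"]
UTTERANCES = ["blue", "green", "square", "circle"]

def true_of(u, o):
    """Literal semantics: does utterance u truthfully describe object o?"""
    return u in o  # e.g. "blue" is a substring of "blue_square"

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(u):
    """Literal listener: uniform over the objects the utterance is true of."""
    return normalize({o: float(true_of(u, o)) for o in OBJECTS})

def S1(o, alpha=1.0):
    """Pragmatic speaker: soft-maximizes informativity to L0 (zero utterance cost)."""
    scores = {}
    for u in UTTERANCES:
        p = L0(u)[o]
        scores[u] = exp(alpha * log(p)) if p > 0 else 0.0
    return normalize(scores)

def L1(u, alpha=1.0):
    """Pragmatic listener: Bayesian inversion of the speaker, uniform object prior."""
    return normalize({o: S1(o, alpha)[u] for o in OBJECTS})
```

Hearing "blue", the pragmatic listener favors the blue square (probability 3/5 at α = 1): a speaker referring to the blue circle had the unambiguous "circle" available, so choosing "blue" is evidence against that referent.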

Formal Models of Pragmatic Inference

The RSA framework models communication as a game between a pragmatic speaker and a pragmatic listener, each reasoning about the other. The literal listener interprets utterances according to their truth-conditional semantics. The pragmatic speaker chooses utterances to be informative to the literal listener while minimizing effort. The pragmatic listener inverts the speaker model using Bayes' rule. This recursive reasoning captures scalar implicature (interpreting "some" as "not all"), referential pragmatics (choosing the most informative description), and other phenomena. Extensions handle uncertainty about the speaker's goals, multiple levels of recursion, and social meaning.
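As a concrete sketch of the recursion, the following minimal model derives the "some but not all" reading of "some". The three-utterance lexicon, world space, and α value are illustrative assumptions.

```python
import math

# Illustrative domain: how many of 3 items have some property.
WORLDS = [0, 1, 2, 3]
UTTERANCES = ["none", "some", "all"]

def meaning(u, w):
    """Literal, truth-conditional semantics: 'some' is compatible with 'all'."""
    return {"none": w == 0, "some": w >= 1, "all": w == 3}[u]

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def literal_listener(u):
    # P_L0(w|u) ∝ δ_[[u]](w) · P(w), with a uniform prior P(w)
    return normalize({w: float(meaning(u, w)) for w in WORLDS})

def speaker(w, alpha=4.0):
    # P_S1(u|w) ∝ exp(α · log P_L0(w|u) − C(u)), cost taken as zero here
    scores = {}
    for u in UTTERANCES:
        p = literal_listener(u)[w]
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    return normalize(scores)

def pragmatic_listener(u, alpha=4.0):
    # P_L1(w|u) ∝ P_S1(u|w) · P(w)
    return normalize({w: speaker(w, alpha)[u] for w in WORLDS})
```

The literal listener gives "some" probability 1/3 in the all-true world, but the pragmatic listener gives it far less (about 0.006 at α = 4), because a cooperative speaker in that world would have said "all": the scalar implicature emerges from the recursion rather than from the lexicon.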

Pragmatics in Large Language Models

Large language models (LLMs) exhibit intriguing pragmatic behavior despite being trained only on text prediction. Studies have shown that GPT-class models can resolve scalar implicatures, interpret indirect speech acts, and generate contextually appropriate responses. However, they struggle with non-literal language requiring theory of mind, such as irony detection and politeness interpretation. Whether LLMs genuinely perform pragmatic reasoning or merely pattern-match on distributional cues is a central question in computational pragmatics, with implications for both AI development and theories of human pragmatic competence.

Applications in NLP

Computational pragmatics informs a wide range of NLP applications. Dialogue systems must interpret user intentions beyond literal utterances — a user saying "It's cold in here" may be requesting that the temperature be raised rather than simply reporting a fact. Information extraction systems must resolve pragmatic phenomena like hedging and vagueness to accurately identify asserted facts. Machine translation must preserve pragmatic force across languages, translating not just what is said but what is meant. Sentiment analysis systems must distinguish between literal and ironic uses of evaluative language.

Recent work in computational pragmatics has increasingly focused on grounded, multimodal settings where language use is embedded in shared physical or visual context. Pragmatic models of reference in image captioning, instruction following in embodied environments, and collaborative task completion in dialogue all require reasoning about mutual knowledge, perspective-taking, and communicative intent. These settings provide rigorous testbeds for computational models of pragmatics and highlight the gap between current NLP capabilities and human-like language understanding.

References

  1. Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and Semantics, Vol. 3: Speech Acts (pp. 41–58). Academic Press.
  2. Frank, M. C., & Goodman, N. D. (2012). Predicting pragmatic reasoning in language games. Science, 336(6084), 998. doi:10.1126/science.1218633
  3. Goodman, N. D., & Frank, M. C. (2016). Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20(11), 818–829. doi:10.1016/j.tics.2016.08.005