Probing the Situational Reasoning Capabilities of ChatGPT

Salfinger A.; Snidaro L.
2025-01-01

Abstract

Information exchanged in naturalistic human communication is implicitly grounded in its situational context. In particular, messages exchanged via social media about ongoing events, such as large-scale crisis events, often assume that the actual situational context is shared by the correspondents and is thus not made explicit in the message text itself. Since these messages cannot be accurately interpreted without factoring in this situational context, Natural Language Processing (NLP) in these domains is challenging. However, the breakthrough capabilities in fine-grained contextual understanding and Natural Language Inference (NLI) of recently introduced Large Language Models (LLMs) suggest novel avenues for tackling this problem. In the present work, we therefore analyze current LLMs' situation understanding and situational inference capabilities, seeking to answer the question: how well do LLMs understand situational context? We contribute i) a formalization of situational context as a conditioning factor affecting the outcome of the target task, and ii) an empirical examination of formulating this situation conditioning as a prompt engineering problem, explored on the target task of Named Entity Recognition (NER) for social media analysis in crisis computing.
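
To make the idea of situation conditioning concrete, one can read it as shifting the target task from estimating P(y | x) to P(y | x, c), where x is the message text, y the NER labels, and c the situational context injected into the prompt. The following Python sketch is purely illustrative and hedged: build_ner_prompt, its template wording, and the entity types are assumptions for demonstration, not the authors' actual prompts or formalization.

    # A minimal, hypothetical sketch of situation conditioning as prompt
    # engineering for NER. The template below is an illustrative assumption,
    # not the paper's actual prompt.
    def build_ner_prompt(message, situation=""):
        """Compose an NER prompt, optionally conditioned on situational context."""
        instruction = ("Extract all named entities (PERSON, LOCATION, ORGANIZATION) "
                       "from the following social media message and list them "
                       "as (entity, type) pairs.")
        # Prepend the situational context c, if available, so the LLM can
        # ground ambiguous mentions in the ongoing event.
        context = "Situational context: " + situation + "\n" if situation else ""
        return context + instruction + "\nMessage: " + message

    # The same ambiguous message, without and with situational grounding:
    # "Harvey" reads as a PERSON in isolation, but as a hurricane given context.
    msg = "Harvey is moving fast, stay away from Buffalo Bayou!"
    print(build_ner_prompt(msg))  # unconditioned baseline, P(y | x)
    print(build_ner_prompt(msg,   # situation-conditioned, P(y | x, c)
          "Hurricane Harvey flooding, Houston, TX, August 2017"))

The design choice this sketch illustrates is that the conditioning factor c is supplied purely through the prompt, so the same frozen LLM can be evaluated with and without situational grounding.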

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1312528