Probing the Consistency of Situational Information Extraction with Large Language Models: A Case Study on Crisis Computing
Snidaro L.
2024-01-01
Abstract
Recently introduced foundation models for language modeling, known as Large Language Models (LLMs), have demonstrated breakthrough capabilities in text summarization and contextualized natural language processing. However, they also suffer from inherent deficiencies, such as the occasional generation of factually incorrect information, known as hallucinations, and weak consistency of produced answers, which vary strongly with the exact phrasing of the input query, i.e., the prompt. This raises the question of whether and how LLMs could replace or complement traditional information extraction and fusion modules in information fusion pipelines with textual input sources. We empirically examine this question in a case study from crisis computing, drawn from the established CrisisFacts benchmark dataset, by probing an LLM's situation understanding and summarization capabilities on the target task of extracting information relevant to establishing crisis situation awareness from social media corpora. Since social media messages are exchanged in real time and typically target human readers aware of the situational context, this domain is a prime testbed for evaluating LLMs' situational information extraction capabilities. In this work, we specifically investigate the consistency of extracted information across different model configurations and across different but semantically similar prompts, a crucial prerequisite for a reliable and trustworthy information extraction component.
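The consistency probe described in the abstract can be illustrated with a minimal sketch: the same extraction request is phrased in several semantically similar ways, and the pairwise agreement of the extracted item sets is measured, here with mean Jaccard similarity. The `query_llm` function, the example prompts, and the simulated outputs are hypothetical stand-ins for a real LLM call, not part of the paper's actual setup.

```python
from itertools import combinations

def query_llm(prompt: str) -> set[str]:
    """Hypothetical placeholder for an LLM extraction call.

    Simulates slightly varying sets of extracted facts for
    semantically similar prompt paraphrases.
    """
    simulated = {
        "List the key facts about the wildfire.":
            {"road closed", "evacuation ordered", "shelter open"},
        "Which facts about the wildfire are reported?":
            {"road closed", "evacuation ordered"},
        "Summarize the reported wildfire facts.":
            {"road closed", "evacuation ordered", "shelter open"},
    }
    return simulated[prompt]

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity of two sets (1.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def consistency(prompts: list[str]) -> float:
    """Mean pairwise Jaccard similarity of extractions across paraphrases."""
    answers = [query_llm(p) for p in prompts]
    pairs = list(combinations(answers, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

prompts = [
    "List the key facts about the wildfire.",
    "Which facts about the wildfire are reported?",
    "Summarize the reported wildfire facts.",
]
score = consistency(prompts)  # 1.0 would mean perfectly consistent extractions
```

A score close to 1.0 indicates that the paraphrased prompts yield the same extracted information; lower scores expose the phrasing sensitivity the paper investigates.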
| File | Size | Format |
|---|---|---|
| Probing the Consistency of Situated Information.pdf | 200.22 kB | Adobe PDF |

Type: Post-print document
License: Not public
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


