Dawn of LLM4Cyber: Current Solutions, Challenges, and New Perspectives in Harnessing LLMs for Cybersecurity

Ritacco E.
2024-01-01

Abstract

Large Language Models (LLMs) are now a relevant part of the daily experience of many individuals. For instance, they can be used to generate text or to support work activities, such as programming tasks. However, LLMs can also give rise to a multifaceted array of security issues. This paper discusses the research activity on LLMs carried out by the ICAR-IMATI group. Specifically, within the framework of three funded projects, it presents our ideas on how to determine whether data has been generated by a human or a machine, track the use of information ingested by models, combat misinformation and disinformation, and boost cybersecurity via LLM-capable tools.
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1291566
Citations
  • Scopus: 0