Dawn of LLM4Cyber: Current Solutions, Challenges, and New Perspectives in Harnessing LLMs for Cybersecurity
Ritacco E.
2024-01-01
Abstract
Large Language Models (LLMs) are now a relevant part of the daily experience of many individuals. For instance, they can be used to generate text or to support work activities such as programming tasks. However, LLMs can also give rise to a multifaceted array of security issues. This paper discusses the research activity on LLMs carried out by the ICAR-IMATI group. Specifically, within the framework of three funded projects, it presents our ideas on how to determine whether data has been generated by a human or a machine, track the use of information ingested by models, combat misinformation and disinformation, and boost cybersecurity via LLM-capable tools.