An Epistemic Logic for Modular Development of Multi-Agent Systems
Formisano A.
2022-01-01
Abstract
Logic has proved useful for modeling various aspects of the reasoning process of agents and Multi-Agent Systems (MAS). In this paper, we report on the latest advances in a line of work aimed at exploring the social aspects of such systems. The objective is to formally model (aspects of) the group dynamics of cooperative agents. We have proposed, and here extend, a particular logical framework (the Logic of “Inferable”, L-DINF), in which a group of cooperative agents can jointly perform actions: at least one agent of the group can perform the action, either with the approval of the group or on behalf of the group. We take into consideration the cost of actions and the preferences that each agent may have concerning the performance of each action. Our focus here is on: (i) explainability, i.e., the syntax of our logic is specifically devised to make it possible to transpose a proof into a natural-language explanation, with a view to trustworthy Artificial Intelligence; (ii) the capability to construct and execute joint plans within a group of agents; (iii) the formalization of aspects of the Theory of Mind, an important social-cognitive skill involving the ability to attribute mental states, including emotions, desires, beliefs, and knowledge, to oneself and to others, and to reason about the practical consequences of such mental states; this capability is highly relevant when agents have to interact with humans, in particular in robotic applications; (iv) the connection between theory and practice, so as to make our logic actually usable by system designers.
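The abstract describes the group mechanism only informally: a joint action is carried out by at least one member, with the group's approval or on its behalf, taking the action's cost and the agents' individual preferences into account. Purely as an illustrative sketch of that kind of delegation, and not as the paper's actual L-DINF semantics, the following Python fragment shows one plausible selection rule; the class names, the budget field, and the preference-maximizing choice are all assumptions introduced here.

```python
# Hypothetical sketch (not from the paper): one way a group of cooperative
# agents might delegate an action to a single member, taking into account
# each agent's budget (to cover the action's cost) and preferences,
# in the spirit of the group dynamics the abstract describes.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    budget: float                                     # resources available to pay action costs
    capabilities: set = field(default_factory=set)    # actions the agent can perform
    preferences: dict = field(default_factory=dict)   # action -> preference score

@dataclass
class Group:
    members: list

    def choose_executor(self, action: str, cost: float):
        """Pick the member who can perform `action`, can afford its cost,
        and prefers it most; return None if nobody qualifies."""
        candidates = [a for a in self.members
                      if action in a.capabilities and a.budget >= cost]
        if not candidates:
            return None
        return max(candidates, key=lambda a: a.preferences.get(action, 0.0))

    def perform(self, action: str, cost: float) -> bool:
        """The action counts as performed by the group if at least one
        member executes it on the group's behalf."""
        executor = self.choose_executor(action, cost)
        if executor is None:
            return False
        executor.budget -= cost        # the executing agent pays the cost
        print(f"{executor.name} performs '{action}' on behalf of the group")
        return True

if __name__ == "__main__":
    g = Group([
        Agent("alice", budget=10, capabilities={"open_door"}, preferences={"open_door": 0.9}),
        Agent("bob",   budget=3,  capabilities={"open_door"}, preferences={"open_door": 0.4}),
    ])
    g.perform("open_door", cost=5)
```

In this toy run, alice is selected because she can perform the action, can afford its cost, and prefers it most; the group's approval protocol and the logical machinery of L-DINF itself are abstracted away.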