
### Logical Predicates as First-Class Citizens in LF

*Furio Honsell; Marina Lenisa; Ivan Scagnetto*

##### 2011-01-01

#### Abstract

The Edinburgh Logical Framework LF was introduced both as a general theory of logics and as a metalanguage for a generic proof development environment. In this paper, we consider an extension of LF, called the Logical-Logical Framework LLF, that features predicates as first-class citizens that can be reified and passed as arguments, as a special kind of functional object. These functional objects return the argument under the condition that it satisfies a logical predicate, or simply freeze the argument until this predicate becomes true. We study the language theory of LLF and provide proofs for subject reduction, confluence, and strong normalization. By way of example, we illustrate how special instances of LLF allow for encodings of side conditions of Modal Logics in Hilbert style and a direct encoding of pre- and post-conditions on the input and output of a theory of functions. Our motivation for introducing LLF is twofold. First, the type system of LF is too coarse with respect to the “side conditions” that it can enforce on the application of rules. Second, almost all logics are encodable in LF (with a greater or lesser degree of difficulty, depending on the choice of logic) but never participate actively in the reductions. This last point requires further attention: by the Curry-Howard isomorphism, types are formulæ (theorems) and λ-terms are logical proofs of the validity of the formulæ (or the proof of the theorem). LLF points the “periscope” of this 30-year-old principle in another direction: λ-terms can also be formulæ whose proof is not encoded in LLF itself but “externalized”, via an external call, to a logical system from which we request the verification of the truth of the formula itself. Following Curry-Howard, one can view this externalization as an “oracle call”, which will be typed with a suitable type representing the oracle itself. 
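The “freeze until the predicate holds” behaviour described above can be sketched operationally. This is not LLF syntax: the class `Guarded`, the method `step`, and the oracle `is_even` are illustrative assumptions standing in for a guarded redex, a reduction step, and an external oracle call.

```python
class Guarded:
    """Sketch of a guarded redex (lambda x. body) arg, gated by an
    external predicate playing the role of the oracle call."""
    def __init__(self, predicate, body, arg):
        self.predicate = predicate  # the external "oracle"
        self.body = body            # function awaiting the argument
        self.arg = arg              # frozen argument

    def step(self):
        # Reduction fires only if the oracle validates the argument;
        # otherwise the redex stays stuck and is returned unchanged.
        if self.predicate(self.arg):
            return self.body(self.arg)
        return self

is_even = lambda n: n % 2 == 0
stuck = Guarded(is_even, lambda x: x + 1, 3)
print(stuck.step() is stuck)  # the redex remains frozen: 3 is odd
print(Guarded(is_even, lambda x: x + 1, 4).step())  # fires: 5
```

A stuck `Guarded` value can be re-stepped later, once the external state the predicate consults has changed, which mirrors the reactive reading of stuck reduction given below.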
Historically, the idea of having stuck reduction in objects and types in the setting of higher-order term rewriting systems with sophisticated pattern-matching capabilities was first introduced in Cirstea-Kirchner-Liquori’s Rho-cube, in order to design a hierarchy of type systems for the untyped Rewriting Calculus, and was later generalized to a framework of Pure Type Systems with Patterns. This typing protocol was essential in the effort to preserve the strong normalization of typable terms. The idea underlying the Logical-Logical Framework LLF is the same as that exploited in the General Logical Framework GLF. However, there is an important difference between the two frameworks in the definition of predicates. Predicates in GLF are used both to determine whether β-reduction fires and to compute a substitution, while in the present paper they are truly first-class objects. A further attempt was made with the Constrained Logical Framework CLF by the same authors. The big difference between the two frameworks is that reduction was typed in CLF, while it is untyped in LLF, and that CLF relies on an infinite number of binders λP, each of which encoded one logical predicate, while the predicates in LLF are first-class objects declared in contexts: this latter point has a direct impact on the view of LLF as a kernel for proof assistants, since we do not need to “parse” the metalanguage each time we add a new predicate. Apart from encodings of Modal Logics, we believe that our Logical-Logical Framework could also be very helpful in modeling dynamic and reactive systems: for example, bio-inspired systems, where reactions of chemical processes take place only when some extra structural or temporal conditions hold; or process algebras, where often no assumptions can be made about the messages exchanged through the communication channels. 
Indeed, it could be the case that a redex, depending on the result of a communication, remains stuck until a “good” message arrives from a given channel, in which case an appropriate reduction fires (this is a common situation in many protocols, where “bad” requests are ignored and “good” ones are served). Such dynamic (run-time) behaviour could hardly be captured by a rigid type discipline, where bad terms and hypotheses are ruled out a priori. Another interesting improvement is to be found in proving program correctness. It is well known that encoding pre- and post-conditions in LF is not straightforward: nevertheless, despite the theoretical difficulty, and driven by the importance of writing code which can be proven to be “correct”, this decade has seen a plethora of software tools bridging the gap between proof assistants and prototype programming languages, featuring built-in Hoare-logic-inspired primitives (such as the Why pre-compiler, translating pre- and post-condition annotations into Coq proof obligations). Having the possibility to externalize the call to logical systems via a simple term application greatly simplifies this task.
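The pre-/post-condition encoding mentioned above can be illustrated by passing predicates as ordinary first-class arguments around a function. The helper `contract` below is an assumption for illustration, not an LLF primitive; the integer-square-root example is likewise our own.

```python
import math

def contract(pre, post, f):
    """Return f guarded by a precondition on its input and a
    postcondition relating input to output, both passed as
    first-class predicate arguments."""
    def guarded(x):
        if not pre(x):
            raise ValueError("precondition violated")
        y = f(x)
        if not post(x, y):
            raise ValueError("postcondition violated")
        return y
    return guarded

# Integer square root: requires x >= 0,
# ensures r*r <= x < (r+1)*(r+1).
isqrt = contract(lambda x: x >= 0,
                 lambda x, r: r * r <= x < (r + 1) * (r + 1),
                 math.isqrt)
print(isqrt(10))  # 3
```

Here the two predicates are plain values handed to `contract`, echoing the paper's point that declaring predicates in contexts, rather than baking them into the binder syntax, lets new conditions be added without re-parsing the metalanguage.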