Wednesday, 28 September 2022

Discussing "Checking Interference with Fractional Permissions"

(Link)

Author: John Boyland (University of Wisconsin-Milwaukee)

Reference: Boyland, J. (2003). Checking Interference with Fractional Permissions. In: Cousot, R. (eds) Static Analysis. SAS 2003. Lecture Notes in Computer Science, vol 2694. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44898-5_4

Why this paper? Cited by Mechanised Robust Safety for Virtual Machines Communicating above FF-A, and Fractional Resources in Unbounded Separation Logic, and Verification-Preserving Inlining in Automatic Separation Logic Verifiers (extended version)

Of late the citations in my Google Scholar recommendations have been very concerned with the question of whether two different parts of a program interfere with each other, or depend on each other – see here, here, and (less directly) here. This is an important question if we want to reason about the behaviour of a program – we would like to reason about a large program piece by piece, but can only do so if we are sure the pieces do not interfere with each other (or, if they do interfere, we have a precise sense of how this interference might manifest).

Separation logic à la Reynolds tells us that we can reason about parts of a program separately if they only access disjoint parts of memory. This is correct, but needlessly blunt. Two programs that read the same piece of memory do not affect each other; it is only if one of them writes to memory the other is interested in that we have an issue. The solution proposed in this paper is to give different (parts of) programs different levels of permission to each part of memory. Full permission, represented by the number 1, allows one to read and write, whereas a permission represented by a fraction strictly between 0 and 1 gives read access only. Where there is parallel composition or similar in the program, permissions can be divided between threads. Conversely, when its threads complete, a program can add their permissions back together and hence, if full permission is reached, regain the ability to write.
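To give a feel for the bookkeeping (a toy model I am sketching here, not Boyland's actual type system; all names are mine), one can represent permissions as rationals in the interval (0, 1], with writing allowed only at exactly 1:

    -- Toy model of fractional permissions (hypothetical names, not the paper's calculus).
    -- Any fraction in (0,1] grants read access; only the full permission 1 grants write access.
    type Permission = Rational

    canRead, canWrite :: Permission -> Bool
    canRead p  = p > 0 && p <= 1
    canWrite p = p == 1

    -- At a parallel composition, a permission is divided between the two threads.
    split :: Permission -> (Permission, Permission)
    split p = (p / 2, p / 2)

    -- When the threads join, their permissions are recombined;
    -- reaching 1 restores the ability to write.
    combine :: Permission -> Permission -> Permission
    combine p q = p + q

    example :: (Bool, Bool, Bool)
    example =
      let (l, r) = split 1   -- each thread holds 1/2: read-only
      in (canRead l, canWrite l, canWrite (combine l r))   -- (True, False, True)

The point of the accounting is that read permissions can be duplicated freely across threads, whereas the exclusive write permission can never coexist with any other permission on the same location.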

The concepts of this paper are presented in the form of a type system, but it was soon realised by others that they can also be used to generalise separation logic; this is the context in which fractional permissions continue to appear in papers recommended to me.

Tuesday, 20 September 2022

Discussing "Step-Indexed Syntactic Logical Relations for Recursive and Quantified Types"

(Link)

Author: Amal Ahmed (Harvard University)

Reference: Ahmed, A. (2006). Step-Indexed Syntactic Logical Relations for Recursive and Quantified Types. In: Sestoft, P. (eds) Programming Languages and Systems. ESOP 2006. Lecture Notes in Computer Science, vol 3924. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11693024_6

Why this paper? Cited by Fractional Resources in Unbounded Separation Logic, and Program Equivalence in an Untyped, Call-by-value Lambda Calculus with Uncurried Recursive Functions, and Recursive Session Logical Relations

When are two programs equivalent? There are various answers one can give depending on whether one wishes to consider factors like time and memory usage, but one fundamental notion is contextual equivalence: two programs A and B are equivalent in this sense if any program in the language that uses A as a component behaves the same (with respect to termination) when that component is replaced by B. Hence, there is no context that can tell them apart. This is an intrinsically difficult property to prove, because one must consider all possible contexts, of which there are infinitely many. A general technique for demonstrating contextual equivalence in typed languages is the method of logical relations, in which a general relation between programs is built up by induction on types, and the problem of proving equivalence for particular programs is reduced to the problem of proving that these programs are related.
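Written symbolically (in one common formulation; the precise statement depends on the language under study), contextual equivalence of e1 and e2 says

    e_1 \cong_{\mathrm{ctx}} e_2 \quad\text{iff}\quad \forall C.\ C[e_1]\Downarrow \;\Longleftrightarrow\; C[e_2]\Downarrow

where C ranges over all (well-typed) program contexts and \Downarrow denotes termination.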

The problem with defining a relation in the inductive style is that it relies on types being built from simpler component types. This paper looks at two important type constructors for which this fails. Recursive types, such as those of lists and trees, refer to themselves – a list type is defined in terms of itself – in a way that would make a naïve inductive definition immediately circular. Quantified types – universal types for parametric polymorphism and existential types, which resemble abstract data types – contain type variables that can be instantiated with arbitrarily complicated types. This paper, picking up an idea from Appel and McAllester, shows that the unpleasant circularities can be removed if one adds a natural number ‘step-index’ to the relation to make sure that the induction remains well defined – when one is working with a type and must consider a type that is not smaller, one reduces the step-index towards zero to compensate. A lot is proved about this style of reasoning – the full version of the paper weighs in at a hefty 169 pages!
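Schematically (this is a simplified unary version; the paper's actual definitions are binary relations with considerably more structure), the step-indexed value relation at a recursive type only consults the unfolded type at strictly smaller indices:

    V_k[\mu\alpha.\,\tau] \;=\; \{\ \mathsf{fold}\ v \ \mid\ \forall j < k.\ \ v \in V_j[\,\tau[\mu\alpha.\,\tau/\alpha]\,]\ \}

The index k can be read as a budget of evaluation steps for which the guarantee is claimed, so the definition is well founded by induction on k even though the unfolded type is no smaller than the original.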

This is certainly a relevant ‘classic paper’ with respect to my own research interests; I get involved in this area via later work by Birkedal, Møgelberg, Schwinghammer and Støvring on ‘synthetic’ step-indexing, which uses a modality to hide some of the arithmetic of step-indexing. But that is, perhaps, a topic for another day.

Thursday, 15 September 2022

Discussing "A Core Calculus of Dependency"

(Link)

Authors: Martín Abadi (Compaq Systems Research Center), Anindya Banerjee (Stevens Institute of Technology), Nevin Heintze (Bell Laboratories), and Jon G. Riecke (Bell Laboratories)

Reference: Abadi, Martín, Anindya Banerjee, Nevin Heintze, and Jon G. Riecke. "A core calculus of dependency." In POPL ’99: Proceedings of the 26th ACM SIGPLAN-SIGACT symposium on Principles of Programming Languages, pp. 147-160. 1999.

Why this paper? Cited by The Syntactic Dual of Autonomous Categories Enriched over Generalised Metric Spaces and Recursive Session Logical Relations.

This paper concerns the general concept of programming languages and formal calculi whose types record how certain parts of a program might depend on others. One example is the SLam (Secure Lambda) Calculus of Heintze and Riecke, where types are annotated by the security level of the data they hold. These annotations are used to prevent the output of low-security computations from depending on more private information, as that creates the risk of secrets being leaked. Another example is the slicing calculus of Abadi and coauthors, where types record which parts of the program an output might depend on; this calculus was designed with caching, rather than security, in mind. This paper unifies these concepts, and more, into the Dependency Core Calculus (DCC).

The technology they adapt for this purpose comes from a paper I looked at on this blog only a month ago, Moggi’s monadic metalanguage! There, type constructors called monads captured various notions of side effect. Here, a monad is introduced for each element of a hierarchy; for example, if there are a certain number of security levels, then there will be a monad for each such level. By reducing various calculi to DCC they can prove desirable properties for each of them, and even comment on alternative design decisions that could be made for them. Semantics for DCC are given in terms of sets equipped with an order structure (complete partial orders), arranged into a category.
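To give a flavour of the mechanism, here is a minimal Haskell sketch in the spirit of DCC (it follows later security-library renderings of the idea rather than the paper's own calculus, and all names are mine):

    -- One monad per security level, with the level carried as a phantom type.
    data Low
    data High

    -- 'Sec l a' is data of type 'a' that may only be observed at level 'l' or above.
    newtype Sec l a = Sec a

    instance Functor (Sec l) where
      fmap f (Sec a) = Sec (f a)

    instance Applicative (Sec l) where
      pure = Sec
      Sec f <*> Sec a = Sec (f a)

    -- Bind keeps the result at the same level as the data it inspected,
    -- so a computation cannot 'forget' that it looked at protected input.
    instance Monad (Sec l) where
      Sec a >>= f = f a

    -- Information may flow upwards from Low to High, but no function of type
    -- Sec High a -> Sec Low a is provided, so secrets cannot leak downwards.
    up :: Sec Low a -> Sec High a
    up (Sec a) = Sec a

The guarantee comes from abstraction: if the Sec constructor is kept hidden behind a module boundary, the only way to compute with protected data is through the monad, which stamps every result with the same level, so a low-security output can never depend on high-security input.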

One somewhat interesting note is that three of the four authors were employed by the research labs of private corporations, rather than research universities. Private researchers making their work available to the scientific commons via publication is not as rare as some might expect, but there is some evidence that this practice is declining.

Tuesday, 13 September 2022

Discussing "Structural Proof Theory"

(Link)

Authors: Sara Negri and Jan von Plato (University of Helsinki)

Reference: Negri, Sara, and von Plato, Jan. Structural proof theory. Cambridge University Press, 2001.

Why this book? Cited by Two-sided Sequent Calculi for FDE-like Four-valued Logics and Proof Theory for Intuitionistic Temporal Logic

There has been an unplanned delay to the blog: two weeks ago not enough new recommendations appeared in my Google Scholar feed to single out a ‘classic paper’ for me to review via their citations, and then one week ago the pick was this book, which took me a little while to track down. I therefore expect to post twice this week.

A ‘structural’ proof is one arranged in a tree-like structure, built using only a limited set of inference rules and axioms. These inference rules generally reflect the logical connectives used to state the proposition one is trying to prove. For example, to prove a conjunction one ‘branches’ to prove each side of the conjunction separately. This may seem a mundane or obvious notion, but it is in fact very young with respect to the long history of logic, originating with Gerhard Gentzen in the 1930s. The structural approach allows one to study the space of proofs for a given logic as an object of mathematical interest in its own right (hence, proof theory) and can help with finding proofs.
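For instance, Gentzen's introduction rule for conjunction has exactly this branching shape; read bottom-up, the goal of proving A ∧ B splits into the two subgoals A and B:

    \frac{A \qquad B}{A \wedge B}\ (\wedge\text{-introduction})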

The two important systems for structural proof are natural deduction and sequent calculus, both originating with Gentzen. Natural deduction is intended to reflect the natural way in which one reasons logically, and is an important concept in computer science due to its relationship with types: according to the Curry-Howard isomorphism, programs with a given type can be viewed as natural deduction proofs of a corresponding proposition, and vice versa. Sequent calculus is a slightly less natural notion in which one works on ‘sequents’, which allow one to manipulate one's premises as well as one's conclusion(s). This turns out to be an ideal setting in which to study the space of all proofs for a logic, and can support automated proof search and help to demonstrate that non-theorems are not provable. While the two approaches differ, they also have many similarities; for example, both have a notion of an algorithm (called normalisation or reduction for natural deduction, and cut-elimination for sequent calculus) that automatically simplifies unnecessarily complex or indirect proofs.
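To make cut-elimination slightly more concrete, one standard formulation of the cut rule, which lets a proof pass through an intermediate lemma A, is

    \frac{\Gamma \vdash \Delta, A \qquad \Gamma', A \vdash \Delta'}{\Gamma, \Gamma' \vdash \Delta, \Delta'}\ (\text{cut})

and the cut-elimination theorem shows that any proof using this rule can be rewritten into one that does not, typically at the cost of a larger but more direct proof.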

This book is a careful development of various notions of natural deduction and sequent calculus for classical, intuitionistic, and intermediate logic, covering propositional and predicate logic, equality, and ‘nonlogical rules’ that give one access to the axioms of various theories. Full proof details are given throughout the book, a number of new results are presented alongside the classic results, and care is taken to place ideas in their historical context. The high level of detail makes this a less breezy read than, say, Girard’s Proofs and Types, but it is an invaluable tool for anyone working in the area.