Friday 26 August 2022

Discussing "Separation Logic: A Logic for Shared Mutable Data Structures"

Author: John C. Reynolds (Carnegie Mellon University)

Reference: Proceedings 17th Annual IEEE Symposium on Logic in Computer Science, 2002, pp. 55-74, doi: 10.1109/LICS.2002.1029817.

Why this paper? Cited by On Transforming Rewriting-Induction Proofs for Logical-Connective-Free Sequents into Cyclic Proofs, Compositional Verification of Interacting Systems Using Event Monads, and Semantic Cut Elimination for the Logic of Bunched Implications and Structural Extensions, Formalized in Coq.

How can we be sure that the programs we write do what they’re supposed to? An answer from the late 1960s is Hoare logic, which gives a formal relationship between programs and specifications. Crucially, Hoare logic can be used in a modular way, in the sense that parts of a program can be reasoned about separately, rather than demanding that the entire program be reasoned about all at once – a task that becomes infeasible even for relatively small programs.
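
For readers unfamiliar with the notation, here is a quick sketch of standard Hoare logic (not specific to this paper): a triple {P} C {Q} asserts that if the precondition P holds before the command C runs, and C terminates, then the postcondition Q holds afterwards. Modular reasoning shows up in rules like the sequencing rule, which glues together independently established triples:

\[
\frac{\{P\}\;C_1\;\{R\} \qquad \{R\}\;C_2\;\{Q\}}{\{P\}\;C_1 ; C_2\;\{Q\}}
\]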

Traditional Hoare logic becomes unusable in the face of languages (for example, C) that allow the direct manipulation of memory within the code. Because there is nothing to stop two different program variables pointing to the same physical piece of memory, a manipulation that seems to affect only one variable may also affect another, even if that variable lives in some far-distant piece of code. However, the direct manipulation of memory (and the potential for ‘sharing’ that this creates) can be extremely useful, as various examples in this paper by Reynolds demonstrate. For example, we see a list reversal where, instead of creating a new list that is a reversal of the input, we take the memory-saving approach of reversing the list in place by redirecting the pointers within its cells.
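
To make the aliasing problem concrete, here is a minimal Haskell sketch of my own (the paper itself works with a low-level imperative language); an IORef stands in for a heap cell:

```haskell
import Data.IORef

-- Two names, one heap cell: an update that mentions only q
-- also changes what p sees, invalidating any reasoning that
-- treated p and q as independent.
main :: IO ()
main = do
  p <- newIORef (0 :: Int)
  let q = p               -- q aliases the same cell as p
  writeIORef q 42         -- apparently a fact about q alone...
  readIORef p >>= print   -- ...yet this prints 42, not 0
```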

This paper introduces the term separation logic, but mostly reports and summarises previous work by Reynolds and others (primarily Peter O’Hearn, Samin Ishtiaq, and Hongseok Yang, with inspiration from Rod Burstall and David Pym). The key innovation of this group is to change the logical language in which specifications can be stated. A separating conjunction ∗ joins the usual list of logical operators, with a proposition of form P ∗ Q holding if P and Q hold of entirely separate parts of memory. This allows modular reasoning to be restored via a frame rule that says that a true statement about a fragment of one’s code remains true if put in the context of other memory, so long as that extra memory remains separate from the fragment in question. Another connective called separating implication also joins the party, along with a constant stating that no memory is in use, and some syntax to allow one to talk about the content of memory. The paper goes on to present a treasure trove of examples, extensions, and directions for the future, with a heavy emphasis on the application of the logic to a variety of data structures and program constructs.
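
Sketching the key definitions from the paper: a heap satisfies P ∗ Q when it can be split into two disjoint parts, one satisfying P and the other Q; the constant emp holds of the empty heap; and e ↦ e′ describes a single cell at address e holding e′. The frame rule then reads:

\[
\frac{\{P\}\;C\;\{Q\}}{\{P \ast R\}\;C\;\{Q \ast R\}}
\]

where no variable modified by C occurs free in R. The ‘frame’ R describes memory that C, confined to its own separate region, cannot disturb.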

Wednesday 17 August 2022

Discussing 'Homotopy Type Theory: Univalent Foundations of Mathematics'

Author: The Univalent Foundations Program (Institute for Advanced Study)

Why this paper? Cited by Internal Strict Propositions Using Point-Free Equations, Probabilistic Programming and Multiverse Models of Type Theory, and Contextual Algebraic Theories: Generic Boilerplate beyond Abstraction (Extended Abstract).

The first thing to note about this e-book is its unusual authorship – it was put together by a collective of 27 authors, along with various visitors and students, working together at the IAS in Princeton across a special year of study. The most famous participant was the Fields Medal (roughly equivalent to a Nobel Prize for mathematics) winner Vladimir Voevodsky, who sadly passed away a few years ago, aged just 51.

I will not attempt to summarise a 490-page book in a short blog post, but I can give some sense of the flavour of this work. It concerns a foundation of mathematics known as type theory. A foundation of mathematics might sound like an abstruse thing, and sometimes it can be, but it is essentially a description of what things mathematicians are able to talk about, and the rules they must follow to construct valid arguments about them. Nailing this down precisely is becoming increasingly vital, as we grow to understand that some mathematics (for example, proofs about software) can get too complex for a pen-and-paper proof by a human to be fully trusted. The alternative is to have a somewhat simple-minded but indefatigable computer program check every step of a proof for error; for this to be possible, we must be precise about what language the program should be able to read and what inferences it should be prepared to accept – in other words, our foundation of mathematics. Type theory is an approach to foundations that naturally lends itself to this sort of automated proof checking, as it involves working with syntax not too far from that of a programming language (although one of the arguments of this book is that one can de-formalise type theory to present proofs in a style more natural for human mathematicians as well).

This book introduces what was at the time a new direction in type theory, known as homotopy type theory. The motivation is to get better behaviour for proofs of equality in type theory, with particular attention to applications in mathematical areas such as topology and category theory. The standard approaches to equality in type theory are well understood but rather blunt. The intensional approach identifies functions (for example) only when they have the same definition, up to some automatic simplifications a computer can easily run, and so will often fail to identify functions that produce the same output on all inputs but whose definitions differ – a reasonable approach to equality, but rather more limited than the use of equality in ordinary mathematics. The extensional approach, which ignores the details of definitions, lacks appealing computational properties. The insight that led to this book was that we can get a more nuanced notion of equality if we think of the objects of our type theory as points in spaces, with equalities as paths between them considered up to continuous deformation, a point of view taken from the mathematical field of homotopy theory. This concept takes some readers out of their mathematical comfort zone, but the idea has more to offer than better support for formalisation of homotopy theory itself (although it does offer that); borrowing the notion of path gives rise to a new type theory in which equality has more interesting properties, and in particular does not collapse into mere definitional equality or something impossible to compute with.
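
In the book’s notation, the hierarchy this creates looks as follows: for elements a, b of a type A, the identity type a =_A b is itself a type, whose elements are read as paths from a to b; being a type, it has identity types of its own, whose elements are read as homotopies (continuous deformations) between paths, and so on upwards:

\[
p, q : a =_A b, \qquad \alpha, \beta : p =_{(a =_A b)} q, \qquad \ldots
\]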

The main new technical gains of taking the homotopy point of view into type theory are the univalence axiom and higher inductive types. The univalence axiom essentially says that one can treat an equivalence (roughly, a mathematical proof that two objects are essentially ‘the same’) the same way one treats the type-theoretic construct of equality. Practically, this means that once one has proved an equivalence between two mathematical definitions, one can treat them as identical for all purposes, even switching back and forth during a proof to use whichever definition is most convenient at a given time, without risk of error. This mirrors standard informal mathematical practice but is sometimes considered an ‘abuse of notation’, i.e. something that should really be avoided in a fully rigorous development. With homotopy type theory, it is correct and completely formal to work like this. Higher inductive types are a general approach to constructing mathematical objects which include equalities as part of their inductive definition. They are vital for work formalising homotopy theory, but are also shown to be useful in other applications, for example to define the notion of colimit in category theory, and that of quotients of sets.
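
In symbols, univalence states that the canonical map from equalities between types to equivalences between them is itself an equivalence, so that in particular every equivalence yields an equality:

\[
(A =_{\mathcal{U}} B) \simeq (A \simeq B)
\]

The circle is the book’s standard first example of a higher inductive type, generated by a point together with an equality on that point:

\[
\mathsf{base} : S^1, \qquad \mathsf{loop} : \mathsf{base} =_{S^1} \mathsf{base}
\]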

Monday 8 August 2022

Discussing 'Notions of Computation and Monads'

Author: Eugenio Moggi (University of Edinburgh)

Reference: Information and Computation 93.1 (1991): 55-92.

Why this paper? Cited by Normalization for Fitch-Style Modal Calculi, Probabilistic Programming and Multiverse Models of Type Theory, and Denotational Semantics of General Store and Polymorphism in my Google Scholar recommendations.

This famous paper suggested the use of the important but rather abstract category-theoretic concept known as a monad to assist with reasoning about programming languages and programs that feature side effects. A side effect is an aspect of computation that cannot be easily described as the final output of a deterministic program, variously because it refers to output that happens during computation (such as printing to the screen), a failure to create an output (such as crashing with an error), or an opportunity for something (such as a user) to interact with the computation. The monad (typically written T) of choice depends on which type of side effect(s) one wishes to incorporate. Having chosen the monad, we can draw a distinction between the values of a type A – the sort of thing one can use as an input – and the type TA of computations of type A. This fine-grained distinction allows one to formally discuss the side effects, and not merely the result, of a program of interest.
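
For a concrete instance: one of Moggi’s examples is the exceptions monad TA = A + E, which with a single exception value is Haskell’s Maybe type. Here is a minimal sketch of my own (not from the paper) of how the value/computation distinction plays out:

```haskell
-- T = Maybe models computations that may fail to produce a value.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing          -- the side effect: no output at all
safeDiv x y = Just (x `div` y)

-- Monadic sequencing propagates failure automatically:
example :: Maybe Int           -- a computation of type Int, not an Int
example = do
  a <- safeDiv 10 2            -- a :: Int is a value drawn from a computation
  safeDiv 100 a                -- the whole thing is Nothing if any step fails
```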

The style in which Moggi presents his monads is at least as important as the concept itself, as he suggests introducing them, with new notation and equational rules, into a lambda calculus, both as a ‘metalanguage’ for reasoning about programming languages with side effects, and as a programming language in its own right. This has been influential in the use of monads in programming in Haskell and other languages.
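
Concretely, Moggi presents a monad as a Kleisli triple: an object map T, a unit η_A : A → TA viewing a value as a trivial computation, and an extension operation taking each f : A → TB to f* : TA → TB, subject to three equations:

\[
\eta_A^{\ast} = \mathrm{id}_{TA}, \qquad f^{\ast} \circ \eta_A = f, \qquad (g^{\ast} \circ f)^{\ast} = g^{\ast} \circ f^{\ast}
\]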