Tuesday, 23 December 2014

Discussing 'Nominal Unification'

UPDATE - Due to upcoming teaching commitments and, after that, paternity leave, this blog will not be updated again in the near future.


Authors: Christian Urban (University of Cambridge), Andrew M. Pitts (University of Cambridge), and Murdoch J. Gabbay (INRIA).

Reference: Theoretical Computer Science vol. 323(1-3), pp. 473-497 (2004)

Why this paper? Cited by 4 (of a possible 28) of the papers in my Google Scholar Updates recommendations.

Comments: In the first iteration of this blog I reviewed recent papers recommended to me by Google Scholar Updates; one of the reasons I abandoned this is that many of these new papers were quite strongly linked to older work, and I ended up having to explain all that older work before writing about the new. Note that being strongly linked to older work is no bad thing - on the contrary, such incremental research is the unglamorous but essential heart of scientific progress. But my new focus for this blog, on 'classic' (which is to say, highly cited) papers, allows me to focus more on foundational work, which tends to be more original and so may be described with less reference to prior literature.

The reason for this discussion is that I have already discussed this week's paper at some length in a post for the first iteration of this blog. Please feel free to look up the post; the only correction I'd make is that a quadratic time (which is to say, feasibly fast) algorithm for nominal unification is in fact sketched in the conclusion of this paper, whereas I had previously thought that credit for this should go to Levy and Villaret alone.

Let me instead discuss something slightly orthogonal to this paper. The phrase 'nominal techniques', which I discussed for example here, actually refers to two things - syntax and semantics. I have previously emphasised the semantical (which is to say, mathematical) side because that is where my interests lie; but here is a paper that is clearly nominal techniques yet makes explicit use of neither Fraenkel-Mostowski set theory nor nominal sets. What brings the syntactic and semantical work together under the heading 'nominal techniques' is their commonality of jargon and notation, their treatment of names as 'first-class' entities rather than as embarrassments to be hidden away, and their use of name permutations (name-for-name swappings) as the main technical engine. These links mean that for a trained reader it is simple to see how this paper should be understood semantically, although my workshop paper 'Nominal Logic with Equations Only' is the only place I know of where this is done explicitly (see Section 6).
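To give a flavour of why name swapping is such a pleasant technical engine, here is a small sketch of my own (not taken from the paper) of a swapping acting on lambda-calculus-style terms. The term representation and function names are mine for illustration only; the point is that, unlike capture-avoiding substitution, a swapping acts uniformly on every name in a term, binders included.

```python
# Terms are tuples: ('var', name), ('app', t1, t2), ('lam', name, body).

def swap(a, b, name):
    """Transpose the names a and b; leave all other names alone."""
    if name == a:
        return b
    if name == b:
        return a
    return name

def permute(a, b, term):
    """Apply the swapping (a b) to every name occurring in a term,
    binding occurrences included.  No side conditions are needed,
    which is what makes swapping so much better behaved than
    capture-avoiding substitution."""
    tag = term[0]
    if tag == 'var':
        return ('var', swap(a, b, term[1]))
    if tag == 'app':
        return ('app', permute(a, b, term[1]), permute(a, b, term[2]))
    if tag == 'lam':
        return ('lam', swap(a, b, term[1]), permute(a, b, term[2]))
    raise ValueError(f"unknown term tag: {tag}")

# Swapping x and y in  lam x. (x y)  yields  lam y. (y x):
t = ('lam', 'x', ('app', ('var', 'x'), ('var', 'y')))
print(permute('x', 'y', t))
# → ('lam', 'y', ('app', ('var', 'y'), ('var', 'x')))
```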

Tuesday, 16 December 2014

Discussing 'The Semantics and Proof Theory of the Logic of Bunched Implications'


Author: David J. Pym (University of Bath)

Reference: Applied Logic Series Vol. 26, Kluwer Academic Publishers, 2002.

Why this paper? Cited by 4 (of a possible 31) of the papers in my Google Scholar Updates recommendations.

This week's most cited paper from among my Google Scholar Updates recommendations is a book, which I admittedly have not read through in its entirety for this discussion, but have enjoyed the excuse to flip through.

The logic of bunched implications, abbreviated BI, is similar to the linear logic I discussed last time, combining as it does 'normal logic' with a 'substructural' part that disallows the powerful inference rules of weakening and contraction. The technical difference between the two logics is that linear logic lacks a first-class normal implication (if... then...), usually considered to be the most important logical connective, but Girard showed that it can be reconstructed out of substructural implication and the '!' connective. As ingenious as that construction is, it demands a relationship between the normal and substructural connectives that is too tight for some applications. BI instead takes the conceptually rather simpler view that the normal and substructural logical connectives should be 'ships passing in the night' with no explicitly required interaction - which is not to say that they cannot be freely mixed together to create interesting theorems, but rather that one set of connectives does not depend on the other in any fundamental way.

I mentioned applications in the previous paragraph, and it is the breadth of concrete mathematical and computational constructions that can be understood via the logic BI that makes it interesting. Nominal sets, the model of names in computing applications, provide such an example. In this setting propositions depend on a finite set of names. The normal 'and' is defined in the usual way - 'p and q' is true if p is true and q is true. But the substructural conjunction, usually written with a star *, can be defined as follows: p * q is true if p is true and q is true and they do not share any names. The substructural implication is a bit trickier to define, and in fact I wrote a paper mostly about this very topic. Note that with this definition of * nominal sets do not define a model of linear logic (in the sense that we cannot define a ! connective with the required properties), showing that BI really is the 'better logic' for this application. A more famous application of BI is separation logic, a prominent formal methods technique for reasoning about programs, in which the * connective permits reasoning about the content of memory.
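The contrast between the two conjunctions can be captured in a toy sketch - mine, not the book's, and a drastic simplification of the real nominal-set semantics. Here a 'proposition' is modelled as nothing more than a truth value together with the finite set of names it depends on:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:
    """A toy proposition: does it hold, and which names does it use."""
    holds: bool
    names: frozenset

def conj(p, q):
    """Normal 'and': both must hold; shared names are fine."""
    return Prop(p.holds and q.holds, p.names | q.names)

def star(p, q):
    """Substructural 'and': both must hold AND use disjoint names."""
    return Prop(p.holds and q.holds and p.names.isdisjoint(q.names),
                p.names | q.names)

p = Prop(True, frozenset({'a'}))
q = Prop(True, frozenset({'a', 'b'}))
r = Prop(True, frozenset({'c'}))

print(conj(p, q).holds)  # True:  normal 'and' ignores name sharing
print(star(p, q).holds)  # False: the name 'a' is shared
print(star(p, r).holds)  # True:  no names in common
```

This disjointness condition is exactly the shape that reappears in separation logic, with names replaced by regions of memory.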

There is a lot of material in this book which I am not mentioning, of course. I will merely briefly note the proof theoretic notion of bunches that give the logic its name. These are essentially a richer notion of 'structure' than the lists (or sets) of propositions used by traditional proof systems; the idea of using more complicated structures in proof theory is reminiscent of the display calculi which I discussed recently.

Monday, 1 December 2014

Discussing 'Linear Logic'


Author: Jean-Yves Girard (Université Paris Diderot)

Reference: Theoretical Computer Science Vol. 50(1), pp 1-101 (1987).

Why this paper? Cited by 5 (of a possible 31) of the papers in my Google Scholar Updates recommendations.

Comments: This paper begins with an extraordinary disclaimer from the editor: "Because of its length and novelty this paper has not been subjected to the normal process of refereeing. The editor is prepared to share with the author any criticism that eventually will be expressed concerning this work." In other words, the paper was published in one of the top journals in theoretical computer science without review. With hindsight there is no doubt that this paper was worthy of publication - four and a half thousand citations speak to that - but one wonders if a proper reviewing process could have helped point out Girard's rediscovery of much that was already known, particularly the work of the relevance logicians. Instead, only six papers are cited in this 101 page paper, 5 sole-authored by Girard himself! This matters for more than just the claiming of credit; for example Girard's notational innovations, which have stuck despite attempts at reform by the likes of Arnon Avron, have almost certainly harmed the ability of computer scientists and the broader logic community to communicate.

Linear logic is in part a substructural logic, meaning that less powerful 'structural rules' are available for reasoning than there would be for more standard logics. This is a technical concept, but can be explained by example: there is a substructural 'and', for which you cannot deduce 'A' from 'A and B' (weakening), nor 'A and A' from 'A' (contraction). These are obviously not desirable restrictions to have if you really mean 'and'! But linear logic has the standard 'and' as well; the substructural one simply allows you to reason in a "resource conscious" way, in which propositions may not be thrown away with weakening, nor duplicated with contraction.
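The resource-conscious reading can be made concrete with a toy illustration of my own (not Girard's): treat hypotheses as a multiset of resources, each of which must be used exactly once. Weakening would discard a resource, contraction would duplicate one, and both are refused:

```python
from collections import Counter

def derives(resources, goal):
    """In this toy substructural reading, a multiset of resources
    derives a goal only by using every resource exactly once -
    no discarding (weakening), no duplication (contraction)."""
    return Counter(resources) == Counter(goal)

# Classical 'and' would let us drop B, or double up A; here neither works:
print(derives(['A', 'B'], ['A']))       # False: weakening refused
print(derives(['A'], ['A', 'A']))       # False: contraction refused
print(derives(['A', 'B'], ['B', 'A']))  # True: only the multiset matters
```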

The insight that led to linear logic lay in Girard's study of coherent spaces, a mathematical model (or semantics) for intuitionistic logic (a standard logic which can be, but usually isn't, viewed as 'substructural'). In this model the meaning of the apparently basic logical notion of implication (if... then...) came apart into two. The denotation of "if A then B" involves (1) 'permission' to use A as many times as we might want to, and (2) some translation of however many As we fixed on, into a B. (2) is the resource-conscious 'substructural' part, whereas (1) can be considered as a brand new logical connective, written !, which permits the throwing away or duplication of a proposition. Hence a new logic, expressive enough to encode intuitionistic logic, and (via the well known encoding, or more directly) classical logic also.
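The decomposition of implication described above is usually written as follows, where the linear arrow consumes its argument exactly once and the ! supplies the permission to duplicate or discard:

```latex
% Girard's decomposition of intuitionistic implication:
% (1) !A grants permission to duplicate or discard A;
% (2) the linear arrow \multimap uses its argument exactly once.
A \Rightarrow B \;\equiv\; \mathop{!}A \multimap B
```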

As might be suggested by its 101 page length, there is a lot more to this paper, including an alternative semantics of 'phases', which look a lot like the 'ternary relations' already well known to relevance logicians, sequent calculus (which we discussed in general here), and 'proof nets', a style of deduction analogous in some ways to natural deduction. It is all quite engagingly written, thank goodness: necessary technical grinding is openly flagged as "boring", and the dedicated reader is rewarded on page 92 with "This paper has already been too long, and a reader not already dead at this point will have no trouble in finding out the details." Of course I already knew Girard as a flamboyant and humorous writer, exhibit A being his virtually indescribable Locus Solum, a 206 page paper that includes an almost 100 page long appendix titled 'A pure waste of paper', essentially amounting to his personal book of jokes and observations about logic ("The theory of free and bound variables is still a typical topic for the early morning lecture the day following a meeting banquet. But this is a deep subject..."). Strangely enough, that paper appears to have been reviewed in the usual way!