Sunday, 24 February 2013

Towards Self-Verification of Isabelle's Sledgehammer


Authors: Jasmin Christian Blanchette and Andrei Popescu (Technische Universität München)

Reference: Retrieved from first author's homepage, 18 Feb 2013.

Comments: One of the most popular shorthand techniques for assessing the impact of an academic paper is to count how often it has been cited; this is increasingly applied to researchers too, by looking at their citations across all their papers. These numbers have real impact, particularly for the careers of young researchers like myself (young-ish, at least!). Google Scholar is an increasingly popular source of such stats, as the citation count appears conveniently below every paper the search engine returns, and statistics like the h-index are prominently displayed on individual researchers' pages.

In a recent paper that has received a fair amount of attention, Manipulating Google Scholar Citations and Google Scholar Metrics: simple, easy and tempting, three Spanish researchers showed that it is exceptionally easy to game this system to inflate the citation statistics of papers and researchers. The reason for this, which I've already discussed several times on this blog, is that Google Scholar greedily archives anything that vaguely looks like an academic paper, so people merely need to upload bogus papers with citations to themselves on appropriate websites (usually on university domains) to juice their stats. I should add that I don't think this sort of outright fraud is a particularly big issue - it is fairly easy to discover, and would destroy the reputation of anyone who engaged in it. But it does raise the more important question of what the widely viewed statistics Google Scholar produces are actually supposed to mean (other databases that are used for these sorts of statistics, such as Scopus and Microsoft Academic Search, have the opposite problem of failing to count perfectly reasonable peer-reviewed research; their numbers are therefore just as dubious).

None of this is to imply that this week's paper is in any way bogus; I was, however, put in mind of this controversy because it is simply listed on Blanchette's website (on the TU Munich domain) as a 'draft paper'. Has it been submitted anywhere, and will it ever be? It's impossible to say, but from the point of view of Google Scholar's statistics, its citations are as good as those of any other paper.

This paper deals with the broad theme of mechanised theorem proving, where computers assist mathematicians in their work. There are two main ways this happens. The first is automated theorem proving, where a computer program tries to solve some problem on its own. However, we are not particularly close to seeing this sort of program actually replace human mathematicians; for the moment, arguably the more important task for computers is the simpler problem of checking proofs for logical errors or gaps. Under this paradigm the mathematician enters their own proof into a 'proof assistant' program like Isabelle, and tries to convince it that their proof is correct and complete.

A more sophisticated style of mechanised theorem proving combines these two approaches, so that the creative mathematical ideas are supplied by a human, while the program reduces the drudgery of the proof by proving as much as it can on its own (usually mundane but time-consuming details). This is the idea behind the Isabelle tool Sledgehammer, which calls a variety of first-order logic automated theorem provers and sets them to the task of finding proofs of statements expressed in that logical language.

According to this paper, "paper proofs reign supreme in the automated reasoning community". I don't have quite enough background in this field to judge the truth of this, but if true it certainly is ironic. After all, if the provers are not maximally trustworthy, how can we trust their proofs? (This sort of 'who watches the watchmen' problem can easily spiral off into paradox, of course, as the watchmen-watchers must themselves be watched; but we would still like the watchmen to be watched as well as possible!) This paper contributes to filling this gap by verifying a particular aspect of the Sledgehammer tool - the encoding of many-sorted first-order logic with equality into unsorted first-order logic with equality.

A sort system is a way to stop us forming grammatical but nonsensical sentences like Noam Chomsky's "colourless green ideas sleep furiously". Here 'sleep' should be a predicate that accepts only certain sorts of things (animals, say), and 'ideas' do not have the right sort. Sort systems are convenient, but it turns out they are not essential, in first-order logic at least - all the information held in a sort system can be faithfully expressed by the predicates or functions of first-order logic. This sort of translation is used by Sledgehammer, as Isabelle users want to use sorts, but the automated provers Sledgehammer calls on work with unsorted logic only.
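To make the naive version of this translation concrete, here is the textbook 'relativisation' encoding (a standard construction; I am not claiming it is exactly what Sledgehammer implements): each sort σ becomes a unary guard predicate P_σ, sorted quantifiers are relativised by it, and each function symbol gets an axiom recording its result sort.

```latex
\forall x{:}\sigma.\;\varphi \;\rightsquigarrow\; \forall x.\; P_\sigma(x) \rightarrow \varphi
\qquad
\exists x{:}\sigma.\;\varphi \;\rightsquigarrow\; \exists x.\; P_\sigma(x) \wedge \varphi
% and for each f : \sigma_1 \times \dots \times \sigma_n \to \tau, the axiom
\forall x_1 \dots x_n.\; P_{\sigma_1}(x_1) \wedge \dots \wedge P_{\sigma_n}(x_n) \rightarrow P_\tau(f(x_1,\dots,x_n))
```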

The catch is that the sort-deleting translation, performed naively, is inefficient, producing large numbers of predicates or functions carrying around sort information that turn out not to be needed. This clutter is a problem for the automated provers, slowing them down at best, and possibly causing them not to find an answer when they otherwise might have. Sledgehammer therefore uses cleverer techniques for the translation, first developed in the 2011 paper Sort It Out With Monotonicity (and further developed in the conference paper "Encoding Monomorphic and Polymorphic Types", also available on Blanchette's website). This paper verifies the correctness of these cleverer encodings as implemented, and actually finds an error (albeit not a fatal one) in the original 2011 paper - always a pleasing result for those championing the mechanically verified approach to mathematics.
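As a rough illustration of the monotonicity idea (my own example, distilled from the papers cited above, so take the details with a grain of salt): a sort only needs its guards if the axioms can bound its size, which in first-order logic can happen only through positive equality literals. An axiom such as

```latex
\forall x{:}\mathit{bool}.\; x = \mathit{true} \,\vee\, x = \mathit{false}
```

forces the sort bool to have at most two elements, so erasing its sort information would be unsound; a sort whose size is never bounded in this way is 'monotonic', and its guards can simply be dropped.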

Sunday, 17 February 2013

Proof-Relevant Logical Relations for Name Generation


Authors: Nick Benton (Microsoft Research Cambridge), Martin Hofmann (LMU Munich), and Vivek Nigam (Federal University of Paraíba)

Reference: Retrieved from third author's homepage, 11 Feb 2013.

Comments: It's an interesting and perhaps underappreciated fact that where you do your earliest research can typecast you for some time - perhaps a whole career - into a certain 'school' of researchers who look at similar problems, and use similar techniques to solve them. This notion of a school of thought is perhaps best known in the humanities, and particularly philosophy, but it's just as true in science, I think. For this reason, perhaps it's not surprising that this is the third paper in four weeks that Google Scholar Updates has served up hailing from Cambridge, where I did my PhD (although Benton works at Microsoft Research Cambridge rather than the university's Computer Laboratory, that doesn't really diminish the point - the buildings are after all right next door to each other).

To be completely clear, this paper hails only in part from Cambridge: it is a truly international collaboration, with the England-based researcher joined by one in Germany and another in Brazil. The paper is described on Nigam's website as a "conference submission", so the usual health warnings about grey literature apply - the paper has not been peer reviewed, is very likely to change between now and publication as a result of reviewers' comments, and of course may not appear at all if rejected.

This paper is yet another to look at my favourite topic of nominal sets (see here for discussion), albeit in rather disguised fashion. It's well known (although I don't know that the result has ever been published) that nominal sets have an equivalent formulation called the Schanuel topos, where instead of taking sets with certain structure as primary, we take certain homomorphisms between categories, called functors, as our objects. The intuition is that these functors map finite sets of names to the sets of elements that are 'supported by them', so the functors map into the category of sets. We then throw out all the functors that don't have a certain property called pullback-preservation. It's fair to say that this formulation of nominal sets isn't so easy for most people to grasp, but it's natural from the point of view of category theory, and that point of view can be useful, as I discussed here.
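For the categorically minded, the concrete description is short (standard material, stated here in my own notation). Writing 𝕀 for the category whose objects are finite sets (of names) and whose morphisms are injective functions, the Schanuel topos is

```latex
\mathbf{Sch} \;\simeq\; \{\, F : \mathbb{I} \to \mathbf{Set} \;\mid\; F \text{ preserves pullbacks} \,\}
```

with natural transformations between such functors as the morphisms.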

The move taken in this paper is to consider the pullback-preserving functors that start in the same category as the Schanuel topos, but map into the category of setoids, not sets. I should say that there seems to be a reasonably widely used definition of 'setoid' (see e.g. here) to which the authors do not adhere, which is confusing, but these things happen (and Humpty Dumpty would urge us to be philosophical about it). The essential notion is that a setoid is a collection of 'types' equipped with explicit 'proofs' that some of these types are equal to others. I found myself wondering whether a set-theoretic version of this paper could have been written that starts with the usual definition of nominal sets and adds structure to define 'nominal setoids'; this is not discussed in the paper, and perhaps such a construction wouldn't work, or would work but not be so natural for the applications that the authors have in mind.
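Here is a minimal sketch of the 'proof-relevant' idea in code, with entirely hypothetical names of my own (the paper's actual definitions are categorical and more refined): a setoid carries, for each pair of elements, a set of explicit proofs of equality, together with operations witnessing that proven-equality behaves like an equivalence relation.

```haskell
-- A proof-relevant setoid sketch: 'p' is the type of equality proofs.
data Setoid a p = Setoid
  { proofs :: a -> a -> [p]  -- the (possibly empty) set of proofs that two elements are equal
  , refl   :: a -> p         -- every element is provably equal to itself
  , sym    :: p -> p         -- reverse a proof of equality
  , trans  :: p -> p -> p    -- compose two proofs of equality
  }
```

The point of keeping proofs as data, rather than as a mere yes/no relation, is that later constructions can depend on which proof is used.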

What applications of this setoid version of the Schanuel topos do the authors present? In a related and longer arXiv paper they discuss a range, but this paper looks, as a case study, at reasoning about the ν (nu) calculus, a formal calculus developed by Andrew Pitts and Ian Stark in a 1993 paper, and in particular the equationally specified semantics for the observable behaviour of the ν-calculus, first studied by Stark in 1996. The ν-calculus is an extension of the λ-calculus with explicit support for the generation of new 'names' encapsulated within a given scope (a common feature of programming languages). This paper shows that the setoid-Schanuel view can support a definition of the operators of the ν-calculus (to be precise, a 'metalanguage' for reasoning about the ν-calculus) which validates the semantic equations that Stark gave in his 1996 paper.
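To give a feel for the calculus, here is a sketch of its syntax as a Haskell datatype (the constructor names are my own, and I have elided the type system that Pitts and Stark impose):

```haskell
type Var  = String
type Name = String

data NuTerm
  = V Var                -- variable
  | Lam Var NuTerm       -- λx.e, as in the λ-calculus
  | App NuTerm NuTerm    -- function application
  | Nm Name              -- a name, used as a first-class value
  | New Name NuTerm      -- νn.e: generate a fresh name n, scoped to e
  | EqN NuTerm NuTerm    -- test two names for equality
  | B Bool               -- the boolean result of such a test
  deriving Show
```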

Sunday, 10 February 2013

Classical lambda calculus in modern dress


Author: Martin Hyland (University of Cambridge)

Reference: arXiv preprint arXiv:1211.5762v1 [math.CT], 2012.

Comments: For those reading this blog who aren't familiar with Google Scholar Updates, it might help to describe the format of this tool, because it is a little different from how search tools are usually set up. Instead of being ranked by relevance, my Updates page's 42 results, all presumably judged as crossing some threshold of relevancy, are ranked chronologically, so the most recently archived papers are at the top. The oldest paper is 326 days old; I do not know for sure if they will start dropping off the list, but I suspect they will once they get to a year old. I should say that the 42 results are from my list of 'top' recommendations, which I am using for this blog - there is also a less selective list of 'all' recommendations that currently has 99 results. In weeks where no new papers have been added I will go down the list till I get to my first unblogged paper; there hasn't been a lot of activity lately, so this week's paper is currently fifth.

This is another arXiv preprint (discussion here), but slightly unusually the author specifies which venue the paper is intended for - a 'festschrift' special issue of the journal Mathematical Structures in Computer Science in honour of the 90th birthday of Italian computer scientist Corrado Böhm (journal issues as 'birthday presents' to eminent scientists are a rather nice tradition in science, and quite common; one paper of mine is in a festschrift for Gordon Plotkin's 60th birthday, for example).

The subject of this paper is the λ (lambda) calculus, an area to which Böhm made important contributions. The λ-calculus is a mathematical formalism developed in the 1930s by Alonzo Church as a 'theory of functions'. The two basic constructions are application - given a function f and expression e, we get a new expression f(e) (usually written without brackets) - and abstraction - given an expression e which may mention a fresh variable x, the function λx.e will evaluate by binding whatever argument it's given to x throughout e. For example, λx.x is the identity function that returns whatever it's given unchanged. Finally, raw λ-calculus expressions are considered equal if they differ only by certain renamings of bound variables, or if they can be simplified in the same way by evaluating them as functions. I should also note that the rhetorical distinction I'm making between 'functions' and 'expressions' is a bit misleading - in the pure λ-calculus, everything is a function, and functions happily take other functions as inputs and produce them as outputs.
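As a concrete illustration, here is a minimal sketch of the syntax and a single evaluation step in code (deliberately naive: the substitution below does not rename binders, so it is only safe when the argument has no free variables):

```haskell
type Var = String

data Term = V Var | Lam Var Term | App Term Term
  deriving Show

-- substitute t for x in an expression (no renaming of binders, hence the caveat)
subst :: Var -> Term -> Term -> Term
subst x t (V y)       = if x == y then t else V y
subst x t (Lam y e)   = if x == y then Lam y e else Lam y (subst x t e)
subst x t (App e1 e2) = App (subst x t e1) (subst x t e2)

-- one step of evaluation at the root: (λx.e) t simplifies to e with t put in for x
beta :: Term -> Term
beta (App (Lam x e) t) = subst x t e
beta e                 = e

-- beta (App (Lam "x" (V "x")) (Lam "y" (V "y")))  ==>  Lam "y" (V "y")
```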

Remarkably, the simple system I have sketched can be used to compute any function (on natural numbers) that it is possible to compute! This claim is called the Church–Turing thesis (where 'Turing' refers to the Turing machines of Alan Turing). So the λ-calculus is important and useful to people like me whose job it is to ponder the nature of computation all day.
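To give a flavour of how numbers and arithmetic live inside the pure λ-calculus, here are the standard Church numerals, which encode the number n as the function that iterates its argument n times:

```latex
\overline{n} \;=\; \lambda f.\,\lambda x.\, \underbrace{f(f(\cdots f}_{n}(x)\cdots))
\qquad
\mathit{succ} \;=\; \lambda n.\,\lambda f.\,\lambda x.\, f\,(n\,f\,x)
\qquad
\mathit{add} \;=\; \lambda m.\,\lambda n.\,\lambda f.\,\lambda x.\, m\,f\,(n\,f\,x)
```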

It's one thing to define a formal system of rules and procedures, but it seems a bit like a pointless game if we don't have a precise understanding of what they all mean. After all, why these rules and not some others? This is a question of semantics, and in the case of the λ-calculus this question has, in the words of this paper, "proved vexed". The simplest interpretation - that λ-expressions correspond to ordinary functions - turns out not to work. Computer scientists have had 80-odd years to work on this problem, so of course there are now a number of answers on the market; this paper adds a new one, and shows that it is mathematically natural and yields some of the earlier approaches as special cases.
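The standard obstruction is worth spelling out. Because everything is a function acting on functions, a single set D of meanings would have to be isomorphic to its own function space, and Cantor's theorem forbids this for ordinary sets:

```latex
D \;\cong\; (D \to D) \text{ is impossible when } |D| \ge 2,
\text{ since } |D \to D| = |D|^{|D|} > |D|.
```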

The semantics for the λ-calculus given by Hyland are category theoretic (an approach I discussed generally here). The basic concept is that of an 'operad', although it could also be given in the more familiar (to me, at least) guise of Lawvere theories. The general idea is that for every natural number n we have a set L(n) of all the λ-expressions with at most n free variables. This is the setting for ordinary equational logic (see here). To get to the λ-calculus we must also have functions L(n+1)→L(n) and L(n)→L(n+1), where the first corresponds to adding an abstraction, and the second to applying an expression to a fresh variable. These functions are related because doing the first, then the second, leaves you back where you started - e = (λx.e)x. Doing the second, then the first, need not leave you back where you started, but if we add the requirement that it does - that f = λx.(fx) - we get exactly η-equivalence, a common condition to add on to the basic λ-calculus.
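Spelled out as equations, in my own notation reconstructing the structure just described:

```latex
\mathrm{abs}_n : L(n+1) \to L(n), \qquad \mathrm{app}_n : L(n) \to L(n+1)
\mathrm{app}_n \circ \mathrm{abs}_n = \mathrm{id} \quad \text{(the } \beta \text{-law: } (\lambda x.\,e)\,x = e \text{)}
\mathrm{abs}_n \circ \mathrm{app}_n = \mathrm{id} \quad \text{(only if we impose the } \eta \text{-law: } \lambda x.\,(f\,x) = f \text{)}
```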

This is attractive stuff if you are a computer science and category theory nerd - and I am! The rest of the paper is dedicated to showing that this construction, called a 'semi-closed structure', is indeed a good way to interpret the λ-calculus, by relating it to the syntax, and to some other notions of semantics. There's a lot in this paper and I couldn't absorb it all in one week - the 'Taylor Fibration' material went a bit over my head - but I still greatly enjoyed it.

Sunday, 3 February 2013

Nominal Event Structures


Authors: Andrei Alexandru and Gabriel Ciobanu (Institute of Computer Science Iași)

Journal reference: Romanian Journal of Information Science and Technology, Volume 15, Number 2, pp. 79-90 (2012).

Comments: I must admit that this paper hailing from the 'Romanian Journal of Information Science and Technology' (ROMJIST) didn't exactly fill me with confidence. My scepticism was not primarily driven by the 'Romanian' in the title - many journals and conferences are sponsored by scientific societies linked to specific countries or regions, and not necessarily the 'obvious' countries either - the conference WoLLIC, through which I have published, has significant sponsorship from Brazil, for example.

Rather, it's the 'Information Science and Technology' in the title that set off alarm bells. How does a journal ensure high-quality and credible reviewing when its remit covers the entire field of computer science? If you're the Journal of the ACM that's possible, because you've set yourself up as a flagship journal for the whole subject, with a seriously high-powered and diverse board of editors (Nature is a more famous and even more wide-ranging example of this sort of model). But for less high-profile journals (a category in which ROMJIST obviously sits) a larger degree of specialisation seems essential if quality is to be maintained.

Setting that aside, Google Scholar has recommended this paper to me, and so I read it with an open mind. The reason for the recommendation is clear enough, as it deals with my speciality of nominal sets, a variant of set theory in which each element is linked to a 'finite support' of names. I discussed the computational applications of this model in my first article on this blog. It's a natural and common move to reinterpret mathematical concepts from 'ordinary' set theory into nominal set theory - my own work on equational logic, which I discussed briefly last week, can be seen in this light. This paper makes this move for event structures, abstract structures in which causality and conflict between events can be modelled.
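For the unfamiliar, here is a minimal sketch of an ordinary (finite, non-nominal) event structure in code, with field names of my own; the nominal version additionally requires all of this data to be finitely supported:

```haskell
-- Events, a causality relation, and a conflict relation.
data EventStructure e = EventStructure
  { events   :: [e]
  , causes   :: e -> e -> Bool  -- causes a b: a must occur before b can
  , conflict :: e -> e -> Bool  -- conflict a b: the two can never both occur
  }

-- A configuration (a possible state of the system) is a set of events
-- that is conflict-free and closed under causes.
isConfig :: Eq e => EventStructure e -> [e] -> Bool
isConfig es xs =
     and [ not (conflict es a b) | a <- xs, b <- xs, a /= b ]
  && and [ c `elem` xs | a <- xs, c <- events es, causes es c a ]
```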

Here's the problem though - nowhere in this paper is it explained why creating a nominal variant of event structures might be a good idea. No examples are given of these new nominal event structures, no problems are solved using them, no 'further work' section suggests what might be done with them next, and no interesting theoretical or mathematical issues are brought to light. The small number of novel proofs in this paper are all completely routine checks that finite (or sometimes empty) support properties are not violated by the new definitions. I don't intend this blog to be a space where I 'review' papers in the academic sense, with all the controversy that process inevitably generates, but I can't report on the usefulness of Google Scholar Updates without indicating where the papers I read are not useful, and this is certainly such an instance.