Sunday, 13 October 2013

Shuttering the blog

Maintaining this blog has been an interesting experiment: I have looked at a total of 21 papers recommended to me by the Google Scholar Updates tool. My comments on the tool, written up 9 papers ago but still a fair reflection of my opinions, can be found here.

However, despite the fun I've had and the value I've derived from this process, I now have to shutter this blog, at least for a good few months and perhaps permanently. The happy reason for this is that I have a new job at Aarhus University; I have many projects to complete before I leave my current position at the Australian National University, and will of course be busy getting settled into the new job for a while after that.

Thanks to everybody who's taken the time to have a look at my blog.

Kind regards,

Ranald Clouston

Sunday, 11 August 2013

Hiatus

Due to impending travel and some recent (minor, but irritating!) illness, this blog will be on hiatus for the next month or so.

Sunday, 28 July 2013

Quasivarieties and Varieties of Ordered Algebras: Regularity and Exactness


Authors: Alexander Kurz (University of Leicester) and Jiří Velebil (Czech Technical University in Prague).

Reference: Retrieved from Alexander Kurz's homepage, 24 July 2013.

Comments: After last week's paper, which could only have been recommended to me based on my most recent publication, this week's paper seems to hark back to the very first paper I published back in 2005 when I was doing (more or less) pure mathematics for my Masters degree. That paper looked at coalgebra, which I have previously discussed in a post about a paper which shares an author with this one. This week's paper looks at the categorically dual (and perhaps more familiar) notion of algebra.

This paper fits within (or perhaps just to one side of) the mathematical field of universal algebra, which very broadly is the study of mathematics specified by equations. The power of equations is a theme I've come back to a few times on this blog, e.g. here. A class of mathematical structures, called algebras, that is specified by a certain set of equations is called a variety. For example, the class of all groups is a variety. A quasivariety is similar, except that it is specified by conditional equations: implications whose premises are a finite list of equations. For example, if we want to require that a symbol f be interpreted as an injective function, we can do so by writing
(f x = f y) implies x = y.
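
A textbook pair of examples (my own illustration, not the paper's): monoids form a variety, axiomatised by the equations

    (x \cdot y) \cdot z = x \cdot (y \cdot z), \qquad e \cdot x = x, \qquad x \cdot e = x,

while cancellative monoids form a quasivariety that is provably not a variety (cancellativity is not preserved by homomorphic images), axiomatised by the monoid equations together with the conditional equations

    x \cdot z = y \cdot z \;\Rightarrow\; x = y, \qquad z \cdot x = z \cdot y \;\Rightarrow\; x = y.
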
The really interesting results in universal algebra involve describing (quasi)varieties in ways that are apparently unconnected to their definition by (quasi)equational logic. The one I'm most familiar with is Birkhoff's HSP theorem, which characterises varieties by certain 'closure' conditions (quasivarieties have an analogous characterisation); this paper concerns an alternative characterisation where varieties are interpreted as categories, and we then enquire what structure such a category should possess.

So much for the standard picture; this paper picks up the suggestion of the 1976 paper Varieties of ordered algebras by Bloom, which replaces equations, =, with inequations, ≤. That paper gave a Birkhoff-style characterisation of such 'ordered' (quasi)varieties; this paper gives these classes their categorical characterisation. For this to happen we need to relax our idea of what a category is: normal categories have a set of arrows between any two objects, called a hom-set, but the categories of this paper have a partial order on each hom-set. This is a particular manifestation of a general idea called enriched category theory, where hom-sets can be replaced by objects from (almost) any category.

NOTE: Because of leave, my next post will be in a fortnight.

Tuesday, 23 July 2013

Correspondence between Modal Hilbert Axioms and Sequent Rules with an Application to S5


Authors: Björn Lellmann (Imperial College London) and Dirk Pattinson (Imperial College London / Australian National University).

Conference: Automated Reasoning with Analytic Tableaux and Related Methods (Tableaux 2013), Nancy, France, 16-19 September, 2013.

Journal reference: Lecture Notes in Computer Science, to appear.

Comments: This is a paper that Google Scholar Updates surely wouldn't have recommended to me a couple of weeks ago; recall that Updates uses your own papers as 'search terms' to direct its recommendations of new work to you. This paper has little in common with most of my existing papers, but it has clear (albeit somewhat shallow) similarities with my brand new paper 'Annotation-Free Sequent Calculi for Full Intuitionistic Linear Logic' (pdf), co-authored with Jeremy Dawson, Rajeev Goré, and Alwen Tiu, which I will be presenting at the conference CSL 2013 in September. I must also note that one of the authors of this paper, Dirk Pattinson, has an office in the same corridor as me at the Australian National University; in this new world we apparently need big data algorithms to tell us to read the work of our colleagues next door!

A mathematical proof involves starting with agreed-upon axioms, applying agreed-upon inference rules, and reaching a conclusion; this is a rather basic definition, but it leads us to the really important questions for a logician - what axioms? And what inference rules? One broad approach, Hilbert calculi, requires that only an extremely small and fixed set of inference rules be used: in the simplest systems the single rule Modus Ponens alone suffices, although with modal logic, the topic of this week's paper, a couple more are needed. All the action with Hilbert calculi is in their axioms, in the sense that many different logics developed for all sorts of different purposes may have the same inference rule(s) but very different axioms. Hilbert calculi have their charms - they can be a very concise way to present a logic, for example - but they are not much fun to construct actual proofs with, as proofs of even apparently simple tautologies can be lengthy and highly unintuitive.
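
For concreteness (this is the textbook system K, not anything specific to this paper), a Hilbert calculus for the basic modal logic K takes as axioms all propositional tautologies together with

    \mathrm{K}:\quad \Box(p \to q) \to (\Box p \to \Box q)

and has exactly two inference rules,

    \frac{p \qquad p \to q}{q}\ (\text{modus ponens}) \qquad\qquad \frac{p}{\Box p}\ (\text{necessitation});

stronger modal logics such as the S5 of the title are then obtained by adding further axioms, not further rules.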

To get a more structured proof system that helps both humans and machines to generate proofs, the opposite approach is usually taken: very few axioms - often the fact that every proposition entails itself is enough - but many inference rules. Such structured proof systems come in various styles, including natural deduction, tableau, and sequent calculus, the topic of this paper, and also of my CSL paper which has surely prompted this recommendation (broadly, these proof systems differ in the style in which the inference rules are designed and organised).

One last piece of preamble is needed before we consider this week's paper; modal logic refers to any logic involving 'modalities' that modify propositions. For example, if 'p' is a statement that might be true or false ("it is raining"), we can preface it with a range of modalities: "it is possible that", "I believe that", "I know that", "in the future it will hold that", and so on. If such modalities have any plausible logical content - e.g. you can form a deductive argument in which "it is possible that" forms an important part - then we can design a modal logic that captures this logical content, via Hilbert axioms, sequent calculus, or whatever other proof style appears most suited.

This paper concerns the translation back and forth between Hilbert calculi axioms and sequent calculi rules for modal logic. The details of this translation are far too involved to summarise adequately in this blog post, but a key wrinkle in the development is to precisely identify certain classes of axioms with certain classes of rules; these classes relate to how many modalities must be piled up on top of one proposition to describe the logic. These distinctions support the 'limitative' results of the final section of the paper, which uses the translation to show that certain modal logics (defined via Hilbert calculi) cannot be captured by certain types of sequent calculus.

Sunday, 14 July 2013

A Confluent Pattern Calculus with Hedge Variables


Authors: Sandra Alves (University of Porto), Besik Dundua (Porto / Ivane Javakhishvili Tbilisi State University), Mário Florido (Porto), and Temur Kutsia (Johannes Kepler University).

Workshop: 2nd International Workshop on Confluence (IWC 2013), 28 June 2013, Eindhoven, The Netherlands.

Reference: Retrieved from workshop website, 8 July 2013.

Comments: This week's paper is certainly the shortest I've looked at for this blog - no bad thing in a week I've been wrestling with a conference deadline - at just 4 pages plus a half page of references. But then it isn't a formally 'published' paper at all, although it has been reviewed; rather, it is a short communication to the specialist workshop IWC 2013, and the ideas in it will probably form part of some bigger published paper in the future. Like many such workshops, IWC is a satellite orbiting a bigger event, the International Conference on Rewriting, Deduction, and Programming (RDP 2013), whose main events are two conferences, RTA and TLCA, that do have formally published proceedings (last week's paper hailed from one of those conferences).

The topic of the workshop IWC is confluence. Some computational systems are non-deterministic - at certain points their next steps may not be completely determined, like a Choose Your Own Adventure book. A non-deterministic system is confluent if those 'choices' don't really matter in the end - whenever possible paths diverge, they must always be able to converge again. In a Choose Your Own Adventure book this would imply that there was at most one accessible ending (there could be loops in the book, and hence perhaps no ending at all; confluence says nothing about termination, but rather says you can never make a choice that closes off where you would have gone if you'd made a different choice). In such a book confluence would be a bit of a rip-off, but for a computational system it is highly desirable, because it means the overall behaviour of the system is predictable without going to the trouble of specifying a totally deterministic strategy.
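
In symbols (the standard definition, not anything specific to this workshop): writing \to for the one-step relation and \to^{*} for its reflexive-transitive closure, a system is confluent when

    \forall a, b, c.\;\; a \to^{*} b \;\wedge\; a \to^{*} c \;\Longrightarrow\; \exists d.\;\; b \to^{*} d \;\wedge\; c \to^{*} d.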

A class of computational systems for which confluence is highly prized are formal calculi, among which the λ (lambda) calculus is king. Here computation is expressed as the simplification, or reduction, of a term, and confluence corresponds to the notion that it doesn't matter which part of the term you choose to reduce first. While the λ-calculus is powerful enough to model any and all computations, there is good reason why most programmers use languages with many more built-in commands - just because you can use the λ-calculus for something like complex data structures doesn't mean it's easy to do so, so ready-to-hand support for them is to be desired. However, even with such 'higher' commands the λ-calculus comes in useful, as by bolting them directly on to the basic calculus we can study their behaviour, including how they affect confluence.
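
As a small illustration of what 'reduction of a term' means, here is a sketch of my own (not code from the paper) of untyped λ-terms with a single leftmost β-reduction step; the substitution is deliberately naive, assuming the substituted argument is closed, which is exactly the kind of name-and-binding corner that more careful treatments have to negotiate.

    -- A minimal untyped lambda calculus with one-step beta reduction.
    -- Substitution here is naive: it ignores variable capture, so it is only
    -- safe when the substituted argument is a closed term.
    data Term
      = Var String
      | Lam String Term
      | App Term Term
      deriving (Show, Eq)

    -- subst x s t replaces free occurrences of x in t by s.
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   = if x == y then s else Var y
    subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- One leftmost-outermost beta-reduction step, if any redex exists.
    step :: Term -> Maybe Term
    step (App (Lam x b) a) = Just (subst x a b)
    step (App f a)         = case step f of
                               Just f' -> Just (App f' a)
                               Nothing -> App f <$> step a
    step (Lam x b)         = Lam x <$> step b
    step (Var _)           = Nothing

    -- Example: (\x. x x) (\y. y) steps to (\y. y) (\y. y), and then to \y. y.
    example :: Term
    example = App (Lam "x" (App (Var "x") (Var "x"))) (Lam "y" (Var "y"))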

One such 'extension' to the λ-calculus known to cause problems with confluence is pattern matching, the process of comparing a term with some pattern, attempting to unify them (as discussed last week), then proceeding to compute using that unification somehow. I'm being intentionally vague here - there are many notions of pattern matching, differing for example in their choice of what constitutes a 'pattern', and a whole host of them have been added to the λ-calculus. The most important reference for this work is the dynamic pattern λ-calculus of Cirstea and Faure, itself an attempt to unify various other calculi with similar intent. This workshop paper extends the dynamic pattern λ-calculus with hedge variables, which instead of being substituted for by terms, may be substituted for by lists of terms, and discusses how such a calculus can be constrained to get the desired confluence property.

Sunday, 7 July 2013

Unifying Nominal Unification


Author: Christophe Calvès (Université de Lorraine, Centre National de la Recherche Scientifique)

Conference: 24th International Conference on Rewriting Techniques and Applications (RTA 2013), 24-26 June, 2013, Eindhoven, The Netherlands.

Journal reference: Leibniz International Proceedings in Informatics (LIPIcs), Volume 21, Femke van Raamsdonk (Ed.), pp 143-157 (2013).

Comments: Many algorithms in computer science involve some sort of pattern matching, which at its most abstract level means working out whether two different terms can be forced to be equal by an appropriate choice of substitutions for their variables. This process is called unification and is best understood by simple examples - given the terms f(c,x) and f(y,d), where f is a binary function, x and y are variables, and c and d are constants that may not vary, we can unify them by mapping x to d and y to c, to get the new term f(c,d). On the other hand the terms f(x,x) and f(c,d) cannot be unified - there is nothing we can substitute for x that will make the terms equal.
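
To make the f(c,x) and f(y,d) example executable, here is a small sketch of my own (not taken from the paper) of syntactic first-order unification, computing a most general unifier when one exists; it takes no care over efficiency, unlike the algorithms discussed below.

    -- Syntactic first-order unification: compute a most general unifier, if any.
    -- Purely illustrative; no attention is paid to efficiency.
    import qualified Data.Map as M

    data Term = Var String | Fun String [Term]
      deriving (Show, Eq)

    type Subst = M.Map String Term

    apply :: Subst -> Term -> Term
    apply s (Var x)    = M.findWithDefault (Var x) x s
    apply s (Fun f ts) = Fun f (map (apply s) ts)

    occurs :: String -> Term -> Bool
    occurs x (Var y)    = x == y
    occurs x (Fun _ ts) = any (occurs x) ts

    unify :: Term -> Term -> Maybe Subst
    unify t u = go [(t, u)] M.empty
      where
        -- Compose an existing substitution with a new binding x |-> t'.
        bind x t' s = M.insert x t' (M.map (apply (M.singleton x t')) s)
        go [] s              = Just s
        go ((a, b) : rest) s = case (apply s a, apply s b) of
          (Var x, Var y) | x == y -> go rest s
          (Var x, b')
            | occurs x b'          -> Nothing   -- occurs check: no finite unifier
            | otherwise            -> go rest (bind x b' s)
          (a', Var y)              -> go ((Var y, a') : rest) s
          (Fun f as, Fun g bs)
            | f == g && length as == length bs -> go (zip as bs ++ rest) s
            | otherwise                        -> Nothing

    -- Example from the text: unify f(c, x) with f(y, d), where c and d are constants.
    -- Result: Just (fromList [("x", Fun "d" []), ("y", Fun "c" [])]).
    example :: Maybe Subst
    example = unify (Fun "f" [Fun "c" [], Var "x"])
                    (Fun "f" [Var "y", Fun "d" []])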

In its simplest form, as in the examples above, unification is frequently useful but basically trivial - unifiers (more precisely the best, or 'most general' unifiers) can be computed in linear time. One way to make the problem harder is to relax the definition of equal from 'literally equal' to 'equal with respect to some equational theory', which creates a unification problem that is in general undecidable - working out whether f(x,x) and f(c,d) are unifiable now necessitates determining if there exists any term t such that f(t,t) and f(c,d) are provably equal, which involves a possibly infinite search through all possible terms t, not to mention the difficulty of proving the equality holds when a candidate term t is found.

Another way that unification can become more difficult is to enrich the syntax that is used to define terms - here, we consider variable binding, a frequent concern of my research, and of this blog. Conceptually, this is not too different from the 'equational unification' of the paragraph above - terms can be considered equal despite being not literally equal, in this case because they differ only uniformly in their bound names. Fortunately the equality theory of binding is not difficult enough to lead to uncomputability: so-called nominal unification was shown to be decidable, via an exponential-time algorithm, in the original 2003 paper of that name by Urban, Pitts and Gabbay. Exponential-time algorithms are rather slow and so of limited practical use; the picture was improved in 2008 by Levy and Villaret, who gave a quadratic-time algorithm for nominal unification based on a detour via 'higher-order' syntax. Calvès, this week's author, published a paper with Fernández in 2011 presenting a different quadratic-time algorithm, based on a detour via first-order syntax. The point of this paper is to show that, despite the differences between the approaches of Levy-Villaret and Calvès-Fernández, the algorithms are in a certain sense mathematically identical; both rely implicitly on a concept Calvès calls nominal modalities.

Monday, 24 June 2013

Modelling Infinite Structures with Atoms


Author: Mikołaj Bojańczyk (University of Warsaw)

Conference: Workshop on Logic, Language, Information and Computation (WoLLIC 2013), 20-23 August, 2013, Darmstadt, Germany.

Journal reference: Lecture Notes in Computer Science, to appear.

Comments: The topic of this week's paper, from an author we've met twice before (here and here) is compactness. Compactness is the property that states that an infinite set of logical sentences is satisfiable (not contradictory) if and only if all the finite subsets of that infinite set are satisfiable. Some logics have the property, but others may not. The importance of this result may not be immediately obvious; it was first discovered by Kurt Gödel for first-order logic as a mere corollary to another theorem (completeness). However compactness went on to be a cornerstone principle of the mathematical field of model theory, which in a broad sense looks at how logical claims can be 'satisfied' by mathematical constructions.
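
A textbook illustration of the theorem's power in the ordinary, non-nominal setting (not an example from this paper): for each n, let

    \varphi_n \;=\; \exists x_1 \ldots \exists x_n.\ \bigwedge_{1 \le i < j \le n} x_i \neq x_j

assert that there are at least n distinct elements. Every finite subset of \{\varphi_1, \varphi_2, \ldots\} is satisfied by a large enough finite structure, so by compactness the whole set is satisfiable - and any model of it must be infinite. Arguments of this shape are the bread and butter of model theory.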

Bojańczyk is interested in shifting first-order logic from its usual set-theoretic foundations to the world of nominal sets, the mathematical model for names in syntax that I've discussed in many places on this blog, e.g. here (in fact, as discussed in that link, Bojańczyk has a somewhat more general notion in mind than is usually meant by 'nominal sets', but such subtleties are orthogonal enough to the gist of this paper that we will pass by them). Making first-order logic nominal means that predicates can involve names, and disjunctions and conjunctions can be infinite, although only 'orbit-finite'.

Compactness is fundamentally a theorem about infinite and finite collections; in nominal sets such issues become a bit subtle. First, each element of a nominal set has what is called the finite support property, which means that it 'depends' on only finitely many names. This is a very helpful assumption in practice, and is motivated by the idea that real syntax can only ever involve finitely many names (even when the syntax might get infinitely large, as I discussed here). Moving up a level, we see that no non-trivial nominal set is finite. However there is a notion something like finiteness that does make sense for nominal sets - that of orbit-finiteness, which essentially says that the nominal set is finitely 'generated', rather like an infinite number of reflections in a hall of mirrors could be generated by a finite number of people standing in its middle.

With familiar notions of finiteness and infinity upended by the nominal sets model, getting a halfway sensible notion of compactness becomes quite difficult. It clearly makes no sense to state the theorem in terms of finite sets of sentences, because there are no interesting finite nominal sets. More surprisingly, the compactness postulate obtained by replacing the word 'finite' with 'orbit-finite' turns out to be false. In fact, to validate that version of the theorem we initially need to jump outside of the nominal sets model entirely by axing the finite support property. This is by no means an illegitimate move, but it does create worries, because sets with names but no finite support property are less relevant to computer science applications. However Bojańczyk finishes the paper by putting the rabbit back into the hat: he defines what are called 'stratified models', more complicated constructions which have the finite support property but also satisfy compactness by 'approximating' the non-finitely supported models.

Update: Due to workload my next post will be a fortnight after this one.

Sunday, 16 June 2013

Formalizing Bounded Increase


Author: René Thiemann (University of Innsbruck)

Conference: 4th International Conference on Interactive Theorem Proving (ITP '13), 22-26 July 2013, Rennes, France.

Journal reference: Lecture Notes in Computer Science 7998, pages 245-260, 2013.

Comments: This paper concerns the fundamental problem in computing of determining whether a given algorithm or program terminates, or instead gets stuck in an infinite loop. This is called the Halting Problem. In 1936 Alonzo Church and Alan Turing independently proved that this problem is, in general, insoluble - a program that accepts other programs as input and tells you if they terminate or not is logically impossible. This is the most spectacular result in all computer science, but the proof is really quite simple at its heart - just a little logic game involving feeding hypothetical programs to themselves until they display behaviour that is clearly paradoxical.

This is all a lot of fun, but the practical need to work out whether particular programs terminate or not remains, and so there is an obvious need for techniques that work to prove termination at least some of the time. This paper looks at bounded increase, where each iteration of the program (or some part of it) increases some index until it hits a 'ceiling' that it is provably not permitted to surpass. Because the index increases constantly but cannot do so forever, the program is guaranteed to terminate.
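
A toy example of the shape of recursion this technique targets, written in Haskell rather than the term rewriting formalism of the paper (so purely my own illustration): the argument i increases on every call but can never pass the ceiling n, so the function terminates.

    -- Terminates because i strictly increases on each recursive call and is
    -- bounded above by the ceiling n; equivalently, the measure n - i decreases.
    countUp :: Int -> Int -> Int
    countUp i n
      | i >= n    = i
      | otherwise = countUp (i + 1) n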

This technique was developed in an earlier paper by Thiemann and co-authors; this paper goes a step further by formalising it in the interactive theorem prover Isabelle/HOL. This sort of formalisation has been discussed several times on this blog - see here for example - and its usual motivation is to increase confidence in a result and detect any subtle errors. This is certainly a motivation here, and Thiemann discusses a number of issues that arise during this formalisation, particularly in the section on 'conditional constraints', but this particular formalisation is also intended to certify proofs produced by specialist termination provers such as AProVE and TTT2. Indeed, a bug in AProVE was uncovered by this research.

This is the second week in a row that Google Scholar Updates has recommended a paper that, while interesting, is not massively related to my research interests. At least this week the reason for the recommendation is a bit clearer - the author's formalisation had to grapple with some (fairly simple) issues involving variable binding, and he considered using the specialist package Nominal Isabelle to deal with them; the outcome was that the package proved too heavy-duty for his requirements, and Thiemann instead defined everything needed for binding himself.

Sunday, 2 June 2013

Partial Recursive Functions and Finality


Author: Gordon Plotkin (University of Edinburgh)

Reference: Computation, Logic, Games, and Quantum Foundations: The Many Facets of Samson Abramsky, Lecture Notes in Computer Science Volume 7860, 2013, pp 311-326.

Comments: As I've mentioned before, there's a nice tradition in science of 'festschrift' volumes put out to celebrate the birthdays of prominent scientists. I myself have published a paper in a festschrift for Gordon Plotkin's 60th birthday; here Plotkin himself contributes to a festschrift for the 60th birthday of Oxford computer scientist Samson Abramsky.

Despite this connection, this week's recommendation from Google Scholar Updates is unfortunately not very pertinent to my research interests. That's not to say it's not a nice paper, but Updates is supposed to be a source of personalised recommendations; I have other ways to find papers that are merely nice! The paper is about how partial recursive functions - which is to say, computable partial functions - can be 'represented' (in the technical sense used in the paper) within general category theory. Why has the algorithm recommended this to me? There is some use made of algebra and coalgebra, but that doesn't really seem sufficient grounds to send a paper my way (am I to be recommended every paper at CALCO, for example?). Perhaps publishing in a volume dedicated to Plotkin was enough?

Note: Next week I will be on leave and so my next blog will be in a fortnight.

Monday, 27 May 2013

Instances of computational effects: an algebraic perspective


Author: Sam Staton (University of Cambridge)

Conference: Twenty-Eighth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2013), June 25-28, 2013, New Orleans, USA.

Journal reference: Proceedings to appear.

Comments: Google Scholar Updates's top recommendation this week was something a bit different - my old PhD supervisor Andrew Pitts has just released the first full book ever published on the relatively new field of nominal sets, titled Nominal Sets: Names and Symmetry in Computer Science. As a recommendation to read it is of course very much in line with my interests, but as something to write about on this blog at a week's notice it is not so realistic (Google Books offers a preview if you're curious).

Let's turn instead to the second recommendation in my queue, which happens to also stem from Cambridge. In fact it is a sequel to the Staton paper I wrote about four months ago, continuing his development of parameterized algebra. Please refer to my earlier post for a reminder of what that is.

In this paper parameterized algebra is put to work to talk about computational effects. If we think of a computer program as a process that, after some computation, returns a result, a (side) effect refers to things the computer does while that result is being produced. For example, the computer might write something on the screen, or change something in memory. Less obviously, if the program can accept user input this is also a sort of side effect. Whether such events are treated as side effects or as part of the value returned is somewhat flexible, and can differ by programming paradigm or language, but separating computations into the values they calculate and the effects they produce en route is hard to avoid entirely, even in the relatively 'pure' world of functional programming.

A big development in functional programming that allowed effects to be handled more transparently came with Moggi's 1991 paper Notions of computation and monads. Monads are a concept we've encountered a few times on this blog, most extensively here, and essentially give a concise category theoretic way to describe equationally defined, or algebraic, models. Monads are a core concept of category theory because of their relationship to the basic idea of adjunctions, but hardly seem candidates to impact the working practices of your average Haskell programmer. And yet, Moggi and later authors showed that monads are a nice way to 'package up' effects in functional programs to make effectful programs more structured.
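
The standard first example of this packaging, which I give here for flavour (it is folklore, not drawn from this paper), is the state monad: a computation that reads and writes a single memory cell, with the threading of the state hidden behind the monadic interface.

    -- The state monad packages the effect of reading and writing a single cell;
    -- 'tick' reads the cell, increments it, and returns the value it read.
    import Control.Monad.State (State, get, put, runState)  -- from the mtl package

    tick :: State Int Int
    tick = do
      n <- get       -- effect: read the cell
      put (n + 1)    -- effect: write the cell
      return n

    -- runState (tick >> tick) 0 == (1, 2): the second tick returns 1,
    -- and the final state is 2.

The do-block looks imperative, but it is ordinary pure code; the monad is doing all the book-keeping.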

The next big milestone in the intellectual history behind this paper is Plotkin and Power's 2002 Notions of computation determine monads, deliberately titled as a sequel to Moggi's classic. This paper essentially dialled the practical issue - effects - back to the original theory - equational definitions - and asked how the monads used to handle effects can be defined equationally. This work was in a sense experimental, as it gave a few examples of how such definitions were both possible and illuminating without building any sort of general theory. Staton to an extent follows this example-driven approach, looking at computational effects that can be thought of as multiple instances of a general concept, such as writing to a specific memory bit being an instance of the general idea of writing to memory. However he moves beyond the examples to argue that the effects in question are correctly captured by his theory of parameterized algebra, which hence provides a syntactic and semantic framework through which many interesting side effects can be defined, categorised, and studied.

Sunday, 19 May 2013

Correctness of Functionally Specified Denotational and Operational Semantics


Authors: Sjaak Smetsers, Ken Madlener (Radboud University Nijmegen), and Marko van Eekelen (Radboud University Nijmegen and Open University of the Netherlands)

Conference: Symposium on Trends in Functional Programming (TFP 2013), 14-16 May 2013, Provo, Utah, USA.

Journal reference: Lecture Notes in Computer Science, to appear.

Comments: Last week I looked at the ways Google Scholar Updates has been useful to my research; I recently read a blog post that saw the tool put to a rather more surprising use: Updates recommended a possibly relevant published paper that turned out to be highly related, in the sense that it was partly plagiarised from the academic's own work! Click here to read the story.

Moving on to this week's paper, we'll look at bialgebraic semantics of programming languages. The (only) good thing about technical jargon is that you can fit a lot of content into a small space, so we'll try to unpack this over the course of the post.

First, what is meant by "semantics of programming languages"? To give such is to give a precise and formal mathematical statement of how a programming language should be understood. These semantics can then be used to (a) reason about properties of the language, (b) reason about programs written in that language, and (c) guide the implementation of the language, e.g. compilation. Semantics come in a variety of different styles (see e.g. Wikipedia's overview of some options); the "bialgebraic" style of this paper is so called because it combines algebraic and coalgebraic approaches.

Algebraic semantics start with a specification of the syntax of the language, and then show how programs using that syntax can be interpreted within some mathematical setting. This mathematical translation is called the denotation of a program, and so algebraic semantics are a form of what is called denotational semantics. We discussed this approach here, but our mathematical setting in this paper is a lot simpler than that used there - the category of sets and functions rather than rich and complex "domains". The advantage of denotational semantics in general is that they are 'mathematical' in character and hence can be reasoned about with familiar mathematical tools such as induction; the disadvantage is that they are abstracted away from the original language enough that the actual behaviour of programs can be obscured.
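
To give a feel for denotational semantics in the simple setting of sets and functions, here is a toy example of my own (far simpler than anything in the paper): each expression of a little arithmetic language denotes a mathematical function from environments to integers.

    -- A toy denotational semantics: programs are mapped to mathematical objects,
    -- here functions from environments (valuations of variables) to integers.
    type Env = String -> Int

    data Expr
      = Lit Int
      | Var String
      | Add Expr Expr
      | Mul Expr Expr

    denote :: Expr -> (Env -> Int)
    denote (Lit n)   = \_   -> n
    denote (Var x)   = \env -> env x
    denote (Add a b) = \env -> denote a env + denote b env
    denote (Mul a b) = \env -> denote a env * denote b env

    -- Example: denote (Mul (Var "x") (Add (Var "y") (Lit 1))) env == 15
    -- when env maps "x" to 3 and "y" to 4.

Reasoning about such denotations is ordinary mathematics - induction over Expr and equational reasoning about functions - which is exactly the selling point mentioned above.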

Coalgebraic semantics are the category theoretic dual of algebraic semantics; we discussed coalgebra here. Coalgebras are a good way of talking about systems with transitions; when used in semantics they view programs as transitions between states (say, the memory states of the program). This is a particular style of what is called operational semantics.
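
The coalgebraic slogan 'a system is a transition function out of its state space' is easy to write down in code (my own generic sketch, not the setting of the paper): a coalgebra for the functor F X = (o, X) is a function from states to an observation paired with a next state, and unfolding it gives the system's behaviour.

    -- A coalgebra for F X = (o, X): from each state, observe something and move on.
    newtype Coalg o s = Coalg { step :: s -> (o, s) }

    -- Unfolding from an initial state yields the stream of observations,
    -- i.e. the behaviour of the system.
    behaviour :: Coalg o s -> s -> [o]
    behaviour c s = let (o, s') = step c s in o : behaviour c s'

    -- Example: a counter whose states and observations are both integers;
    -- behaviour counter 0 is the infinite list [0, 1, 2, 3, ...].
    counter :: Coalg Int Int
    counter = Coalg (\n -> (n, n + 1))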

The denotational and operational semantics of a language or program should, of course, have something to do with each other; the jargon for when they conform is the adequacy property. But we'd like to say more than that: we'd like the coalgebraically specified operational behaviour of the language to flow very closely from the algebraically specified syntax. In other words we'd like the algebra to provide 'structure' for the coalgebra, in the style called structural operational semantics. It turns out this sort of structure can be imposed at the level of abstract category theory, quite independent of the attributes of any particular language, by asking that the algebra and coalgebra be related by certain mathematical relationships called 'distributive laws'. The seminal paper behind this idea is Turi and Plotkin's 1997 Towards a mathematical operational semantics, but I'd recommend starting with Klin's nice introduction to the area.

So what does this week's paper contribute? One problem with formal semantics - the reason most languages lack them despite their obvious advantages - is that real languages are huge and complicated and formal proofs involving them are just too big to fit in any one human being's mind. Hence there is a need for formalisation, in the sense that machines, rather than people, should at the very least record and check proofs, and preferably do some of the grunt work themselves. This is a pretty common refrain on this blog - see here for example. This paper, then, is a formalisation of bialgebraic semantics (the general theory, not any particular language) into the specification tool and theorem prover PVS. The particularities of the coding are a bit beyond me, as a non-expert in mechanised theorem proving, but the appeal of PVS seems to be that it is designed and used as a working tool for real world formal methods, and so provides a promising link between abstract category theory and application; the main impediment seems to be its weak support for certain forms of polymorphism.

Monday, 13 May 2013

Reviewing Google Scholar Updates - 12 papers in

4 months or so ago I launched this blog, experimenting with the academic paper recommendation tool Google Scholar Updates, which uses your own papers as a 'search term' to recommend recently archived papers. I've now read and written on 12 papers, from 14 recommendations (I broke the purity of the 'experiment' by skipping two papers that looked very unlikely to be of value to me); in this post I will review what I've learned from the experiment so far.

Relevance: The biggest question is whether Updates has been regularly recommending papers that are interesting and useful to me. To answer this we must first observe that there is one very obvious problem with the methodology of a tool like this, which is that its only input is the research I've done in the past, and not that which I'm doing right now but have yet to publish. This is particularly noticeable to me right now because I've been partly transitioning away from my PhD line of research to take on some new interests. These new directions are not catered for by Updates, but neither would you expect them to be; it would be foolish to use Updates as your only source of reading. Nonetheless the point should be made that Updates could only ever be useful in proportion to how much your current research interests are a continuation of your past ones.

With this caveat set aside, I have been happy with the relevance of my recommendations - happy enough to continue using this tool, and maintaining this blog. There has been only one conspicuous example of a paper with very low relevance to my research interests, although a number of other papers have fairly weak links. In my case, these weakly linked papers usually comprised a brief discussion of handling name and binding issues in the context of some other investigation. In fact these weakly linked papers have in many ways been the most interesting to read about, as they help me situate my work in the broader fabric of theoretical computer science research. So while there is a concern (not one I totally buy into) that these sorts of recommender systems in general can produce an 'echo chamber' effect, fitting people into narrow profiles and never challenging them, that's not been my experience with Updates.

Breadth of sources: One problem that might be underestimated is the sheer diversity of different outlets that publish research relevant to me. A long time ago everything worth reading would appear in the Proceedings of the Royal Society, but nowadays research of interest to me might appear in all sorts of journals, conference proceedings, and online repositories. The only venue to 'publish' more than one of the twelve papers I have had recommended is arXiv, which is far too large and diverse to regularly check. In fact, 4 of the 12 papers were not formally 'published' anywhere more high profile than the personal websites of the authors (more on such 'grey literature' below). The converse of this diversity of outlets is that any one outlet publishes only a very small proportion of material I am interested in, or even understand. Therefore regularly checking certain outlets for new research borders on the useless, and is certainly inefficient. Some sort of automation, whether through explicit keyword or citation alerts or more 'intelligent' but opaque recommender systems, seems virtually compulsory if you want to stay current with relevant research.

False negatives: A tricky question to judge is whether any highly relevant papers are not being recommended to me. Following my earlier discussion, the first answer to this is "of course!" - at a minimum, Updates cannot bring to my attention work relating only to my current unpublished research. On the rather fairer question of whether Updates is failing to recommend papers relevant to my existing body of work, one thing that could help make that judgement is that Updates offers, along with a selective 'top' list (currently numbering 32), a less selective 'all' list of recommendations (currently 99). Are there recommendations in the second list that I would have preferred on the first? I haven't studied this systematically, but on a brief look the answer is no - the algorithm has highlighted papers that, at least judging by their titles and authors, seem more interesting than those on its backup list. The more imponderable question is whether Updates is missing some highly relevant papers altogether; all I can say here is that I am not aware of any.

Quality: Of course a paper can be highly related to my research and still not be worth my time - here is one clear example, and as I said I entirely skipped two others (by the same authors). Updates does not really filter by quality in any direct way and so this problem is probably unavoidable. It is, however, not chronic - perhaps I'm simply lucky to be in a field not too overburdened with cranks, because the vast majority of papers I have read are genuinely good science. I wouldn't necessarily say 'excellent' science, but the reality is that most science is not groundbreaking - and that's OK! For every paradigm shift that changes everything you need a volume of 'normal' science to fill in the gaps and push at the edges; it's fair to say that most of the papers I have read for this blog fit into the latter category. In particular, many have been 'sequel' papers extending earlier work by the same authors, which are very popular in an era where quantity of publication gets (perhaps undue) emphasis.

One issue on the quality front that deserves particular emphasis is the large amount of unreviewed work I have had recommended. 7 out of the 12 papers had not yet been accepted for publication at the time of my review (one of these has subsequently been accepted for publication). Researchers are not all of one mind on whether such unverified work should be exposed to public view or not, but enough do put this sort of work out there that it constitutes a major proportion of Updates's recommendations. The advantage of this is that you see new science quicker; the disadvantage is that the work tends to be less polished in its exposition and less reliable in its results. Your mileage with this tool will almost certainly depend on your tolerance for this unpublished grey literature.

Sociology: One slightly unexpected aspect of this tool is that it has given a pretty interesting big picture not just of who is releasing research of interest to me, but of where. This is quite important as I look for post-postdoc work! Of the 12 papers an overwhelming 11 come from institutions in Europe, with 1 each from Asia and South America (the numbers add to more than 12 because some papers have multiple authors from different institutions). Breaking that down, there were 5 from the United Kingdom, 2 each from Germany and Poland, and 1 each from Brazil, France, Japan, Romania, and Sweden. While these statistics are only compelling so far as one trusts the quality of Updates's recommendations, the sheer weight of European institutions (not to mention the surprising absence of the USA) says a lot about where my sort of research is being done, and emphasises the role geography still plays despite all the connections modern communications make possible - something a researcher in relatively isolated Australia could hardly fail to be aware of, of course.

Sunday, 5 May 2013

Nominal Monoids


Author: Mikołaj Bojańczyk (University of Warsaw)

Reference: Theory of Computing Systems, Springer Open Choice, 4 April 2013

Comments: This, the twelfth paper I've read for this blog, represents something of a full circle, as it is written by one of the authors of the first paper I looked at. In that post I discussed how Bojańczyk and collaborators had generalised the nominal sets model to capture certain computational phenomena, particularly to do with automata.

Automata are mathematical machines, somewhat like flowcharts, that accept certain inputs as 'questions' and return 'yes' or 'no' as their answer (more sophisticated notions of automata can return more complex output, and can also fail to terminate). The set of inputs that an automaton returns 'yes' to (in the jargon, recognises) is known as the automaton's language. This paper takes a more abstract view of formal language theory by, instead of using automata, using abstract algebraic structures called monoids to recognise languages, but the intention is the same - to study certain classes of computation at a clarifying level of mathematical abstraction.
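
To see what it means for a monoid to recognise a language (a completely standard example of mine, before any 'nominal' or 'data' generalisation enters the picture): the language of words over {a, b} containing an even number of a's is recognised by the two-element monoid Z/2, via the homomorphism sending a to 1 and b to 0.

    -- The language "even number of 'a's" recognised by the two-element monoid Z/2.
    import Data.List (foldl')

    -- The homomorphism on single letters: 'a' flips parity, anything else doesn't.
    h :: Char -> Int
    h 'a' = 1
    h _   = 0

    -- Extend h to whole words using the monoid operation, addition modulo 2.
    hStar :: String -> Int
    hStar = foldl' (\acc c -> (acc + h c) `mod` 2) 0

    -- A word belongs to the language iff its image lies in the accepting set {0}.
    inLanguage :: String -> Bool
    inLanguage w = hStar w == 0

    -- Examples: inLanguage "abba" == True, inLanguage "ab" == False.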

There actually already exists a paper titled 'nominal monoids' (pdf), written in 2010 by Alexander Kurz (one of the authors of last week's paper) and collaborators, and perhaps surprisingly not cited in this paper. To be fair, the two notions of nominal monoid are a bit different in their emphasis - Kurz et al are interested in monoids equipped with binding structure, while Bojańczyk is interested in changing the 'alphabet' underlying languages to include an infinite number of letters, possibly equipped with some non-trivial structure. This point of view is known as languages over data words and is claimed to have applications, e.g. to XML.

The key result of this paper is to adapt a result called the Schützenberger-McNaughton-Papert theorem from the setting of ordinary languages to languages with data words. This theorem makes a surprising and beautiful link between languages, the monoids that recognise them, and the logic that specifies them (specifically first-order logic, one of the basic logics you learn in a first or second year course). Bojańczyk extends notions of language, monoid, and even first-order logic to his generalised nominal setting, and his version of the theorem is proved with a new assumption that the monoid is 'ordered' in a certain sense. Interestingly, in the 'ungeneralised' nominal sets model that I'm most familiar with, this ordering requirement becomes unnecessary, so the statement of the theorem becomes very close indeed to the standard version.

Sunday, 28 April 2013

Nominal Coalgebraic Data Types with Applications to Lambda Calculus


Authors: Alexander Kurz, Daniela Petrişan, Paula Severi, and Fer-Jan de Vries (University of Leicester)

Reference: Retrieved from Fer-Jan de Vries's homepage, 22 April 2013.

Comments: The topic of this unpublished paper is infinitary syntax, for which terms need not be finite, but rather can extend to the horizon. The key example is the infinitary λ (lambda) calculus, which I have previously discussed, and this paper is a sequel of sorts to a 2012 conference paper that focused purely on this example.

At first glance infinitary syntax might seem an abstruse thing to study, but consider the λ-calculus. This is an abstract model for all computation, but not all computation terminates, nor is all computation supposed to terminate. We therefore need to think infinitely if we are to reason about the behaviour of non-terminating computation. But how can this infinity be 'tamed' so we can reason about it with our finite minds? There are two techniques. One is to start with the finite terms (over some signature) and extend them to infinity with a procedure called 'metric completion'. The other uses a category theoretic notion called final coalgebra.

I discussed category theory at some length here. Recall that category theorists describe mathematical structure by the transitions, or arrows, between objects. These definitions are often presented as diagrams. One of the truly cool things about category theory is that given such an arrow-theoretic definition, we can turn all the arrows in the diagram around and we get a new definition of something different, but related and often interesting in its own right - what category theorists call dual. So the dual of (the abstract arrow-theoretic definition of) zero is one, the dual of addition is multiplication, and so on. The notion of a set of finite terms can be defined with arrows as what is called an initial algebra; turn the arrows around and you get a final coalgebra, which gives you the set of infinitary terms, equipped with helpful techniques for reasoning about them (coinduction instead of induction).
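
Haskell's laziness gives a cheap way to see the difference (my own sketch, which ignores binding and all the nominal structure that is the real point of the paper): the very same datatype declaration that describes finite λ-terms also contains infinite values, such as the endless unfolding produced by a fixed-point combinator.

    -- In a lazy language the usual datatype of lambda terms already contains
    -- infinite values, a concrete (if informal) picture of the final coalgebra.
    data Term
      = Var String
      | Lam String Term
      | App Term Term

    -- The infinite term f (f (f (...))), as produced by endlessly unfolding a
    -- fixed-point combinator applied to f.
    infiniteF :: Term
    infiniteF = App (Var "f") infiniteF

    -- Finite observations of an infinite term are still computable; coinduction
    -- is the proof principle that reasons about all such observations at once.
    spineDepth :: Int -> Term -> Int
    spineDepth 0 _         = 0
    spineDepth n (App _ a) = 1 + spineDepth (n - 1) a
    spineDepth _ _         = 0

    -- spineDepth 5 infiniteF == 5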

The move this paper makes is to look at syntax with names that can be free or bound, a major focus of my own research and hence, unsurprisingly, of the recommendations of Google Scholar Updates. To get final coalgebra for this sort of structure we move from the category of sets to the category of nominal sets, which I gave an introduction to here.

At first blush, however, the marriage of infinitary syntax and nominal sets looks like it might be an unhappy one, because the machinery of nominal sets crucially requires a finiteness condition - only finitely many free names. Indeed, the metric completion notion of infinitary syntax and the final coalgebra notion no longer coincide in this setting, as the former happily accommodates infinitely many free names. The authors argue that the more restricted nominal sets setting is the more useful, given that it supports coinduction. This certainly accords with intuition: if we think of infinite terms as non-terminating 'unfoldings' of programs that started out finite, and impose the reasonable requirement that such unfoldings cannot generate new free names, then the infinite unfoldings will involve at most the names of the original finite program.

For those applications where infinite numbers of free names truly are necessary, the authors communicate an idea from nominal sets co-creator Andrew Pitts that replaces free names with constants. I'm not sure how flexible this idea is - we can't bind a constant - but it may be sufficient for some purposes.

Sunday, 21 April 2013

A Sorted Semantic Framework for Applied Process Calculi


Authors: Johannes Borgström, Ramūnas Gutkovas, Joachim Parrow, Björn Victor and Johannes Åman Pohjola (Uppsala University)

Reference: Retrieved from Joachim Parrow's homepage, 16 April 2013.

Comments: After some time away from the blog due first to conference travel, then to the next round of conference submission, a number of papers have collected in my Google Scholar Updates in-tray. This week sees an unpublished paper that, judging from its length of exactly 15 pages (not counting appendices), must also be a conference submission - wrestling scientific work to exactly fit the page length requirements of this or that venue is one of the necessary frustrations of being a researcher.

A very common mode of research in theoretical computer science is to study formal calculi, which are often best seen as idealised programming languages, usually with a very simple syntax and set of operations. These calculi allow various aspects of computation to be formalised and studied without all the impurities and hacks that go along with a 'real' programming language intended to be conveniently usable. The most famous and important such calculus is the λ (lambda) calculus, which I discussed a few posts back.

Almost as important are the calculi known collectively as the process calculi, which are intended for the formalisation and study of concurrent computation - computation that runs in parallel (perhaps simultaneously) in different threads, occasionally interacting. This is an important topic because computers - even home PCs - are increasingly multiplying the number of separate 'cores' in their hardware, not to mention the use of massively distributed computation across the internet; yet what the best programming languages, methodologies, and theory should be for this sort of architecture remains very much up for grabs.

The process calculus best known to me is the π (pi) calculus, introduced by Robin Milner, Joachim Parrow (one of the authors of this paper), and David Walker here. This notion of process calculus alone has produced dozens of offshoots and modifications, often aimed at some specific application. To name just one important example, the spi calculus is used to formalise cryptographic protocols.

At one level this paper merely adds to the arguably overstocked zoo of process calculi, by modifying the existing psi-calculus with a sort system (a concept I discussed, in a rather different setting, here) and more flexible pattern matching. At another level, this paper is intended to be a unifying and clarifying contribution to the zoo, by setting the augmented psi-calculus up as a system that captures many other process calculi. By this the authors don't mean that the new and improved psi-calculus makes systems like the spi calculus practically redundant; rather, the spi calculus can be seen in some relatively natural way as a particular instantiation of the psi-calculus. The advantage of this, as posited in the introduction, is that basic theoretical proofs about process calculi only need to be 'done once', for the very general psi-calculus, and can then be exported 'for free' to the many process calculi that are examples of the psi-calculus. With these basic correctness proofs out of the way, people can then direct their attention to what really is unique, interesting or useful about the calculus they are particularly interested in.

Sunday, 10 March 2013

Hiatus

This blog will be on hiatus for a month or so while I travel to Italy for the ETAPS conferences.

Sunday, 3 March 2013

Full Abstraction for Nominal Scott Domains


Authors: Steffen Lösch and Andrew M. Pitts (University of Cambridge).

Conference / Reference: 40th annual ACM SIGPLAN-SIGACT symposium on Principles of Programming Languages (POPL '13), 23-25 January 2013, Rome, Italy (ACM Press).

Comments: Just two weeks ago I was remarking on the very large proportion of papers recommended by Google Scholar Updates that came from Cambridge; well, here is another one - the fourth of the last six. To be precise, I should say it is the fourth of the last eight, as I have decided to fudge the purity of the experiment for the first time by skipping two papers by the same authors as this one, on the grounds that I was unlikely to gain much benefit from them.

This paper, whose second author was my PhD supervisor, is much more down my alley, as you might expect; it is another that deals with nominal sets, which I first discussed on this blog here. In that post I mentioned in passing that nominal sets had their origins in 1930s axiomatic set theory, before a rather surprising revival in 1999 by Murdoch Gabbay and Andrew Pitts, who employed them to model syntax with names and binders in computing. However their origins in deep set theory shouldn't be completely forgotten; their original use was for an attempt to construct a model of set theory where the axiom of choice failed.

Mathematics as conceived by Euclid is the process of enumerating axioms and inference rules that are obviously true, and then deriving truths from them (some of which may be far from obvious). Even in Euclid's own work this idealised picture did not really hold, as the fifth postulate of his geometry, regarding non-parallel lines, was rather more complicated than might be desired, and was long regarded as a flaw of his presentation. Much later it turned out that denying this postulate could lead to interesting and valuable mathematics for geometry in curved space. The attempt to axiomatise set theory in the early twentieth century came to a quite similar place, with the axiom of choice (which concerns the ability to make systematic choices from unstructured sets, even infinite ones) as the useful but non-obvious 'rogue' axiom. Again, it turned out that both the axiom and its negation were compatible with the other, more obvious, set axioms, and so 'nonstandard set theory' in which the axiom failed - of which nominal sets are an example - could be developed and applied, just as with non-Euclidean geometry.

The subjects of this paper, domain theory and denotational semantics, are techniques to interpret computer programs as objects in a mathematical setting that directly supports rich and complex computational phenomena such as recursion. This apparently esoteric material about set theoretic axioms matters because some of the proofs of the basic facts about domains use the axiom of choice. This paper defines a programming language PNA (and an extension, PNA+) with commands for manipulating and binding names, and develops domain theory within nominal sets to provide a characterisation of programs in this language. But domain theory within nominal sets needs, in part, to be reconstructed to avoid reliance on the now invalid axiom, which is one of the interesting technical challenges addressed by this work (following earlier work by Glynn Winskel and David Turner (pdf), also at Cambridge).

Sunday, 24 February 2013

Towards Self-Verification of Isabelle's Sledgehammer


Authors: Jasmin Christian Blanchette and Andrei Popescu (Technische Universität München)

Reference: Retrieved from first author's homepage, 18 Feb 2013.

Comments: One of the most popular short-hand techniques for assessing the impact of an academic paper is to count how often it has been cited; this is increasingly applied to researchers also, by looking at their citations across all their papers. These numbers have real impact, particularly on the careers of young researchers like myself (young-ish, at least!). Google Scholar is an increasingly popular source of such stats, as the citation number appears conveniently below every paper the search engine returns, and statistics like the h-index are prominently displayed on individual researchers' pages.

In a recent paper that has received a fair amount of attention, Manipulating Google Scholar Citations and Google Scholar Metrics: simple, easy and tempting, three Spanish researchers showed that it is exceptionally easy to game this system to inflate the citation statistics of papers and researchers. The reason for this, which I've already discussed several times on this blog, is that Google Scholar greedily archives anything that vaguely looks like an academic paper, and so people merely need to upload bogus papers with citations to themselves on appropriate websites (usually on university domains) to juice their stats. I should add that I don't think this sort of outright fraud is a particularly big issue - it is fairly easy to discover and would destroy the reputation of anyone who engaged in it. But it does raise the more important question of what the widely viewed statistics Google Scholar produces are actually supposed to mean (other databases that are used for these sorts of statistics, such as Scopus and Microsoft Academic Search, have the opposite problem of failing to count perfectly reasonable peer-reviewed research; their numbers are therefore just as dubious).

None of this is to imply that this week's paper is in any way bogus; I was however put in mind of this controversy because it is simply listed on Blanchette's website (on the TU Munich domain) as a 'draft paper'. Has it been submitted anywhere, and will it ever be? It's impossible to say, but from the point of view of Google Scholar's statistics its citations are as good as those of any other paper.

This paper deals with the broad theme of mechanised theorem proving, where computers assist mathematicians in their work. There are two main ways that this happens. The first is automated theorem proving, where a computer program tries to solve some problem on its own. However, we are not particularly close to seeing this sort of program actually replace human mathematicians; for the moment arguably the more important task for computers is the simpler problem of checking proofs for logical errors or gaps. Under this paradigm the mathematician enters their own proof into a 'proof assistant' program like Isabelle, and tries to convince it that their proof is correct and complete.

A more sophisticated mechanised theorem proving approach involves combining these two approaches, so that the creative mathematical ideas are supplied by a human, but the program tries to reduce the drudgery of the proof by proving as much as it can on its own (usually mundane but time-consuming details). This is the idea behind the Isabelle tool Sledgehammer, which calls a variety of first order logic automated theorem provers and sets them to the task of finding proofs of statements expressed in that logical language.

According to this paper, "paper proofs reign supreme in the automated reasoning community". I don't have quite enough background in this field to judge the truth of this, but if true it certainly is ironic. After all, if the provers are not maximally trustworthy, how can we trust their proofs? (This sort of 'who watches the watchmen' problem can easily spiral off into paradox, of course, as the watchmen-watchers must themselves be watched; but we would still like the watchmen to be watched as well as possible!) This paper makes a contribution to filling this gap by verifying a particular aspect of the Sledgehammer tool - the encoding of many-sorted first order logic with equality into unsorted first order logic with equality.

A sort system is a way to stop us forming grammatical but nonsensical sentences like Noam Chomsky's "colourless green ideas sleep furiously": here 'sleep' should be a predicate that accepts only certain sorts of things (animals, say), and 'ideas' do not have the right sort. Sort systems are convenient, but it turns out they are not essential, in first order logic at least - all the information held in a sort system can be faithfully expressed by the predicates or functions of first order logic. This sort of translation is used by Sledgehammer, as Isabelle users want to use sorts, but the automated provers Sledgehammer calls on work with unsorted logic only.
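
To make the naive form of this translation concrete, here is a small Haskell sketch of the classic 'guard predicate' encoding - my own illustration of the textbook technique, not the encodings verified in the paper - in which each sorted quantifier is relativised by a predicate standing for its sort. All the names here (Formula, deleteSorts, and so on) are mine.

-- A sketch of the 'guard predicate' way of deleting sorts: each sorted
-- quantifier ∀x:S. φ becomes ∀x. isS(x) → φ, with isS a fresh unary predicate.
type Sort = String
type Name = String

data Formula
  = Pred Name [Name]            -- an atomic predicate applied to variables
  | Not Formula
  | Implies Formula Formula
  | ForAll Name Sort Formula    -- sorted universal quantification, ∀x:S. φ
  | ForAllU Name Formula        -- unsorted universal quantification, ∀x. φ
  deriving Show

-- Translate a many-sorted formula to an unsorted one by guarding each
-- quantifier with a predicate standing in for its sort.
deleteSorts :: Formula -> Formula
deleteSorts (Pred p xs)    = Pred p xs
deleteSorts (Not f)        = Not (deleteSorts f)
deleteSorts (Implies f g)  = Implies (deleteSorts f) (deleteSorts g)
deleteSorts (ForAll x s f) = ForAllU x (Implies (Pred ("is" ++ s) [x]) (deleteSorts f))
deleteSorts (ForAllU x f)  = ForAllU x (deleteSorts f)

-- Example: ∀x:Animal. sleeps(x) becomes ∀x. isAnimal(x) → sleeps(x).
example :: Formula
example = deleteSorts (ForAll "x" "Animal" (Pred "sleeps" ["x"]))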

The catch is that the sort-deleting translation, performed naively, is inefficient, producing large numbers of predicates or functions carrying around sort information that turns out not to be needed. This clutter is a problem for the automated provers, slowing them down at best, and possibly causing them to fail to find an answer when they otherwise might have. Sledgehammer therefore uses cleverer techniques for the translation, first developed in the 2011 paper Sort It Out With Monotonicity (and further developed in the conference paper "Encoding Monomorphic and Polymorphic Types", also available on Blanchette's website). This paper verifies the correctness of these cleverer encodings as implemented, and actually finds an error (albeit not a fatal one) in the original 2011 paper - always a pleasing result for those championing the mechanically verified approach to mathematics.

Sunday, 17 February 2013

Proof-Relevant Logical Relations for Name Generation


Authors: Nick Benton (Microsoft Research Cambridge), Martin Hofmann (LMU Munich), and Vivek Nigam (Federal University of Paraíba)

Reference: Retrieved from third author's homepage, 11 Feb 2013.

Comments: It's an interesting and perhaps underappreciated fact that where you do your earliest research can typecast you for some time - perhaps a whole career - into a certain 'school' of researchers who look at similar problems, and use similar techniques to solve them. The notion of a school of thought is perhaps best known in the humanities, and particularly philosophy, but it's just as true in science, I think. For this reason, perhaps it's not surprising that this is the third paper in four weeks that Google Scholar Updates has served up that hails from Cambridge, where I did my PhD (Benton works at Microsoft Research Cambridge rather than the university's Computer Laboratory, but that doesn't really diminish the point - the buildings are after all right next door to each other).

To be completely clear, this paper hails only in part from Cambridge, as it is a truly international collaboration, with the England-based researcher joined by one in Germany and another in Brazil. The paper is described on Nigam's website as a "conference submission", so the usual health warnings about grey literature apply - the paper has not been peer reviewed, is very likely to change between now and publication as a result of reviewers' comments, and of course may not appear at all if rejected.

This paper is yet another to look at my favourite topic of nominal sets (see here for discussion), albeit in rather disguised fashion. It's well known (although I don't know that the result has ever been published) that nominal sets have an equivalent formulation called the Schanuel topos, where instead of taking sets with certain structure as primary, we take certain homomorphisms between categories, called functors, as our objects. The intuition is that these functors map finite sets of names to the sets of elements that are 'supported by' them, so the functors map into the category of sets. We then throw out all the functors that lack a certain property called pullback-preservation. It's fair to say that this formulation of nominal sets isn't so easy for most people to grasp, but it's natural from the point of view of category theory, and that point of view can be useful, as I discussed here.

The move taken in this paper is to consider pullback-preserving functors that start in the same category as the Schanuel topos, but map into the category of setoids rather than sets. I should say that there seems to be a reasonably widely used definition of 'setoid' (see e.g. here) which the authors are not following, which is confusing, but these things happen (and Humpty Dumpty would urge us to be philosophical about it). The essential notion is that a setoid is a collection of 'types' equipped with explicit 'proofs' that some of these types are equal to others. I found myself wondering whether a set theoretic version of this paper could have been written that starts with the usual definition of nominal sets and adds structure to define 'nominal setoids'; this is not discussed in the paper, and perhaps such a construction wouldn't work, or would work but not be so natural for the applications the authors have in mind.
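
For what it's worth, here is a rough Haskell sketch of how I picture that notion - my own gloss, emphatically not the authors' definition: the point of proof relevance is that two elements may be identified by many different explicit proofs, and those proofs are themselves data to be tracked.

-- A rough sketch of a proof-relevant setoid: a carrier a together with a type
-- p of explicit equality proofs, and operations witnessing that these proofs
-- behave like an equivalence relation.
data Setoid a p = Setoid
  { proofsOf :: a -> a -> [p]     -- the (possibly many) proofs that x equals y
  , refl     :: a -> p            -- a canonical proof that x equals x
  , sym      :: p -> p            -- reverse a proof of x = y into one of y = x
  , trans    :: p -> p -> p       -- compose proofs of x = y and y = z
  }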

What applications of this setoid version of the Schanuel topos do the authors present? In a related and longer arXiv paper they discuss a range, but this paper takes as its case study reasoning about the ν (nu) calculus, a formal calculus developed by Andrew Pitts and Ian Stark in a 1993 paper, and in particular the equationally specified semantics for the observable behaviour of the ν-calculus, first studied by Stark in 1996. The ν-calculus is an extension of the λ-calculus with explicit support for the generation of new 'names' encapsulated within a given scope (a common feature of programming languages). This paper shows that the setoid-Schanuel view can support a definition of the operators of the ν-calculus (to be precise, a 'metalanguage' for reasoning about the ν-calculus) which validates the semantic equations that Stark gave in his 1996 paper.
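
To give a flavour of the object of study, here is a minimal Haskell sketch of ν-calculus syntax - my own paraphrase of Pitts and Stark's calculus, omitting its type system, and nothing to do with the paper's metalanguage.

-- A sketch of ν-calculus syntax: λ-calculus terms extended with names, a
-- binder νn.e that generates a fresh name n scoped to e, and a name-equality test.
type Ident = String

data Nu
  = Var Ident          -- a variable, x
  | Lam Ident Nu       -- an abstraction, λx. e
  | App Nu Nu          -- an application, e1 e2
  | Name Ident         -- a name
  | New Ident Nu       -- νn. e : generate a fresh name n for use in e
  | EqName Nu Nu       -- test two names for equality, yielding a boolean
  | BTrue | BFalse     -- boolean results of such tests
  deriving Show

-- Example: generate two fresh names and compare them; the observable answer
-- is 'false', since distinct binders always generate distinct names.
twoNames :: Nu
twoNames = New "n" (New "m" (EqName (Name "n") (Name "m")))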

Sunday, 10 February 2013

Classical lambda calculus in modern dress


Author: Martin Hyland (University of Cambridge)

Reference: arXiv preprint arXiv:1211.5762v1 [math.CT], 2012.

Comments: For those reading this blog who aren't familiar with Google Scholar Updates, it might help to describe the format of this tool, because it is a little different from how search tools are usually set up. Instead of being ranked by relevance, my Updates page's 42 results, all presumably judged to cross some relevance threshold, are ranked chronologically, so the most recently archived papers are at the top. The oldest paper is 326 days old; I do not know for sure whether papers will start dropping off the list, but I suspect they will once they reach a year old. I should say that the 42 results are from my list of 'top' recommendations, which I am using for this blog - there is also a less selective list of 'all' recommendations that currently has 99 results. In weeks where no new papers have been added I will go down the list until I reach my first unblogged paper; there hasn't been a lot of activity lately, so this week's paper is currently fifth.

This is another arXiv preprint (discussion here), but slightly unusually the author specifies which venue the paper is intended for - a 'festschrift' special issue of the journal Mathematical Structures in Computer Science in honour of the 90th birthday of Italian computer scientist Corrado Böhm (journal issues as 'birthday presents' to eminent scientists are a rather nice tradition in science, and quite common; one paper of mine is in a festschrift for Gordon Plotkin's 60th birthday, for example).

The subject of this paper is the λ (lambda) calculus, an area to which Böhm made important contributions. The λ-calculus is a mathematical formalism developed in the 1930s by Alonzo Church as a 'theory of functions'. The two basic constructions are application - given a function f and an expression e, we get a new expression f(e) (usually written without brackets) - and abstraction - given an expression e which may mention a variable x, the function λx.e will evaluate by binding whatever argument it is given to x throughout e. For example, λx.x is the identity function that returns whatever it is given unchanged. Finally, raw λ-calculus expressions are considered equal if they differ only by certain variable renamings, or if they simplify to the same thing when evaluated as functions. I should also note that the rhetorical distinction I'm making between 'functions' and 'expressions' is a bit misleading - in the pure λ-calculus everything is a function, and functions happily take other functions as inputs and produce them as outputs.
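
As a concrete picture of the syntax and of evaluation, here is a tiny Haskell sketch - purely my own illustration, not connected to the paper - of λ-terms and a single β-reduction step; to keep it short I assume that bound variable names never clash, so no capture-avoiding renaming is attempted.

-- Untyped λ-calculus syntax and one step of β-reduction at the root.
data Term
  = Var String          -- a variable, x
  | Abs String Term     -- an abstraction, λx. e
  | App Term Term       -- an application, f e
  deriving Show

-- Substitute the term s for the variable x (assuming no name capture).
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Abs y e) = if x == y then Abs y e else Abs y (subst x s e)
subst x s (App f e) = App (subst x s f) (subst x s e)

-- One β-step: (λx. e) s simplifies to e with s substituted for x.
beta :: Term -> Maybe Term
beta (App (Abs x e) s) = Just (subst x s e)
beta _                 = Nothing

-- Example: the identity λx.x applied to λy.y reduces to λy.y, i.e.
-- beta (App (Abs "x" (Var "x")) (Abs "y" (Var "y"))) == Just (Abs "y" (Var "y"))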

Remarkably, the simple system I have sketched can be used to compute any function (on natural numbers) that it is possible to compute! This claim is called the Church–Turing thesis (where 'Turing' refers to the Turing machines of Alan Turing). So the λ-calculus is important and useful to people like me whose job it is to ponder the nature of computation all day.

It's one thing to define a formal system of rules and procedures, but it seems a bit of a pointless game if we don't have a precise understanding of what they all mean. After all, why these rules and not some others? This is a question of semantics, and in the case of the λ-calculus this question has, in the words of this paper, "proved vexed". The simplest interpretation - that λ-expressions correspond to ordinary functions - turns out not to work, essentially for reasons of size: a set would have to be isomorphic to the set of all functions from itself to itself, which is impossible for any set with more than one element. Computer scientists have had 80-odd years to work on this problem, so of course there are now a number of answers on the market; this paper adds a new one, and shows that it is mathematically natural and yields some of the earlier approaches as special cases.

The semantics for the λ-calculus given by Hyland are category theoretic (an approach I discussed generally here). The basic concept is that of an 'operad', although it could also be given in the more familiar (to me, at least) guise of Lawvere theories. The general idea is that for every natural number n we have a set L(n) of all the λ-expressions with at most n free variables. This is the setting for ordinary equational logic (see here). To get to the λ-calculus we must also have functions L(n+1)→L(n) and L(n)→L(n+1), where the first corresponds to abstracting a variable, and the second to applying an expression to a fresh variable. These functions are related because doing the first, then the second, leaves you back where you started - e = (λx.e)x. Doing the second, then the first, need not leave you back where you started, but if we add the requirement that it does - that f = λx.(fx) - we get exactly η-equivalence, a common condition to add on to the basic λ-calculus.
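
In symbols - my paraphrase of the structure just described, writing abs and app for the two maps, not the paper's exact definitions - the data and the two laws are:

\[
\mathrm{abs}_n : L(n+1) \to L(n), \qquad \mathrm{app}_n : L(n) \to L(n+1),
\]
\[
\mathrm{app}_n(\mathrm{abs}_n(e)) = e \quad \text{(the law } (\lambda x.\,e)\,x = e\text{)},
\]
\[
\mathrm{abs}_n(\mathrm{app}_n(f)) = f \quad \text{(the optional } \eta \text{ law, } \lambda x.\,(f\,x) = f\text{)}.
\]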

This is attractive stuff if you are a computer science and category theory nerd - and I am! The rest of the paper is dedicated to showing that this construction, called a 'semi-closed structure', is indeed a good way to interpret the λ-calculus, by relating it to the syntax and to some other notions of semantics. There's a lot in this paper and I couldn't absorb it all in one week - the 'Taylor Fibration' material went a bit over my head - but I still greatly enjoyed it.

Sunday, 3 February 2013

Nominal Event Structures


Authors: Andrei Alexandru and Gabriel Ciobanu (Institute of Computer Science Iași)

Journal reference: Romanian Journal of Information Science and Technology, Volume 15, Number 2, pp. 79-90 (2012).

Comments: I must admit that the fact that this paper hails from the 'Romanian Journal of Information Science and Technology' (ROMJIST) didn't exactly fill me with confidence. My scepticism was not primarily driven by the 'Romanian' in the title - many journals and conferences are sponsored by scientific societies linked to specific countries or regions, and not necessarily the 'obvious' countries either; the conference WoLLIC, through which I have published, has significant sponsorship from Brazil, for example.

Rather, it's the 'Information Science and Technology' in the title that set off alarm bells. How does a journal ensure high quality and credible reviewing when its remit covers the entire field of computer science? If you're the Journal of the ACM that's possible because you've set yourself up as a flagship journal for the whole subject, with a seriously high powered and diverse board of editors (Nature is a more famous and even more wide-ranging example of this sort of model). But for less high profile journals (a category in which ROMJIST obviously sits) a larger degree of specialisation seems essential if quality is to be maintained.

Setting that aside, Google Scholar has recommended this paper to me, and so I read it with an open mind. The reason for the recommendation is clear enough, as it deals with my speciality of nominal sets, a variant of set theory in which each element is linked to a 'finite support' of names. I discussed the computational applications of this model in my first article on this blog. It's a natural and common move to reinterpret mathematical concepts from 'ordinary' set theory into nominal set theory - my own work on equational logic, which I discussed briefly last week, can be seen in this light. This paper makes this move for event structures, abstract structures in which causality and conflict between events can be modelled.
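
For readers new to the area, here is a very small Haskell sketch of the underlying idea - my own illustration, unrelated to anything in the paper under review: nominal structures carry a name-swapping action, and each element has a finite set of names (its support) outside of which swapping has no effect.

-- A sketch of the nominal-sets idea: a type of atoms (names), a swapping
-- action, and a finite set of names supporting each value.
type Atom = Int

class Nominal a where
  swap    :: Atom -> Atom -> a -> a   -- transpose two names throughout a value
  support :: a -> [Atom]              -- a finite set of names supporting the value

-- Atoms themselves form a nominal set, with swapping acting in the obvious way.
instance Nominal Int where
  swap a b c
    | c == a    = b
    | c == b    = a
    | otherwise = c
  support a = [a]

-- Lists of nominal things are again nominal, with the pointwise action.
instance Nominal a => Nominal [a] where
  swap a b = map (swap a b)
  support  = concatMap support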

Here's the problem though - nowhere in this paper is it explained why creating a nominal variant of event structures might be a good idea. No examples are given of these new nominal event structures, no problems are solved using them, no 'further work' section suggests what might be done with them next, and no interesting theoretical or mathematical issues are brought to light. The few novel proofs in this paper are all completely routine checks that finite (or sometimes empty) support properties are not violated by the new definitions. I don't intend this blog to be a space where I 'review' papers in the academic sense, with all the controversy that process inevitably generates, but I can't report on the usefulness of Google Scholar Updates without indicating where the papers I read are not useful, and this is certainly such an instance.

Monday, 28 January 2013

An algebraic presentation of predicate logic


Author: Sam Staton (University of Cambridge).

Conference: 16th International Conference on Foundations of Software Science and Computation Structures (FoSSaCS 2013), 18-20 March 2013, Rome, Italy.

Journal reference: Lecture Notes in Computer Science (to appear).

Comments: After last week's head-scratcher, Google Scholar Updates is back on firmer ground with a paper by somebody I know, from my old university, that cites one of my own papers. In fact, this paper is due to be presented at a conference later this year that I also have a paper at (pdf), so this is a chance to read something before I see it presented.

Equational logic is the logic that allows you to reason about systems defined by equations; its basic rules include facts such as "everything is equal to itself" and "if x is equal to y, then y is equal to x". This may not seem particularly deep, but in fact a vast number of mathematically significant systems are amenable to equational reasoning - groups, rings, Boolean algebras, and so on. I have myself given a short course of lectures on equational logic at the Australian National University's Logic Summer School - my slides can be found here.
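
Written out as inference rules, the core of the logic amounts to the following standard presentation (reflexivity, symmetry, transitivity, congruence for each operation symbol f, and closure under substitution of terms for variables) - this is textbook material, not anything specific to the paper:

\[
\frac{}{\;t = t\;} \qquad
\frac{s = t}{t = s} \qquad
\frac{s = t \quad t = u}{s = u}
\]
\[
\frac{s_1 = t_1 \ \cdots\ s_n = t_n}{f(s_1,\ldots,s_n) = f(t_1,\ldots,t_n)} \qquad
\frac{s = t}{s[u/x] = t[u/x]}
\]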

For all the power of equational logic, both the demands of applications and pure curiosity have often led people to tweak or extend the basic logic. The bulk of my own publications have looked at one such tweak of equational logic, and this paper introduces another (my paper Nominal Equational Logic, with Andrew Pitts, is cited as related work for this reason).

Staton's new style of equational logic is called 'parameterized algebra'. In the standard equational setting, we start with a 'signature' of operation and constant symbols that can be used to build terms. For example, if we are interested in monoids we have an identity constant and a binary multiplication; if we move to groups we add a unary inverse operation, and so forth. With parameterized algebra we have two signatures - a main one, and an ancillary one that is used to define parameters. The difference between this and ordinary equational logic with two sorts is two-fold. First, parameter-sorted variables can be bound by the main signature's operations. Second, parameter terms can be attached to the main signature's variables. For example, the term
∃ a. P[a]
where a is a parameter-sorted variable and P is a main signature variable, indicates that a is bound by the existential quantifier ∃, and that P is 'standing in' for a predicate that may make some mention of a.
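
Very roughly, and purely as my own way of picturing the syntax rather than the paper's formal definition, such two-level terms might be represented by a Haskell datatype like the following, where all the names are mine.

-- Terms over a main signature whose operations may bind parameter-sorted
-- variables, and whose variables carry attached parameter terms, as in ∃a. P[a].
type PVar = String   -- parameter-sorted variables, like a
type MVar = String   -- main-signature variables, like P

data Param
  = PV PVar                     -- a parameter variable
  | POp String [Param]          -- a parameter-signature operation applied to arguments
  deriving Show

data Term
  = MV MVar [Param]             -- a main variable with attached parameters, P[a]
  | MOp String [PVar] [Term]    -- a main operation, binding some parameter variables
  deriving Show

-- The example from the text, ∃a. P[a]: the operation ∃ binds the parameter
-- variable a, and the main variable P carries the parameter term a.
exists :: Term
exists = MOp "exists" ["a"] [MV "P" [PV "a"]]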

Once the new logic is defined, the paper goes in two directions. First, it develops the metatheory of the logic, initially via Lawvere-style classifying categories and then via monads (in fact, relative monads; this is the second of the four papers recommended to me so far to make use of this fairly recently developed notion). Second, a major example is developed: 'disjunctive predicate logic', a fragment of first order logic (lacking e.g. implication) that uses parameters. This example is considered as a small programming language, and it is shown to have a mathematically natural description in the 'classifying category' setting.