Monday 27 May 2013

Instances of computational effects: an algebraic perspective


Authors: Sam Staton (University of Cambridge)

Conference: Twenty-Eighth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2013), June 25-28, 2013, New Orleans, USA.

Journal reference: Proceedings to appear.

Comments: Google Scholar Updates' top recommendation this week was something a bit different - my old PhD supervisor Andrew Pitts has just published the first full-length book on the relatively new field of nominal sets, titled Nominal Sets: Names and Symmetry in Computer Science. As a recommendation for reading it is of course very much in line with my interests, but as something to write about on this blog at a week's notice it is not so realistic (Google Books offers a preview if you're curious).

Let's turn instead to the second recommendation in my queue, which also happens to come from Cambridge. In fact it is a sequel to the Staton paper I wrote about four months ago, continuing his development of parameterized algebra; please refer to my earlier post for a reminder of what that is.

In this paper parameterized algebra is put to work to talk about computational effects. If we think of a computer program as a process that, after some computation, returns a result, a (side) effect refers to the things the computer does while that result is being produced. For example, the computer might write something to the screen, or change something in memory. Less obviously, accepting user input is also a sort of side effect. Whether such events are treated as side effects or as part of the value returned is somewhat flexible, and can differ between programming paradigms and languages, but the separation of computations into the values they calculate and the effects they produce en route is hard to avoid entirely, even in the relatively 'pure' world of functional programming.
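To make the separation concrete, here is a minimal Haskell sketch (the function names are my own): the first function is pure, while the IO in the type of the second records that input and output effects may happen on the way to the integer it returns.

    -- A pure computation: the type promises nothing but a value.
    double :: Int -> Int
    double x = x + x

    -- An effectful computation: the IO in the type records that, besides
    -- returning an Int, the program may write to the screen and read input.
    askAndDouble :: IO Int
    askAndDouble = do
      putStrLn "Enter a number:"   -- effect: writing to the screen
      n <- readLn                  -- effect: accepting user input
      return (double n)            -- the value eventually returned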

A big development in functional programming that allowed effects to be handled more transparently came with Moggi's 1991 paper Notions of computation and monads. Monads are a concept we've encountered a few times on this blog, most extensively here, and essentially give a concise category-theoretic way to describe equationally defined, or algebraic, models. Monads are a core concept of category theory because of their relationship to the basic idea of adjunctions, but they hardly seem likely candidates to impact the working practices of your average Haskell programmer. And yet, Moggi and later authors showed that monads are a nice way to 'package up' effects, making effectful functional programs more structured.
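As a small illustration (my own example, not Moggi's), the Maybe monad packages 'possible failure' as an effect: the do-notation hides the plumbing of checking for failure after every step.

    -- Division that may fail, with the possibility of failure recorded in the type.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    -- Without the monad we would pattern match on Nothing after each call;
    -- with it, failure propagates automatically through the do-block.
    average :: Int -> Int -> Int -> Maybe Int
    average x y z = do
      a <- safeDiv x z
      b <- safeDiv y z
      return ((a + b) `div` 2)

The State monad plays the same role for memory and the IO monad for input and output; in each case the type constructor records which effects a computation is allowed to perform.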

The next big milestone in the intellectual history behind this paper is Plotkin and Power's 2002 Notions of computation determine monads, deliberately titled as a sequel to Moggi's classic. This paper essentially traced the practical issue - effects - back to the underlying theory - equational definitions - and asked how the monads used to handle effects can themselves be defined equationally. The work was in a sense experimental, in that it gave a few examples showing that such definitions are both possible and illuminating, without building any general theory. Staton to an extent follows this example-driven approach, looking at computational effects that can be thought of as multiple instances of a general concept, such as writing to a specific memory bit being an instance of the general idea of writing to memory. However, he moves beyond the examples to argue that the effects in question are correctly captured by his theory of parameterized algebra, which thus provides a syntactic and semantic framework through which many interesting side effects can be defined, categorised, and studied.
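To give the flavour of such an equational definition, here is a sketch (my own rendering, not lifted from either paper) of the 'global state' theory for boolean memory cells indexed by a location; the location is exactly the kind of 'instance' parameter that Staton's framework is designed to handle.

    -- The store assigns a boolean to every location; each location is one
    -- 'instance' of the general notion of a memory cell.
    type Loc   = Int
    type Store = Loc -> Bool

    -- The two algebraic operations of the theory, interpreted here as
    -- continuation-passing state transformers.
    lookupOp :: Loc -> (Bool -> Store -> a) -> Store -> a
    lookupOp l k s = k (s l) s

    updateOp :: Loc -> Bool -> (Store -> a) -> Store -> a
    updateOp l b k s = k (\l' -> if l' == l then b else s l')

    -- Sample equations in the style of Plotkin and Power, which this
    -- interpretation satisfies (up to extensional equality of stores):
    --   lookupOp l (\b -> updateOp l b k)  =  k                 -- read then write back: no effect
    --   updateOp l b (updateOp l b' k)     =  updateOp l b' k   -- a later write overrides an earlier one

Quotienting terms built from these operations by the full set of equations recovers the usual state monad, which is the sense in which notions of computation determine monads.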
