Wednesday, 3 January 2024

Discussing "Intuitionistic Type Theory"

(Link)

Author: Per Martin-Löf (University of Stockholm)

Reference: Per Martin-Löf, Intuitionistic Type Theory, Notes by Giovanni Sambin of a series of lectures given in Padua, June 1980 (1984)

Why this paper? Cited by A conservativity result for homotopy elementary types in dependent type theory, Topological Quantum Gates in Homotopy Type Theory, and A parametricity-based formalization of semi-simplicial and semi-cubical sets

Five years on from Martin-Löf’s seminal paper on type theory (on which I have written before; I will try to avoid repeating myself), he presented his approach to a mixed audience of computer scientists, mathematicians, and philosophers. As such, these notes are quite accessible to a range of audiences, with time spent on questions like “What is it we must know in order to have the right to judge something to be a set?”, plenty of examples, and a fairly light touch on metatheorems about his system. However, this is not merely a teaching course on his 1975 material, and we see two major new ideas enter his type theory here (and also in his almost simultaneously published paper Constructive mathematics and computer programming, which covers similar ground but is much briefer).

The first is the reflection rule, which allows ‘propositional’ equalities proved within the system to be used freely to replace equals by equals anywhere (as a fully fledged ‘definitional’ equality). This captures the strong extensional view of equality, as opposed to the intensional view of the older work, in which objects enjoy first class equality only if they are described in the same way, up to some automatic conversions. This is an old distinction from the philosophy of language, where Superman and Clark Kent are extensionally but not intensionally equal, because there are places in language where they are not interchangeable: “Clark Kent is a civilian identity” does not imply “Superman is a civilian identity”. Martin-Löf notably does not discuss the reasons for this change or its ramifications. In fact, it is known to have unfortunate consequences computationally, violating one of the keystone theorems of his earlier paper – the decidability of type checking – among other troublesome aspects. This does not make extensional type theory useless; it is the right tool for much ‘pen and paper’ type theory in which we want to work synthetically in a particular mathematical setting but use the strong form of equality usually preferred by mathematicians.
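To make this concrete, here is the rule in schematic form (my own rendering, approximating Martin-Löf’s notation rather than quoting it): any proof p of the identity proposition licenses the corresponding definitional equality judgement, after which p plays no further role:

    p : Id(A, a, b)
    ───────────────
    a = b : A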

The second new idea is the introduction of W-types (at least, I believe this is their first appearance), which give a general tree-like syntax for inductive types such as the natural numbers and lists.
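To give a flavour (again my own sketch, not a quotation from the notes): an element of the W-type (W x : A) B(x) is a well-founded tree sup(a, f), built from a ‘shape’ a : A together with a function f supplying a subtree for each ‘position’ in B(a):

    a : A     f : B(a) → (W x : A) B(x)
    ───────────────────────────────────
    sup(a, f) : (W x : A) B(x)

The natural numbers, for example, arise by taking A to be a two-element set (one element each for ‘zero’ and ‘successor’), with B sending the first element to the empty set (zero has no subtrees) and the second to a one-element set (a successor has exactly one subtree).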

Finally, I would note the interesting fact that, linguistically at least, Martin-Löf repeatedly conflates the terms ‘type’ and ‘set’. This does not reflect most modern practice, in which category theory is usually the language of choice for defining denotational semantics, opening the way for the interpretation of type theory in mathematical settings other than sets. Indeed, in homotopy type theory (invented in part to find an interesting middle way between extensional and intensional equality) the notion of type is strictly more general than that of set: a set is a type without interesting equalities.

Sunday, 2 April 2023

Discussing "Normalization for Multimodal Type Theory"

(Link)

Author: Daniel Gratzer (Aarhus University)

Reference: Daniel Gratzer. 2022. Normalization for Multimodal Type Theory. In Proceedings of the 37th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS '22). Association for Computing Machinery, New York, NY, USA, Article 2, 1–13. https://doi.org/10.1145/3531130.3532398

Why this reference? Cited by Semantics of Multimodal Adjoint Type Theory, A Categorical Normalization Proof for the Modal Lambda-Calculus, and For the Metatheory of Type Theory, Internal Sconing Is Enough

While my reading program for this blog often sends me bouncing around from subject to subject, this blog post is exactly the sequel to the last one, as one of the authors of the Multimodal Type Theory paper tackles the question of normalisation for these systems. Normalisation is the property that all programs in a language terminate; this is not a typical property of programming languages, but it matters for dependent types: because terms can appear in types, type checking can require execution of arbitrary code, and so termination of type checking in general requires termination of all possible programs. Terminating type checking is a standard requirement, but it is particularly desirable when we use type theories to formalise mathematics. In this paradigm type checking corresponds to proof checking, and it would be a bizarre violation of usual practice for it to be impossible to decide whether a putative proof of a theorem is indeed valid.
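As a minimal illustration of the point that type checking must compute (my own example, in plain Coq rather than any modal system), the following definition is only accepted because the checker evaluates 2 + 2 while comparing types:

    (* Type checks only because the checker reduces 2 + 2 to 4. *)
    Definition two_plus_two : 2 + 2 = 4 := eq_refl.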

Proving normalisation of even quite simple type theories, such as the simply typed lambda calculus, is not easy (although the techniques are now well studied and understood). While evaluation is sometimes called ‘reduction’, terms do not in fact reduce in size, or in any other obvious sense, as they compute. Matters get even more difficult in this paper, which concerns not one system but multiple different type theories, one for each choice of mode theory. A recent proof of the property for one particular modal type theory was not encouraging, requiring a 77-page technical report to state all the details. This paper also has a companion technical report, weighing in at a comparatively svelte 39 pages, and yet proves the result for all modal type theories in the framework (provided that the mode theory chosen is not itself undecidable). How is this possible?

The technology behind this is Synthetic Tait Computability, as developed quite recently by Sterling and Harper. I am not yet entirely on top of this material, and frankly probably need to become so to appreciate the technical details of this quite dense paper (at least the Aarhus group, to which I used to belong, is good about publishing longer follow-ups in journals rather than leaving work in ultra-terse conference-truncated form, as sometimes happens with important or promising work in our field, so a more approachable but still peer-reviewed version of this work may well be coming).

Given the technical prerequisites, perhaps a blow-by-blow account of the development would not be so illuminating in a short blog post. Suffice it to say that the approach is deeply category theoretic; even the syntax is defined (it is claimed as a theorem, but looks more like a definition to me) to be an initial model, where a model is defined as a certain 2-functor (this provides a form of ‘fully annotated’ syntax which would probably be the target of an elaboration algorithm, rather than something pleasant to work with directly, but that is outside the bounds of this paper). Similarly, the normalisation algorithm is defined via the construction of a ‘normalisation cosmos’ (a cosmos is a weaker notion of model, avoiding some strictness requirements). While I am used to the denotational semantics (‘mathematical meaning’) of type theory being expressed in such terms, and syntax as initial model is fine as well, defining syntactic algorithms via category theory is a bit of a new one on me. It’s always good to have plenty to learn!

Tuesday, 21 March 2023

Discussing "Multimodal Dependent Type Theory"

(Link)

Authors: Daniel Gratzer (Aarhus University), G. A. Kavvos (Aarhus University), Andreas Nuyts (KU Leuven), and Lars Birkedal (Aarhus University)

Reference: Daniel Gratzer, G. A. Kavvos, Andreas Nuyts, and Lars Birkedal. 2021. Multimodal Dependent Type Theory. In Logical Methods in Computer Science, Volume 17, Issue 3 - https://doi.org/10.46298/lmcs-17(3:11)2021

Why this reference? Cited by A Categorical Normalization Proof for the Modal Lambda-Calculus, Fitch-Style Applicative Functors, and Commuting Cohesions

I’ve discussed the links between types and modal logic a few times on this blog, most notably two weeks ago, when I discussed Davies and Pfenning’s addition of a certain modality to a simply typed functional language. This paper, based on the shorter conference paper of the previous year, extends the state of play by introducing a general recipe for integrating modalities, including multiple modalities at once, into dependent type theory. As long as the recipe is followed (as we will see, this is not always exactly possible), certain desirable properties such as canonicity are guaranteed, removing the need to work from scratch to derive such properties every time a new modal type theory is proposed.

The two main technologies making this possible are Fitch-style type theory, which I discussed two weeks ago, and the mode theories introduced recently by Licata and Shulman. Given how recently I’ve discussed Fitch-style types, I’ll limit myself to briefly noting that they provide a setting for modal type theory that is appealingly general and close to informal modal reasoning, and that their usual formulation is modified here to push some of the complexity of the system from the modal elimination rules to the variable rules, which turns out to be beneficial because it avoids certain ‘trimming’ operations on contexts.
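Schematically, and eliding the mode annotations (this is my paraphrase of the paper’s rules, not a quotation), introducing a modal type locks the context, and it is then the variable rule that checks whether a variable may be used from behind the accumulated locks:

    Γ, 🔒_μ ⊢ M : A
    ──────────────────────
    Γ ⊢ mod_μ(M) : ⟨μ | A⟩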

Mode theories are three-layer structures organised as 2-categories (for some applications, some of these layers may be trivial). These layers are best illustrated through the example of guarded recursion, a setting in which types are regarded as unfolding over time, which is helpful for ensuring that certain recursive constructions avoid the impossible circularity of parts that are constructed in terms of other parts that are not yet constructed. The first layer of a mode theory consists of the modes, which correspond to different dependent type theories. Often one mode suffices, but with guarded recursion it is convenient to have two, allowing the programmer to switch back and forth between regarding types as varying over time, or as lacking this property; in the latter mode, ‘vanilla’ dependent type theory suffices. The second layer consists of the modalities, which map from mode to mode. With guarded recursion, the ‘later’ modality starts and finishes in the time-varying mode, but is meaningless in the constant mode; two further modalities allow passage between the varying and constant modes. The third layer, the 2-cells, defines maps between modalities, which give fine control over details such as whether an S4-like modality is idempotent.
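As a rough sketch of the shape of this example (my summary, with suggestive rather than official names): there are two modes t (time-varying) and s (static), and three modalities between them,

    ℓ : t → t    (‘later’: one step in the future)
    δ : s → t    (view a static type as constant over time)
    γ : t → s    (take the global, all-times view of a varying type)

together with 2-cells such as one relating the identity modality to ℓ, which is what equips ‘later’ with its ‘next’ operation (I elide the direction conventions here).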

The guarded recursion example is also helpful for observing some of the limitations of this setup. The ‘Löb induction’ property of the later modality, a technical requirement that gives guarded recursion its power and significance, cannot be expressed in a mode theory and must be imposed on top of the output of the general recipe. Once we leave the recipe, the powerful general theorems proved about recipe-obeying type theories are no longer guaranteed to apply. More generally, the notion of a mode theory as a 2-category is quite abstract, and it will be interesting to see how they come to be realised as syntax, and how their limitations come to be understood.
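For reference, Löb induction is the principle (writing ▷ for the later modality):

    löb : (▷A → A) → A

that is, one may construct an element of A using a self-reference that is guarded by ▷.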

Another limitation, not so significant in the case of guarded recursion but of interest elsewhere, is that every modality that fits into the framework is a ‘necessity’ or ‘box’ style of modality, commuting with products. Some modalities of interest elsewhere are the ‘possibility’ or ‘diamond’ modalities, which (usually) commute with coproducts instead. Admittedly such modalities currently have considerably less application as types than the modalities of this paper, but I am not completely clear whether that will be an enduring truth, or whether it is true only because we do not yet have a good general understanding of how to fit them into type theory.
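In symbols, the contrast is between the two distribution laws

    ☐(A × B) ≃ ☐A × ☐B    (box commutes with products)
    ◇(A + B) ≃ ◇A + ◇B    (diamond commutes with coproducts)

of which only the first is guaranteed by the framework.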

Next week my paper-picking algorithm will see me consider a direct sequel paper to this one, so it will be interesting to see how these important and relevant ideas continue to develop.

Tuesday, 7 March 2023

Discussing "The Coq Proof Assistant"

(Link)

Authors: The Coq Development Team

Reference: The Coq Development Team. The Coq proof assistant, Version 8.16, September 5, 2022

Why this reference? Cited by Type Theory as a Language Workbench and Asynchronous Probabilistic Couplings in Higher-Order Separation Logic

This blog is a project to encourage myself to read more widely, and hopefully to spread some knowledge to those who stumble across it, by looking at multiply-cited papers in my recent Google Scholar recommendations. Yet papers are not the only things that get cited; I’ve looked at books, monumental pairs of books, and now, my first software. Reviewing software in a week is not an obvious task, as the ‘content’ of software is not encapsulated by its source code alone, but also lives in the ecosystem around it – tutorials, libraries, applications, active forums and communities, and so forth. To make this task tractable, I’ve decided to concentrate on the Coq User Manual as my reference, and use that to give an overview of Coq.

At its base, Coq is an implementation of a certain version of Martin-Löf type theory. The intention, as usual, is to provide a type system expressive enough to naturally express mathematical statements, along with tools that allow users to attempt to prove these statements in an interactive style, with the computer checking each step and assisting with automatic proving where possible. Coq has enjoyed conspicuous success in this, with the manual making particular mention of the CompCert verified C compiler and the verification of the four colour theorem.

The name, which is French for rooster but inevitably attracts sniggers, is a play on CoC, the Calculus of Constructions, the version of type theory on which the original software was based in 1983. It also works as a pun on the name of Thierry Coquand, whose PhD thesis provided the original impetus (the original software was mostly written by Gérard Huet). The version of type theory that currently underlies Coq is CIC, the Calculus of Inductive Constructions, as introduced by Coquand and Christine Paulin-Mohring. This does not mean that the underlying type theory is fixed and static, however; for example, there was major development on the treatment of coinduction between versions 8.5 and 8.9 (2015-18), and there are currently two elements of the trusted base that are labelled as experimental (universe polymorphism and a type of proof-irrelevant propositions).

The first layer built on top of the implementation of the core language CIC consists of the language extensions, which are translated to the core language in a process called elaboration. To give some idea of relative sizes, the manual spends 97 pages on the core language, and 120 on the extensions. Features such as implicit coercions and typeclasses allow the user to write more natural looking mathematical statements with less clutter, wherever said clutter can be inferred automatically.
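A small example of the flavour of these extensions (my own, not drawn from the manual): with implicit arguments, a type parameter can be omitted at call sites and left to the elaborator to infer:

    (* The braces make A implicit; Coq infers it from the argument x. *)
    Definition identity {A : Type} (x : A) : A := x.

    Check identity 3.    (* reports: identity 3 : nat, with A inferred *)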

The ability to express mathematical statements is not particularly valuable without the ability to prove them. Naively, one might expect proofs to proceed in a functional style, as CIC is essentially a functional language with a particularly expressive type system. Indeed, every Coq proof is ‘really’ a functional program, which can be inspected. But in practice these programs are not written directly in functional style, but rather generated in imperative style by commands called tactics, which are called in sequence and modify the state of a proof (its current goals and assumptions). Some tactics are fairly low level, for example moving the left hand side of an implication goal into the assumptions. Others are considerably more sophisticated, attempting to automatically discover non-trivial proofs in a style reminiscent of the logic programming language Prolog. There is also a bifurcation between two major packages of tactics, the standard collection and the independently produced SSReflect; these are intended to be merged in the future. Note that neither of these collections of tactics, nor any others, need be trusted: they attempt to produce proofs, but these ‘proofs’ are not accepted until the core language type-checks them successfully.
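Here is a minimal example of the style (my own, chosen for brevity rather than taken from the manual): each tactic transforms the proof state, and the final Qed asks the kernel to type-check the functional program the tactics have built behind the scenes:

    Lemma and_comm' : forall P Q : Prop, P /\ Q -> Q /\ P.
    Proof.
      intros P Q H.         (* move P, Q and the hypothesis into context *)
      destruct H as [p q].  (* split the conjunction hypothesis          *)
      split.                (* split the goal Q /\ P into two subgoals   *)
      - exact q.
      - exact p.
    Qed.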

If the default tactics are not sufficient, users can create their own using a language called Ltac. This language is described by the manual as somewhat ad hoc and unclear in its semantics, and so another currently experimental feature (good enough that users are encouraged to try it) is Ltac2, a tactic-definition language enhanced with desirable programming language features such as types.
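As a small taste of what Ltac allows (a minimal sketch of my own), existing tactics can be combined into new ones:

    (* Repeatedly split conjunctive goals, then close by assumption. *)
    Ltac split_and_finish := repeat split; try assumption.

    Goal forall P Q : Prop, P -> Q -> P /\ Q.
    Proof. intros P Q p q. split_and_finish. Qed.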

As one would expect for a mature language, a range of libraries and plugins has been developed for Coq. Again, even the most basic libraries do not need to be trusted, as they can be checked by the core type checker. There are also several choices of IDE, a range of introductory books and tutorials, and so on. One facility of particular note is the ability to extract code from Coq to other programming languages (currently Haskell, Scheme, and OCaml). At best, this creates certified-correct code that can run in a more efficient system than Coq. At worst, code extraction is not completely reliable due to mismatches between the semantics of Coq and the target languages, and so the manual even features a brief section colourfully titled “Extraction’s horror museum”.
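As a taste of the mechanism (a minimal sketch; the manual documents many options for tuning the output), extraction can be invoked directly on a definition from the standard library:

    Require Extraction.
    Extraction Language OCaml.
    Recursive Extraction Nat.add.    (* prints OCaml code for addition *)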

Tuesday, 28 February 2023

Discussing "A Modal Analysis of Staged Computation"

(Link)

Authors: Rowan Davies and Frank Pfenning (Carnegie Mellon University)

Reference: Davies, Rowan, and Frank Pfenning. "A modal analysis of staged computation". Journal of the ACM (JACM) 48.3 (2001): 555-604.

Why this paper? Cited by A Fibrational Approach to Multiplicative Additive Indexed Linear Logic and Proofs and Refutations for Intuitionistic and Second-Order Logic

This week we have some work I know and love (although I think I’ve spent more time with the shorter conference version, published in 1996). The central idea is to let programmers state in their type system the order (expressed as stages) in which their program will execute. This allows various optimisations to be applied automatically at compile time. The mechanism is a new type constructor, written ☐. A term of type ☐A is understood as unevaluated code of type A. For example, there is always a function from ☐A to A, which works by evaluating the code.
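Schematically (my reconstruction, approximating the paper’s notation), the two directions look like this:

    box M : ☐A    (quotation: M : A must use no ordinary free variables)
    eval : ☐A → A,    eval = λx. let box u = x in u    (run the code)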

The cool thing here for a logician interested in the Curry-Howard isomorphism between proofs and programs is that this new type-former behaves exactly like the necessity, or box, modality of the well known modal logic S4 (more precisely, the intuitionistic version of this logic, without the possibility, or diamond, modality). For example, necessitation (a standard feature of all normal modal logics) corresponds to the ability to treat an arbitrary closed program as code, which does not hold of programs with free variables. So this paper, developing as it does lambda calculi for this box, is both a contribution to a particular programming language problem and to the proof theory of an important modal logic.

There are a number of different calculi developed in this paper, with the most important split between the dual context and context stack approaches. This nicely prefigures more recent work in modal lambda calculus, which supports the conclusion that while other approaches work for specific modal logics, these are the two approaches powerful enough to work for a large variety of modal logics; see here for a modern view of the first approach, and a paper of mine for the second. Another split is between the calculi that hew closely to the simply typed lambda calculus and (although this is not proved, it is fairly clear) are guaranteed to terminate, versus the ‘Mini-ML’ variations that achieve Turing completeness via general fixed points, at the cost of muddying the relationship with logic.
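For a flavour of the first approach: dual context calculi use judgements of the form Δ; Γ ⊢ M : A, where Δ holds ‘code’ variables and Γ ordinary ones, and the introduction rule for ☐ builds in necessitation by insisting that the quoted term use code variables only:

    Δ; · ⊢ M : A
    ──────────────────
    Δ; Γ ⊢ box M : ☐A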

Given that the context stack approach is close to my heart, I’ll make some more specific comments on it. Without getting distracted by bikeshedding, I'll briefly note that I do prefer the older term for this approach, Fitch-style (named in honour of work in proof theory by Frederic Fitch), over the coinage of Kripke-style here (Saul Kripke’s famous ‘possible worlds’ semantics should correspond to any reasonable proof theory, although admittedly Fitch’s approach feels closest to this semantics). The basic idea is that use of our box modality corresponds to navigating through various layers of contexts; this is a wonderfully general idea, not fixed to the particular logic S4.

One interesting feature of S4 is that ☐…☐A is logically equivalent to ☐A for any number of boxes. My work has shown that if we are willing to strengthen this, making all such boxed types isomorphic (true for many applications, but perhaps not appropriate for this work), we can significantly simplify the calculus of this paper. This works by removing the need for natural number indices to track how many layers of context are being popped at a given point of the program, and hence removing the need for the ‘modal structural transformations’ in the reduction relation that relabel indices at various points. This actually contradicts the assertion in this paper that indices are necessary because of a certain ‘coherence’ problem; in fact that problem can be shown to resolve itself (the proof of this is not that easy), so the indices are only necessary to preserve, if desired, the generality of having ☐…☐A be possibly non-isomorphic (genuinely different) to ☐A.
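Concretely, the logical equivalence follows from the two characteristic S4 axioms, which in their type-theoretic guise are functions available at every type:

    ☐A → A      (axiom T: evaluate code)
    ☐A → ☐☐A    (axiom 4: quote code as code-for-code)

and these compose to give maps in both directions between ☐A and ☐…☐A.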

Monday, 20 February 2023

Discussing "On the homotopy groups of spheres in homotopy type theory"

(Link)

Author: Guillaume Brunerie (Université Nice-Sophia-Antipolis)

Reference: Brunerie, Guillaume. “On the homotopy groups of spheres in homotopy type theory.” PhD Thesis, Université Nice-Sophia-Antipolis. 2016

Why this paper? Cited by Formalizing π4(S3)≅ ℤ/2ℤ and Computing a Brunerie Number in Cubical Agda, Commuting Cohesions, and Encoding Dependently-Typed Constructions into Simple Type Theory

Many PhD dissertations are constructed by ‘stapling together’ a variety of different research papers generated during the years of study, with a varying level of effort put into writing connective tissue between them. This is very much not one of those dissertations. Instead, it is a book-length development of a single mathematical fact, that π4(S3)≅ℤ/2ℤ. Of course, unpacking this line of mathematics would take time – ‘the fourth homotopy group of the 3-sphere is isomorphic to the two-element group’ is a first unpacking, but there is much jargon in there that would itself require unpacking – but the focus on this one result is still notable.

I will of course not pretend to do full justice to the multiple years of effort that went into this thesis in the space of a short blog post, but will instead try to give some high level feeling for what is going on here. At first glance it is perhaps a bit perplexing; the core result is a well known (although not easy) classical result in homotopy theory, known since the middle of the twentieth century; why is a modern PhD thesis spending hundreds of pages proving it? The answer is that this is a landmark effort within homotopy type theory (HoTT); indeed, it is perhaps the most substantial development of a major result using HoTT. Translating this result into the foundations provided by HoTT turns out to be far from trivial. At one level, working within this type theory is helpful, for example obviating the need to ever prove that functions are continuous (well-behaved), because they must be continuous by construction. In particular, the difficulty of one especially gnarly part of the standard development, proving that the associativity of the smash product is suitably continuous, simply disappears. Furthermore, because the work is ‘synthetic’, in the sense of not being tied to any particular presentation of topology, it is highly general, and the core development is not obscured by the particular details of a concrete choice (e.g. working via point-set topology, or simplicial sets).

However, there is a catch: the standard development of results in homotopy theory often contains intermediate constructions that are not well behaved in the sense that HoTT enforces, even where the final product is well behaved. This means that the standard development, at least in this case, cannot be translated line by line into the new formalism as one might naively hope: Brunerie notes in his superbly reflective concluding chapter that he “often had to find completely different definitions, proofs or statements that would work in homotopy type theory”. The point of all this effort is, of course, not merely to reprove old results, but to pave the way for new advances in homotopy theory (and other areas of mathematics) using HoTT as a tool that avoids certain errors by enforcing certain invariants, in a style amenable to mechanisation.

Having said that, one notable aspect of this development is that it is not mechanised (encoded into a computer), although mechanisation is certainly discussed. This perhaps flies in the face of some people’s intuitions that type theory, resembling as it does a programming language, is innately connected to computer mathematics. Instead this is a significant development of pen and paper type theory, in the 'informal' style recommended by the original HoTT book.

Another interesting aspect of this thesis is its constructivity, and what that means in practice. At the end of chapter 3 it is proved that π4(S3)≅ℤ/nℤ for some integer n. The nature of constructive mathematics is that an existential result of this sort cannot be proved without generating its ‘witness’; in other words, we should have access to the value of the number n (in this case, 2). More precisely, if a witness is not immediately clear from the proof as written, the proof should compute until it produces the witness. This does not happen in this thesis; instead three more chapters are needed to prove that n is in fact equal to 2. What went wrong? One might wonder if the problem is that the proof is pen and paper and hence cannot compute, but although this is part of the problem it is not decisive: one could try to carry out the computations by hand. Unfortunately these calculations proved too difficult to complete. An early attempt to develop the thesis within cubical type theory, a fully constructive presentation of HoTT which hence would compute, did not prove successful, and led Brunerie to conclude that “for informal synthetic homotopy theory as done in this thesis, cubical ideas aren’t especially helpful in general”.

There is an important addendum to this picture, however: I was recommended this paper in part because of its citation in a just-released preprint, in which the results of this thesis (in slightly modified form) were fully mechanised in Cubical Agda by Axel Ljungström and Anders Mörtberg, allowing the ‘Brunerie number’ (that is, 2) to finally be computed from the proof that π4(S3)≅ℤ/nℤ for some integer n.