Sunday, 2 April 2023

Discussing "Normalization for Multimodal Type Theory"

(Link)

Author: Daniel Gratzer (Aarhus University)

Reference: Daniel Gratzer. 2022. Normalization for Multimodal Type Theory. In Proceedings of the 37th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS '22). Association for Computing Machinery, New York, NY, USA, Article 2, 1–13. https://doi.org/10.1145/3531130.3532398

Why this reference? Cited by Semantics of Multimodal Adjoint Type Theory, A Categorical Normalization Proof for the Modal Lambda-Calculus, and For the Metatheory of Type Theory, Internal Sconing Is Enough

While my reading program for this blog often sends me bouncing around from subject to subject, this blog post is exactly the sequel to the last one, as one of the authors of the Multimodal Type Theory paper tackles the question of normalisation for these systems. Normalisation is the property that every program in a language terminates (more precisely, that every term reduces to a normal form). This is not a typical property of programming languages, but it matters greatly for dependent types: because terms can appear in types, type checking can require the execution of arbitrary code, and so terminating type checking in general requires the termination of all possible programs. Terminating type checking is a standard requirement, and it is particularly desirable when we use type theories to formalise mathematics: in that paradigm type checking corresponds to proof checking, and it would be a bizarre violation of usual practice if it were impossible to decide whether a putative proof of a theorem is valid.
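
As a minimal illustration of this point (a toy example of my own, written in Coq, and nothing to do with the paper's actual calculus), the type checker can only accept the final definition below by running the program 2 + 2 inside a type:

```coq
(* Length-indexed vectors: the length is part of the type. *)
Inductive Vector (A : Type) : nat -> Type :=
  | vnil : Vector A 0
  | vcons : forall n, A -> Vector A n -> Vector A (S n).

(* Accepting this definition forces the checker to compute 2 + 2 down to 4;
   in general such computations can be arbitrary (terminating) programs. *)
Definition coerce (A : Type) (v : Vector A (2 + 2)) : Vector A 4 := v.
```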

Proving normalisation of even quite simple type theories, such as the simply typed lambda calculus, is not easy (although the techniques are by now well studied and understood). While evaluation is sometimes called ‘reduction’, terms do not in fact shrink in size, or in any other obvious sense, as they compute. Matters get even more difficult in this paper, which concerns not one system but a whole family of type theories, one for each choice of mode theory. A recent proof of the property for one particular modal type theory was not encouraging, requiring a 77 page technical report to state all the details. This paper also has a companion technical report, weighing in at a comparatively svelte 39 pages, and yet it proves the result for all modal type theories in the framework (provided that the chosen mode theory is not itself undecidable). How is this possible?

The technology behind this is Synthetic Tait Computability, as developed quite recently by Sterling and Harper. I am not yet entirely on top of this material, and frankly probably need to become so to appreciate the technical details of this quite dense paper. At least the Aarhus group, to which I used to belong, is good about publishing longer journal follow-ups rather than leaving work in ultra-terse conference-truncated form, as sometimes happens with important or promising work in our field, so a more approachable but still peer-reviewed version of this work may well be coming.

Given the technical prerequisites, perhaps a blow-by-blow account of the development would not be so illuminating in a short blog post. Suffice it to say that the approach is deeply category theoretic; even the syntax is defined (it is claimed as a theorem, but looks more like a definition to me) to be an initial model, where a model is defined as a certain 2-functor (this would provide a form of ‘fully annotated’ syntax which would probably be the target of an elaboration algorithm rather than something pleasant to work with directly, but that is outside the bounds of this paper). Similarly, the normalisation algorithm is defined via the construction of a 'normalisation cosmos' (a cosmos is a weaker notion of model, avoiding some strictness requirements). While I am used to the denotational semantics (‘mathematical meaning’) of type theory being expressed in such terms, and syntax as an initial model is fine as well, defining syntactic algorithms via category theory is a bit of a new one on me. It’s always good to have plenty to learn!

Tuesday, 21 March 2023

Discussing "Multimodal Dependent Type Theory"

(Link)

Authors: Daniel Gratzer (Aarhus University), G. A. Kavvos (Aarhus University), Andreas Nuyts (KU Leuven), and Lars Birkedal (Aarhus University)

Reference: Daniel Gratzer, G. A. Kavvos, Andreas Nuyts, and Lars Birkedal. 2021. Multimodal Dependent Type Theory. In Logical Methods in Computer Science, Volume 17, Issue 3 - https://doi.org/10.46298/lmcs-17(3:11)2021

Why this reference?
Cited by A Categorical Normalization Proof for the Modal Lambda-Calculus, Fitch-Style Applicative Functors, and Commuting Cohesions

I’ve discussed the links between types and modal logic a few times on this blog, most notably two weeks ago, when I discussed Davies and Pfenning’s addition of a certain modality to a simply typed functional language. This paper, based on the shorter conference paper of the previous year, extends the state of play by introducing a general recipe for integrating modalities, including multiple modalities at once, into dependent type theory. As long as the recipe is followed (as we will see, this is not always exactly possible), certain desirable properties such as canonicity are guaranteed, removing the need to derive such properties from scratch every time a new modal type theory is proposed.

The two main technologies making this possible are Fitch-style type theory, which I discussed two weeks ago, and the mode theories introduced recently by Licata and Shulman. Given how recently I’ve discussed Fitch-style types, I’ll restrict myself to saying briefly that they provide a setting for modal type theory that is appealingly general and close to informal modal reasoning, and that their usual formulation is modified here to push some of the complexity of the system from the modal elimination rules onto the variable rule, which turns out to be beneficial because it avoids certain ‘trimming’ operations on contexts.
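
Very roughly, and with the real side conditions (which in MTT involve 2-cells from the mode theory) elided, the shape of the contrast is as follows; this is my own schematic rendering, not the paper's notation:

```latex
% Modal introduction works under a 'lock' annotated by the modality...
\[
\frac{\Gamma,\,\mathbf{lock}_{\mu} \vdash t : A}
     {\Gamma \vdash \mathsf{mod}_{\mu}\,t : \langle \mu \mid A \rangle}
\]
% ...and the bookkeeping that Fitch-style eliminators would perform by trimming
% the context is instead performed when a modally annotated variable is used:
\[
\frac{(x : (\mu \mid A)) \in \Gamma
      \qquad \text{the locks to the right of } x \text{ are compatible with } \mu}
     {\Gamma \vdash x : A}
\]
```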

Mode theories are three-layer structures organised as 2-categories (for some applications, some of these layers may be trivial). These layers are best illustrated through the example of guarded recursion, a setting in which types are regarded as unfolding over time, which is helpful for ensuring that certain recursive constructions avoid the impossible circularity of parts being constructed in terms of other parts that have not been constructed yet. The first layer of a mode theory consists of the modes, each of which corresponds to a dependent type theory. Often one mode suffices, but with guarded recursion it is convenient to have two, allowing the programmer to switch back and forth between regarding types as varying over time and regarding them as lacking this property; in the latter mode, ‘vanilla’ dependent type theory suffices. The second layer consists of the modalities, which map from mode to mode. With guarded recursion, the ‘later’ modality starts and finishes in the time-varying mode, but is meaningless in the constant mode; two further modalities allow passage between the varying and constant modes. The third layer, the 2-cells, consists of maps between modalities, which give fine control over details such as whether an S4-like modality is idempotent.
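
To make the three layers concrete, here is a rough sketch in Coq of the raw data a mode theory supplies. This is purely illustrative and my own; it is not the paper's definition, and all of the 2-categorical laws (associativity, units, interchange) are omitted:

```coq
(* Illustrative only: just the raw data of a mode theory, laws omitted. *)
Record ModeTheory := {
  mode : Type;                                     (* layer 1: the modes *)
  modality : mode -> mode -> Type;                 (* layer 2: modalities between modes *)
  cell : forall m n : mode,
      modality m n -> modality m n -> Type;        (* layer 3: 2-cells between modalities *)
  mod_id : forall m : mode, modality m m;          (* identity modality *)
  mod_comp : forall m n o : mode,
      modality n o -> modality m n -> modality m o (* composition of modalities *)
}.
```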

The guarded recursion example is also helpful for observing some of the limitations of this setup. The ‘Löb induction’ property of the later modality, a technical requirement that gives guarded recursion its power and significance, cannot be expressed in a mode theory and must be imposed on top of an instantiation of the general recipe. Once we step outside the recipe, the powerful general theorems proved about recipe-obeying type theories are no longer guaranteed to apply. More generally, the notion of a mode theory as a 2-category is quite abstract, and it will be interesting to see further development of how mode theories are realised as syntax, and of how their limitations can be understood.
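
For reference, Löb induction is the principle, stated here in its usual form with ▷ the later modality, that a goal may be proved under the assumption that it already holds ‘later’:

```latex
\[
(\mathop{\triangleright} A \to A) \to A
\]
```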

Another limitation, not so significant in the case of guarded recursion but of interest elsewhere, is that every modality that fits into the framework is a ‘necessity’ or ‘box’ style of modality, commuting with products. Other modalities of interest are the ‘possibility’ or ‘diamond’ modalities, which (usually) commute with coproducts instead. Admittedly such modalities currently have considerably fewer applications as types than the modalities of this paper, but I am not completely clear whether that will be an enduring truth, or whether it is true only because we do not yet have a good general understanding of how to fit them into type theory.
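
The distinction being gestured at here is, roughly, the following pair of standard logical equivalences:

```latex
% 'Box' modalities commute with (finite) products...
\[
\Box(A \times B) \;\simeq\; \Box A \times \Box B
\]
% ...whereas 'diamond' modalities usually commute with coproducts instead:
\[
\Diamond(A + B) \;\simeq\; \Diamond A + \Diamond B
\]
```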

Next week my paper-picking algorithm will see me consider a direct sequel paper to this one, so it will be interesting to see how these important and relevant ideas continue to develop.

Tuesday, 7 March 2023

Discussing "The Coq Proof Assistant"

(Link)

Authors: The Coq Development Team

Reference: The Coq Development Team. The Coq proof assistant, Version 8.16, September 5 2022

Why this reference? Cited by Type Theory as a Language Workbench and Asynchronous Probabilistic Couplings in Higher-Order Separation Logic

This blog is a project to encourage myself to read more widely, and hopefully to spread some knowledge to those who stumble across it, by looking at multiple-cited papers in my recent Google Scholar recommendations. Yet papers are not the only thing that gets cited; I’ve looked at books, monumental pairs of books, and now, my first piece of software. Reviewing software in a week is not an obvious task, as the ‘content’ of software is not captured by its source code alone, but also lives in the ecosystem around it – tutorials, libraries, applications, active forums and communities, and so forth. To make the task tractable, I’ve decided to concentrate on the Coq User Manual as my reference, and to use that to give an overview of Coq.

At its base, Coq is an implementation of a certain version of Martin-Löf type theory. The intention, as usual, is to provide a type system expressive enough to express mathematical statements naturally, along with tools that allow users to attempt to prove these statements in an interactive style, with the computer checking each step and assisting with automatic proving where possible. Coq has enjoyed conspicuous success in this, with the manual making particular mention of the CompCert verified C compiler and the verification of the four colour theorem.

The name, which is French for rooster but inevitably attracts sniggers, is a play on CoC, the Calculus of Constructions, the version of type theory the original software was based on in 1983. It also works as a pun on the name of Thierry Coquand, whose PhD thesis provided the original impetus (the original software was mostly written by Gérard Huet). The version of type theory that is currently the basis of Coq is CIC, the Calculus of Inductive Constructions, as introduced by Coquand and Christine Paulin-Mohring. This does not mean that the underlying type theory is fixed and static however; for example there was major development on the treatment of coinduction between versions 8.5 and 8.9 (2015-18), and there are currently two elements of the trusted base that are labelled as experimental (universe polymorphism and a type of proof-irrelevant propositions).

The first layer built on top of the implementation of the core language CIC is the set of language extensions, which are translated down to the core language in a process called elaboration. To give some idea of relative sizes, the manual spends 97 pages on the core language and 120 on the extensions. Features such as implicit coercions and typeclasses allow the user to write more natural-looking mathematical statements with less clutter, wherever that clutter can be inferred automatically.
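
Implicit arguments are perhaps the simplest example of this kind of inferable clutter; here is a toy example of my own (not taken from the manual):

```coq
(* The curly braces mark A as implicit: the elaborator infers it at each use. *)
Definition twice {A : Type} (f : A -> A) (x : A) : A := f (f x).

(* Here A is silently elaborated to nat. *)
Definition four : nat := twice S 2.
```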

The ability to express mathematical statements is not particularly valuable without the ability to prove them. Naively, one might expect proofs to proceed in a functional style, as CIC is essentially a functional language with a particularly expressive type system. Indeed, every Coq proof is ‘really’ a functional program, which can be inspected. But in practice these programs are not written directly in functional style but rather generated in an imperative style by commands called tactics, which are invoked in sequence and modify the state of the proof (its current goals and assumptions). Some tactics are fairly low level, for example moving the left-hand side of an implication goal into the assumptions. Others are considerably more sophisticated, attempting to discover non-trivial proofs automatically in a style reminiscent of the logic programming language Prolog. There is also a bifurcation between two major packages of tactics, the standard collection and the independently produced SSReflect; these are intended to be merged in the future. Note that neither of these collections of tactics, nor any other, needs to be trusted: they attempt to produce proofs, but those ‘proofs’ are not accepted until the core language type-checks them successfully.
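
Here is a tiny example of my own (not the manual's) of the imperative style: each tactic transforms the proof state, and the finished script elaborates to an ordinary functional term that the kernel checks and that we can inspect afterwards:

```coq
Lemma and_swap : forall A B : Prop, A /\ B -> B /\ A.
Proof.
  intros A B H.          (* move the quantified variables and hypothesis into context *)
  destruct H as [a b].   (* break the conjunction into its two components *)
  split; assumption.     (* prove each half from an assumption *)
Qed.

(* Displays the underlying lambda term that was actually type-checked. *)
Print and_swap.
```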

If the default tactics are not sufficient, users can create their own using a language called Ltac. This language is described by the manual as somewhat ad hoc and unclear in its semantics, and so another currently experimental feature (good enough that users are encouraged to try it) is Ltac2, a tactic-definition language enhanced with desirable programming language features such as types.
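
A toy custom tactic in Ltac might look as follows (purely illustrative); Ltac2 aims to support this kind of definition in a better-behaved, typed language:

```coq
(* A naive 'automation' tactic: introduce everything, split conjunctions,
   and discharge whatever remains by assumption. *)
Ltac obvious := intros; repeat split; try assumption.

Lemma demo : forall A B : Prop, A -> B -> A /\ B /\ A.
Proof. obvious. Qed.
```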

As one would expect for a mature language, a range of libraries and plugins has been developed for Coq. Again, even the most basic libraries do not need to be trusted, as they can be checked by the core type checker. There are also several choices of IDE, a range of introductory books and tutorials, and so on. One capability of particular note, provided via a plugin, is the extraction of code from Coq to other programming languages (currently Haskell, Scheme, and OCaml). At best, this creates certified-correct code that can run in a more efficient system than Coq. At worst, code extraction is not completely reliable, due to mismatches between the semantics of Coq and those of the target languages, and so the manual even features a brief section colourfully titled “Extraction’s horror museum”.
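
For a flavour of extraction, the following minimal sketch (my own; real uses extract whole verified developments) prints an OCaml rendering of a simple Coq function:

```coq
Require Extraction.
Extraction Language OCaml.

Definition double (n : nat) : nat := n + n.

(* Prints OCaml code for double together with the nat datatype it depends on. *)
Recursive Extraction double.
```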

Tuesday, 28 February 2023

Discussing "A Modal Analysis of Staged Computation"

(Link)

Authors: Rowan Davies and Frank Pfenning (Carnegie Mellon University)

Reference: Davies, Rowan, and Frank Pfenning. "A modal analysis of staged computation". Journal of the ACM (JACM) 48.3 (2001): 555-604.

Why this paper? Cited by A Fibrational Approach to Multiplicative Additive Indexed Linear Logic and Proofs and Refutations for Intuitionistic and Second-Order Logic

This week we have some work I know and love (although I think I’ve spent more time with the shorter conference version, published in 1996). The central idea is to let the programmer use the type system to state the order, expressed as stages, in which their program will execute. This would allow various optimisations to be applied automatically at compile time. The mechanism is a new type constructor, written ☐: a term of type ☐A is understood as unevaluated code which, when run, produces a value of type A. For example, there is always a function from ☐A to A, which works by evaluating the code.
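
In the dual-context presentation (discussed below), the two term formers for ☐ and the evaluation function just mentioned look roughly as follows, with notation lightly adapted:

```latex
% 'box e' quotes a piece of code; 'let box u = e1 in e2' binds that code to a
% code variable u for use in e2.  Evaluation is then definable:
\[
\mathsf{box}\;e : \Box A
\qquad\quad
\mathsf{let\;box}\;u = e_1\;\mathsf{in}\;e_2
\qquad\quad
\mathsf{eval} \;=\; \lambda x.\ \mathsf{let\;box}\;u = x\;\mathsf{in}\;u \;:\; \Box A \to A
\]
```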

The cool thing here for a logician interested in the Curry-Howard isomorphism between proofs and programs is that this new type former behaves exactly like the necessity, or box, modality of the well-known modal logic S4 (more precisely, the intuitionistic version of that logic, without the possibility, or diamond, modality). For example, necessitation (a standard feature of all normal modal logics) corresponds to the ability to treat an arbitrary closed program as code, something that does not hold of programs with free variables. So this paper, developing as it does lambda calculi for this box, is a contribution both to a particular programming language problem and to the proof theory of an important modal logic.
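
Concretely, the characteristic S4 principles all read naturally as staging operations once ☐A is read as ‘code of type A’ (a standard observation, stated roughly):

```latex
\[
\mathsf{K} : \Box(A \to B) \to \Box A \to \Box B   % apply code to code, yielding code
\qquad
\mathsf{T} : \Box A \to A                          % run code by evaluating it
\qquad
\mathsf{4} : \Box A \to \Box\Box A                 % quote code as code that builds code
\]
```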

A number of different calculi are developed in this paper, with the most important split being between the dual-context and context-stack approaches. This nicely prefigures more recent work on modal lambda calculi, which supports the conclusion that while other approaches work for specific modal logics, these are the two approaches powerful enough to handle a large variety of modal logics; see here for a modern view of the first approach, and a paper of mine for the second. Another split is between the calculi that hew closely to the simply typed lambda calculus and (although this is not proved, it is fairly clear) are guaranteed to terminate, and the ‘Mini-ML’ variations that achieve Turing completeness via general fixed points, at the cost of muddying the relationship with logic.

Given that the context-stack approach is close to my heart, I’ll make some more specific comments on it. Without getting distracted by bikeshedding, I’ll briefly note that I prefer the older term for this approach, Fitch-style (named in honour of work in proof theory by Frederic Fitch), over the coinage of Kripke-style used here (Saul Kripke’s famous ‘possible worlds’ semantics should correspond to any reasonable proof theory, although admittedly Fitch’s approach feels closest to that semantics). The basic idea is that use of our box modality corresponds to navigating through various layers of contexts; this is a wonderfully general idea and not tied to the particular logic S4.
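
Schematically, and with notation lightly adapted from the paper, the context-stack rules look something like this, where Ψ is a stack of contexts separated by semicolons:

```latex
% Introducing a box pushes a fresh (empty) context onto the stack; eliminating
% one may cross several context boundaries, counted by the index n:
\[
\frac{\Psi\,;\,\cdot \vdash t : A}{\Psi \vdash \mathsf{box}\,t : \Box A}
\qquad\quad
\frac{\Psi \vdash t : \Box A}{\Psi\,;\,\Gamma_1;\dots;\Gamma_n \vdash \mathsf{unbox}_n\,t : A}
\]
```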

One interesting feature of S4 is that ☐…☐A is logically equivalent to ☐A for any number of boxes. My own work has shown that if we are willing to strengthen this equivalence to an isomorphism of types (true for many applications, but perhaps not appropriate for this work), we can significantly simplify the calculus of this paper. The simplification removes the need for natural number indices to track how many layers of context are being popped at a given point of the program, and hence removes the need for the ‘modal structural transformations’ in the reduction relation that relabel indices at various points. This actually contradicts the assertion in this paper that indices are necessary because of a certain ‘coherence’ problem; in fact that problem can be shown to resolve itself (the proof of this is not so easy), so the indices are only necessary to preserve, if desired, the generality of allowing ☐…☐A to be non-isomorphic (genuinely different) to ☐A.
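
The logical collapse mentioned at the start of this paragraph is immediate from the axioms listed above; the question at issue is whether one also insists on it holding up to isomorphism of types:

```latex
\[
\Box\Box A \xrightarrow{\;\mathsf{T}\;} \Box A
\qquad\quad
\Box A \xrightarrow{\;\mathsf{4}\;} \Box\Box A
\]
```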

Monday, 20 February 2023

Discussing 'On the homotopy groups of spheres in homotopy type theory'

(Link)

Author: Guillaume Brunerie (Université Nice-Sophia-Antipolis)

Reference: Brunerie, Guillaume. “On the homotopy groups of spheres in homotopy type theory.” PhD Thesis, Université Nice-Sophia-Antipolis. 2016

Why this paper? Cited by Formalizing π4(S3)≅ ℤ/2ℤ and Computing a Brunerie Number in Cubical Agda, Commuting Cohesions, and Encoding Dependently-Typed Constructions into Simple Type Theory

Many PhD dissertations are constructed by ‘stapling together’ a variety of different research papers generated during the years of study, with a varying level of effort put into writing connective tissue between them. This is very much not one of those dissertations. Instead, it is a book-length development of a single mathematical fact, that π4(S3)≅ℤ/2ℤ. Of course, unpacking this line of mathematics would take time – ‘the fourth homotopy group of the 3-sphere is isomorphic to a certain two-element group’ is a first unpacking, but there is much jargon in there that would itself require unpacking – but the focus on this one result is still notable.

I will of course not pretend to do full justice to the multiple years of effort that went into this thesis in the space of a short blog post, but will instead try to give some high-level feeling for what is going on. At first glance it is perhaps a bit perplexing: the core result is a well known (although not easy) result in homotopy theory from the 1930s, so why is a modern PhD thesis spending hundreds of pages proving it? The answer is that this is a landmark effort within homotopy type theory (HoTT); indeed, it is perhaps the most substantial development of a major result using HoTT. Translating the result into the foundations provided by HoTT turns out to be far from trivial. At one level, working within this type theory is helpful, for example obviating the need ever to prove that functions are continuous (well-behaved), because they must be continuous by construction; the difficulty of one particularly gnarly part of the standard development, proving that the associativity of the smash product is suitably continuous, simply disappears. Furthermore, because the work is ‘synthetic’, in the sense of not being tied to any particular presentation of topology, it is highly general, and the core development is not obscured by the details of a concrete choice (e.g. working via point-set topology, or simplicial sets).

However, there is a catch: standard developments of results in homotopy theory often contain intermediate constructions that are not well behaved in the sense that HoTT enforces, even where the final product is well behaved. This means that the standard development, at least in this case, cannot be translated line by line into the new formalism as one might naively hope: Brunerie notes in his superbly reflective concluding chapter that he “often had to find completely different definitions, proofs or statements that would work in homotopy type theory”. The point of all this effort is, of course, not merely to reprove old results, but to pave the way for new advances in homotopy theory (and other areas of mathematics) using HoTT as a tool that avoids certain errors by enforcing certain invariants, in a style amenable to mechanisation.

Having said that, one notable aspect of this development is that it is not mechanised (encoded into a computer), although mechanisation is certainly discussed. This perhaps flies in the face of some people’s intuitions that type theory, resembling as it does a programming language, is innately connected to computer mathematics. Instead this is a significant development of pen and paper type theory, in the 'informal' style recommended by the original HoTT book.

Another interesting aspect of this thesis is its constructivity, and what that means in practice. At the end of chapter 3 it is proved that π4(S3)≅ℤ/nℤ for some integer n. The nature of constructive mathematics is that an existential result of this sort is impossible to prove without generating its ‘witness’; in other words, we should have access to the value of the number n (in this case, 2). More precisely, if a witness is not immediately clear from the proof as written, the proof should compute until it produces the witness. This does not happen in this thesis; instead, three more chapters are needed to prove that n is in fact equal to 2. What went wrong? One might wonder whether the problem is that the proof is pen and paper and hence cannot compute, but although this is part of the problem it is not the whole story: one could in principle carry out the computations by hand. Unfortunately, these calculations proved too difficult to complete. An early attempt to develop the thesis within cubical type theory, a fully constructive presentation of HoTT in which the computation could actually be run, did not prove successful, and led Brunerie to conclude that “for informal synthetic homotopy theory as done in this thesis, cubical ideas aren’t especially helpful in general”.

There is an important addendum to this picture however: I was recommended this paper in part because of its citation in a just released preprint, in which the results of this thesis (in slightly modified form) were fully mechanised in Cubical Agda by Axel Ljungström and Anders Mörtberg, allowing the ‘Brunerie number’ (that is, 2) to finally be computed from the proof that π4(S3)≅ℤ/nℤ for some integer n.

Thursday, 9 February 2023

Discussing "Scalable Verification of Probabilistic Networks"

(Link)

Authors: Steffen Smolka (Cornell University), Praveen Kumar (Cornell University), David M. Kahn (Carnegie Mellon University), Nate Foster (Cornell University), Justin Hsu (University of Wisconsin), Dexter Kozen (Cornell University), and Alexandra Silva (University College London)

Reference: Smolka, Steffen, Praveen Kumar, David M. Kahn, Nate Foster, Justin Hsu, Dexter Kozen, and Alexandra Silva. "Scalable verification of probabilistic networks." In PLDI 2019: Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, pp. 190-203. 2019.

Why this paper? Cited by A Complete Inference System for Skip-free Guarded Kleene Algebra with Tests and Lilac: a Modal Separation Logic for Conditional Probability

I built this blog around the idea of discovering, via Google Scholar, both new papers and, through their citations, influential older papers. Google Scholar generally does an excellent job of this (far better than the new wave of AIs, with their tendency to hallucinate literature), but nothing is perfect, and this week’s recommendations are perhaps partly caused by a failure to notice that the word ‘guarded’, which is used a lot in some of my papers, is also used, in a completely different way, in certain other papers. Still, the paper my self-imposed algorithm served up, while not exactly in my wheelhouse, is interesting enough so long as I don’t dive too deep into the technicalities.

This paper describes a system called McNetKAT, which is a grand pileup of acronyms: Mc for ‘Markov chain’, Net for ‘Network’, KA for ‘Kleene Algebra’, and T for ‘(with) Tests’. The basic idea is to build capability for automatic verification on top of the programming language ProbNetKAT (Prob for ‘Probabilistic’), a small imperative language that allows one to specify the topology of a network and the policy for moving packets through it. The support for probability helps capture cases, ranging “from uncertainty about the inputs, to expected loads, to likelihood of device and link failures”, that might affect networks. One can attempt to prove that a network program is fully correct, by showing that it is equivalent to a magic network that teleports each packet to its correct destination, but the probabilities really come into play when one shows that a certain network program improves on another, in the sense that where both succeed they agree on their treatment of packets, but one is less likely to fail than the other according to a given failure model on links or switches.

The ‘Mc’ comes into play because a new semantics is presented, based on stochastic matrices, or Markov chains: a notion of probabilistic transition system. These semantics are proved to be equivalent to the existing semantics in the literature, but are more amenable to automation. Programs are compiled into their semantics and then passed to the linear algebra solver UMFPACK. It is shown that this treatment performs well against an alternative route involving compilation into the language of the probabilistic model checker PRISM, and against the (admittedly more expressive) system Bayonet.
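
As a toy illustration of the Markov-chain view (my own example, far simpler than anything in the paper), a single link that delivers a packet with probability 0.9 and otherwise drops it can be modelled as a stochastic matrix over three states, each row giving the distribution over next states:

```latex
% States, in order: at-source, delivered, dropped.
\[
P \;=\;
\begin{pmatrix}
  0 & 0.9 & 0.1 \\
  0 & 1   & 0   \\
  0 & 0   & 1
\end{pmatrix}
\]
```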

Wednesday, 1 February 2023

Discussing "The Lean Theorem Prover (System Description)"

(Link)

Authors: Leonardo de Moura (Microsoft Research, Redmond), Soonho Kong (Carnegie Mellon University), Jeremy Avigad (Carnegie Mellon University), Floris van Doorn (Carnegie Mellon University), and Jakob von Raumer (Carnegie Mellon University)

Reference: de Moura, L., Kong, S., Avigad, J., van Doorn, F., von Raumer, J. (2015). The Lean Theorem Prover (System Description). In: Felty, A., Middeldorp, A. (eds) Automated Deduction - CADE-25. CADE 2015. Lecture Notes in Computer Science, vol 9195. Springer, Cham. https://doi.org/10.1007/978-3-319-21401-6_26

Why this paper?
Cited by Encoding Dependently-Typed Constructions into Simple Type Theory and A Type-Based Approach to Divide-and-Conquer Recursion in Coq

Lean is something of a new kid on the block in the world of interactive theorem proving, coming out just 10 years ago, compared to 34 years for Coq and 36 for Isabelle. Nonetheless it has clearly had an impact; for example, at this year’s CPP (Certified Programs and Proofs) conference, for which I had the privilege of serving on the program committee, by my count 7 of the 25 accepted papers used Lean.

This short conference paper gives a brief and high-level description of the aims and design principles of the new theorem prover. It is neither a tutorial (see pdf) nor an in-depth description of the nitty-gritty of the implementation (for which see e.g. here). It does, perhaps, give some sense of why developing a new prover was thought to be desirable rather than simply using the established technologies, although it is rather polite and implicit about any comparisons with other provers. Lean, which like Coq is based on Martin-Löf dependent type theory, has a few features emphasised: its small trusted code base, its support for different proving styles (declarative à la Isabelle, or imperative-style tactics to generate proof terms), its flexible framework for supporting automation and talking to other software, its user interface, and its relatively fast speed and support for parallelism. Homotopy type theory is highlighted as one of the motivating application areas. The bulk of the paper is an informal description of the elaboration algorithm, which compiles definitions and proofs written in these various styles down to the proof terms that the small trusted kernel can accept.
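
To illustrate the point about proving styles, here is a toy example of my own (written in present-day Lean 4 syntax, which differs from the Lean described in this 2015 paper): the same trivial fact proved by writing the proof term directly and by a tactic script, with elaboration producing a kernel-checkable term either way:

```lean
-- Proof term written directly.
theorem and_swap_term (A B : Prop) (h : A ∧ B) : B ∧ A :=
  ⟨h.2, h.1⟩

-- The same fact via tactics; `by` enters tactic mode.
theorem and_swap_tactic (A B : Prop) (h : A ∧ B) : B ∧ A := by
  cases h with
  | intro a b => exact ⟨b, a⟩
```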

Tuesday, 24 January 2023

Discussing "An Intuitionistic Theory of Types: Predicative Part"

(Link), although I downloaded a free copy here.

Author: Per Martin-Löf (University of Stockholm)

Reference: Per Martin-Löf, An Intuitionistic Theory of Types: Predicative Part, Editor(s): H.E. Rose, J.C. Shepherdson, Studies in Logic and the Foundations of Mathematics, Elsevier, Volume 80, 1975, Pages 73-118, https://doi.org/10.1016/S0049-237X(08)71945-1.

Why this paper? Cited by Encoding Dependently-Typed Constructions into Simple Type Theory and Normalization and coherence for ∞-type theories

After covering a few advanced and recent papers on type theory, this week we look at the first ever fully realised paper on type theory for formalised mathematics (previous work was either incomplete or suffered from errors). It is stated from the very first sentence that the work of this paper is “intended to be a full scale system for formalizing intuitionistic mathematics”. Charmingly, the paper ends with the self-effacing remark “What is doubtful at present is not whether computerization is possible, because we already know that, but rather whether these proof-theoretical computation procedures are at all useful in practice. So far, they do not seem to have found a single significant application”. Almost 50 years on, the vision of this paper has been thoroughly vindicated by the development of sophisticated proof assistants such as Coq and Lean, capable of formalising mathematics as complex as the proofs of the Four-Colour Theorem and the Odd Order Theorem.

The basic idea is to exploit the Curry-Howard isomorphism, only recently worked out in detail by William Howard (see Howard’s comments on his influence on Martin-Löf here), which identifies propositions with types and (intuitionistic) proofs with programs, in order to develop a (at this stage, paper-only) programming language for which proof checking is type checking. Each logical symbol, familiar from any introductory logic course, becomes associated with a type former, as well as with program formers that allow programs of that type to be introduced and eliminated (used). Less familiar to logicians, a universe (a type of types) is essential for formalising many mathematical arguments, but care must be taken to avoid the paradoxes that would crop up if the universe were taken to be a member of itself (very similar paradoxes must be avoided in set theory); this is the predicativity of the title. Crucially (and again, perhaps not so familiar from an introductory logic course), the decidability and other good properties of the algorithms attached to the type theory – reduction and type checking – are proved, paving the way for type theory to jump off the page and into real implementations.
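
The dictionary in question, in its now-standard form (Martin-Löf's own notation differs in places):

```latex
% The propositions-as-types dictionary: logical symbols and their type formers.
\begin{align*}
A \wedge B &\;\leftrightarrow\; A \times B  &  A \vee B &\;\leftrightarrow\; A + B  &  A \supset B &\;\leftrightarrow\; A \to B \\
(\forall x{:}A)\,B(x) &\;\leftrightarrow\; \textstyle\prod_{x:A} B(x)  &  (\exists x{:}A)\,B(x) &\;\leftrightarrow\; \textstyle\sum_{x:A} B(x)  &  \bot &\;\leftrightarrow\; \mathbf{0}
\end{align*}
```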

This paper is quite a readable introduction to this brand new area, as Martin-Löf does not assume familiarity with the underlying idea of performing mathematics inside type theory, as a more recent paper might, but rather must justify his approach. The paper is full of pithy quotes (some original, some adapted from other sources) to motivate the work, such as “A proposition is defined by prescribing how we are allowed to prove it” and, justifying the importance of type checking being decidable, “we recognize a proof when we see one”.

Sunday, 22 January 2023

Discussing "Cubical Type Theory: A Constructive Interpretation of the Univalence Axiom"

(Link)

Authors: Cyril Cohen (INRIA Sophia Antipolis - Méditerranée), Thierry Coquand (University of Gothenburg), Simon Huber (University of Gothenburg), and Anders Mörtberg (Institute for Advanced Study)

Reference: Cyril Cohen, Thierry Coquand, Simon Huber, Anders Mörtberg, Cubical Type Theory: A Constructive Interpretation of the Univalence Axiom. In 21st International Conference on Types for Proofs and Programs (TYPES 2015), ed. Tarmo Uustalu, Leibniz International Proceedings in Informatics (LIPIcs), vol. 69 (2018).

Why this paper? Cited by Formalizing CCS and π-Calculus in Guarded Cubical Agda and Normalization and coherence for ∞-type theories

I’ve blogged a few times on homotopy type theory, ranging from the seminal book to various follow-ups (One Two Three). This is one of the most important follow-ups because of what it does to the univalence axiom, the cornerstone of homotopy type theory that asserts that equivalences between types correspond to identities between them. Cubical type theory is an instance of homotopy type theory in which the univalence axiom is not an axiom at all, but rather a provable theorem (an inhabited type). The significance of this may not be immediately obvious to newcomers to type theory – who cares if a statement is an axiom or a consequence of some other rules? In fact, with the type theories that lend themselves to applications such as interactive theorem proving, we do not typically proceed by asserting axioms – unless we need to – but rather by defining rules for introducing and eliminating our basic type formers. Arranging things in this way allows us to have good computational properties; for example, a constructive argument involving a natural number really will produce an appropriate natural number when run. This gives proofs in the type theory computational content, whereas the bald assertion of an axiom would provide a ‘roadblock’ in the execution of a proof.
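
For reference, the principle at stake says, roughly, that the canonical map from identities between types to equivalences is itself an equivalence; in particular every equivalence yields an identity:

```latex
\[
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B),
\qquad\text{so in particular}\qquad
\mathsf{ua} : (A \simeq B) \to (A =_{\mathcal{U}} B).
\]
```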

This development proceeds by constructing a type theory with denotational semantics in the category of cubical sets. In this blog post I won’t sweat the details of these semantics – many of my comments last week about semantics in simplicial sets still apply – although it is important to understand that working via the semantics motivates every part of this paper. Instead, I wish to concentrate on the details of the type theory itself, as syntax.

It is fair to say that cubical type theory is a heavy-duty extension of standard type theory. Some of the earlier parts of the paper are quite beautiful, however. The notion of identity as path is captured by thinking of a path as (roughly) a function from a slightly abstracted version of the real interval [0,1] to a type; if we wish to express that t is equal to u then we need that function to map 0 to t and 1 to u, and we’re done. This gives a startlingly simple proof of function extensionality (functions are equal if they agree on all inputs), which is a tricky property, and indeed sometimes not even true, in many type theories. Things get more involved, but remain rather elegant, if we want dependence on multiple arbitrary elements of the interval, usually called dimensions. This is where the cubes of cubical type theory come in: zero dimensions give rise to a point in a space (which is to say, just an ordinary inhabitant of a type), and one dimension to a path in a space, but we can have squares, cubes, 4-dimensional hypercubes, and so on as well.
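
Function extensionality really is a one-liner in this setting: given a family of paths p x between f x and g x, one simply swaps the interval argument and the function argument (shown schematically):

```latex
\[
\mathsf{funExt}\;p \;=\; \lambda i.\,\lambda x.\;p\,x\,i
\;:\; \mathsf{Path}_{A \to B}\;f\;g
\qquad\text{where}\quad p : \textstyle\prod_{x:A}\,\mathsf{Path}_{B}\;(f\,x)\;(g\,x)
\]
```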

The next ingredient is the definition of faces, which can be thought of as corresponding to faces of cubes (of various dimensions), or more prosaically as restrictions on the values that our otherwise arbitrary interval elements can take. For example, we might require that at least one of i and j is equal to 0 or 1, describing the outline of a square but leaving its centre unfilled. Terms and types defined with faces in their context are then only partial, e.g. only existing where the free dimensions i and j have appropriate values. Systems are terms of arbitrary type that explicitly define different constructions for different faces. Faces are also used to introduce the notion of extensibility: a term t that is partial with respect to a face φ is extensible if there is a term u, not restricted to φ, which becomes equal to t once φ is added to its context. A new composition operator then comes into play, ensuring that extensibility is preserved along paths. This may seem heavyweight, but it is necessary to prove the rather mundane fact that identity is transitive. The equality judgements (intuitively, reduction rules) associated with the composition operator are fairly complex, depending on the type at which the composition is applied and so needing an explicit definition for each type former.
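
For instance, the face formula describing the outline (boundary) of the square spanned by the dimensions i and j is:

```latex
\[
\varphi \;=\; (i = 0) \vee (i = 1) \vee (j = 0) \vee (j = 1)
\]
```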

The final layer of novelty involves a new type former called glueing which connects types with partial types, and ensures that extensibility is invariant across equivalence. This is the final necessary ingredient for univalence (which is a statement about equivalences, after all) to be proved.

Wednesday, 11 January 2023

Discussing "The Simplicial Model of Univalent Foundations (after Voevodsky)"

(Link)

Authors: Krzysztof Kapulkin (University of Pittsburgh) and Peter LeFanu Lumsdaine (Institute for Advanced Study)

Reference: Krzysztof Kapulkin, Peter LeFanu Lumsdaine, The simplicial model of Univalent Foundations (after Voevodsky). J. Eur. Math. Soc. 23 (2021), no. 6, pp. 2071–2126. DOI 10.4171/JEMS/1050

Why this paper? Cited by Normalization and coherence for ∞-type theories, The Formal Theory of Monads, Univalently, and On the ∞-topos semantics of homotopy type theory

This paper has a rather peculiar publication history, as it started as a 2012 preprint written by the two authors along with a third, Fields medal winning mathematician Vladimir Voevodsky, who was in fact the main author. However Voevodsky grew disenchanted with the paper due to some issues with the type theory literature it was relying on, and asked for his name to be removed. Sadly, Voevodsky passed away before rectifying these holes in the literature, and so Kapulkin and Lumsdaine worked the paper into a publication. Lacking permission to re-add Voevodsky’s name to the author list, his contribution is instead recognised in the title.

One benefit of this paper having had almost a decade to germinate is that it has a particularly pleasant and well-written introduction to its topic, Homotopy Type Theory (HoTT). I looked at the seminal HoTT book on this blog last year; this paper picks up a topic that the book mentions only briefly and does not really address, namely denotational semantics. To recap, HoTT is an approach to mathematical foundations that aims to be both usable by humans and amenable to computer formalisation. The technical approach is to extend previous type theories with, in particular, a property called the univalence axiom (the other major innovation, higher inductive types, is not discussed here). Univalence strengthens the usual notion of equality in type theory so that, at the level of types, it corresponds to equivalence; this means that one can move constructions and results between equivalent mathematical settings, a common feature of mathematical argument, ‘for free’. This leads one to think of types as (some sort of topological) spaces, with equalities as paths in a space à la homotopy theory.

This paper makes this connection formal by giving homotopy type theory denotational semantics in the category of simplicial sets. Denotational semantics give a mathematical meaning to the constructions of a type theory, programming language, or logic. So long as the mathematical model chosen is not pathological, this gives us, as an immediate benefit, a proof of consistency (one cannot prove false, or construct an element of a type that should be empty). Just as importantly, this style of semantics assures us that the syntactic constructions are not mere shuffling of symbols, but have real mathematical meaning that can be communicated to someone who is not necessarily interested in the details of the particular type theory. Put another way, the type theory is ‘about something’ other than itself.

Why is the ‘something’ that HoTT is about in this paper – i.e. simplicial sets – actually interesting, beyond the mere proof of consistency? This paper does not offer an introduction to this mathematics, but looking at one of the books cited as background (link to pdf) gives some clues: “the category of simplicial sets has a homotopical structure … the resulting homotopy theory is equivalent in a strong sense to the ordinary homotopy theory of topological spaces… Insofar as simplicial sets are algebraically defined, and the corresponding closed model structure is combinatorial in nature, we obtain an algebraic, combinatorial model for standard homotopy theory”. In other words, simplicial sets are not a mere mathematical curiosity that happens to work for this type theory, but rather a strongly motivated approach to the general study of homotopy, which, of course, is what motivated homotopy type theory in the first place. It should be noted that simplicial sets are not unique in having this property; there is a somewhat similar structure called cubical sets that appears to be better suited to computational implementation. But that, perhaps, is a story for another day.