I remember, from maybe grad school, being puzzled about “semantics”. What does this word mean? Having gained a lot more knowledge of this topic since then, I thought I’d share a few thoughts on it, especially in the context of type theory.

“Semantics” means meaning. In this sense, many different approaches to the study of programming languages are described as giving some form of semantics to a language. An interpreter gives you a semantics for a language. It shows you what these funny sequences of keywords or symbols (or more modernly, these peculiar abstract syntax trees) **mean** by showing how the programs execute. If one writes an interpreter as a set of inference rules, then this is usually called operational semantics. Hoare Logic, which is a set of inference rules for verifying that programs started in states satisfying a given precondition will (if they terminate) end in a state satisfying a given postcondition, has traditionally been dubbed axiomatic semantics. This term “axiomatic” is maybe not too informative, but the idea that you can describe the meaning of an imperative program by explaining how it changes the state indeed gives it a reasonable claim to be a semantics. What do programs in the language mean? A semantics tells you.
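To make this concrete, here is a minimal sketch (my own illustration, not from any particular textbook) of a small-step operational semantics for a hypothetical tiny arithmetic language, written as an interpreter. Each branch of `step` mirrors one inference rule: congruence rules that reduce inside a subterm, and a computation rule that adds two literals.

```python
# Terms of a hypothetical toy language: an int literal, or ('add', t1, t2).

def step(t):
    """One small-step reduction. Each branch mirrors an inference rule
    of the operational semantics. Returns None if t is already a value."""
    if isinstance(t, int):
        return None                      # values do not step
    op, t1, t2 = t
    assert op == 'add'
    if not isinstance(t1, int):
        return (op, step(t1), t2)        # congruence: reduce left subterm
    if not isinstance(t2, int):
        return (op, t1, step(t2))        # congruence: reduce right subterm
    return t1 + t2                       # computation: add two literals

def evaluate(t):
    """Iterate small steps to a value: the operational meaning of t."""
    while not isinstance(t, int):
        t = step(t)
    return t

print(evaluate(('add', ('add', 1, 2), 4)))  # prints 7
```

The interpreter *is* the semantics here: it tells you what `('add', ('add', 1, 2), 4)` means by showing how it executes, step by step, down to the value 7.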

There was a lot of interest in earlier days in giving denotational semantics to programs. The idea is to give the meaning of a program by stating what mathematical function the program denotes. After all, programs are intended (are they not?) to represent some class of functions as understood in mathematics. A hallmark of denotational semantics is that it is compositional: the meaning of a bigger expression is defined in terms of the meanings of its strict subexpressions. Notice that this is not true of operational semantics generally: we define the meaning of a term t in terms of the term t' that t steps to, and t' is not a strict subexpression of t (indeed it can be a bigger term than t). There was a lot of excitement, it seems, in finding denotational models of languages like lambda calculus. More recently, it seems more common, at least in Computer Science, to take lambda calculus for granted and not be concerned with mathematical denotations of lambda terms.
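A denotational semantics for the same kind of toy language might look like the following sketch (again my own illustration). Note the compositionality: the meaning of `('add', t1, t2)` is computed only from the meanings of its strict subterms `t1` and `t2`, with no reference to how evaluation proceeds.

```python
# A compositional denotational semantics for a toy arithmetic language.
# The meaning of a term is a mathematical object -- here, an integer.

def denote(t):
    if isinstance(t, int):
        return t                          # a literal denotes itself
    op, t1, t2 = t
    if op == 'add':
        return denote(t1) + denote(t2)    # meaning built from subterm meanings
    raise ValueError(f"unknown construct {op!r}")

print(denote(('add', ('add', 1, 2), 4)))  # prints 7
```

For this trivial language the denotational and operational semantics agree on every term; for languages with general recursion, establishing such agreement (adequacy) is where the real work of denotational semantics historically lay.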

Logic is, as far as I know, the earliest field concerned with giving mathematical semantics for formal languages. There, denotational semantics of formulas is the basis of model theory. Proof theory can also be viewed as giving a semantics of formulas, and indeed in nice situations these two semantics agree — though even for theories like first-order arithmetic, results like Goedel’s incompleteness theorem imply that no such agreement is possible: the proof-theoretic semantics (showing what formulas mean by showing how to prove them or use them in other proofs) must be weaker, in the sense of proving fewer theorems, than the denotational semantics. But maybe this is a digression…

In type theory, there is a great tradition of giving denotational semantics for types. In full generality, if we have a term t of type T (in some typing context G!), we would like to explain what this relation (G ⊢ t : T) means. Historically, it was discovered that one could use a denotational semantics for types as a proof method to show deep properties of the language, like normalization or logical consistency. I first learned about this in Girard’s Proofs and Types, and still recommend this as a good introduction. The historical root of this is Kleene’s realizability semantics for first-order intuitionistic arithmetic. But one does not need to be driven by the desire to prove such metatheorems, in order to want to have a denotational semantics for types. Here are my amazing new types! Wouldn’t you like to know what they mean? I can show you some typing rules, and depending on how I set them up — for example, if you give introduction and elimination rules for the types that are related in certain ways — these could to a greater or lesser extent show you what the types mean. But a compositional semantics that says, “my funny type construct X A B means …” can be more direct, and provide guidance for crafting typing rules. One can prove that a set of typing rules is sound with respect to the semantics, for example.
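As a hedged sketch of this idea (my own illustration; the construct `X` here is hypothetical, standing in for the “funny type construct X A B” above, which I interpret as a product type), one can model the meaning of a type as a set of values, represented below as a membership predicate. The compositional clause for `('X', A, B)` is built only from the meanings of A and B, and one can then check instances of a pair-introduction rule against the semantics.

```python
# Types: 'Int', or the hypothetical construct ('X', A, B), read here as
# a product type. The meaning of a type is a set of values, modeled as
# a membership predicate -- a compositional denotational semantics.

def denote_type(T):
    if T == 'Int':
        return lambda v: isinstance(v, int)
    op, A, B = T
    if op == 'X':                         # hypothetical product construct
        in_A, in_B = denote_type(A), denote_type(B)
        return lambda v: (isinstance(v, tuple) and len(v) == 2
                          and in_A(v[0]) and in_B(v[1]))
    raise ValueError(f"unknown type construct {op!r}")

# Checking an instance of a pair-introduction rule against the semantics:
# if a is in the meaning of A and b in the meaning of B,
# then (a, b) should be in the meaning of X A B.
in_prod = denote_type(('X', 'Int', 'Int'))
print(in_prod((3, 4)))   # True
print(in_prod(3))        # False
```

A soundness proof for a full set of typing rules would quantify over all derivations, of course; the point of the sketch is only that the semantics gives each rule something definite to be sound *with respect to*.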

The form of such semantics I have found most useful is a realizability semantics, which tells you what a type means by specifying the set of all terms whose operational behavior is allowed by the type. For example, in some language, the meaning of a type like “Int -> Int” could be defined to be the set of all programs which, given an Int input, either diverge or produce an Int output. Or maybe in some other language, it is the set of all terms which, given an Int input, are guaranteed to terminate producing an Int output. Notice that these example semantics do not say anything about the internal form of the program. It could fail to be well-typed according to some set of typing rules. It could do very crazy and upsetting things if not given an Int input. The only thing the type specifies is the behavior of the term when given an Int input. So the type describes behavior of terms, through the simple device of giving a set of terms as the meaning of a type (namely, the set of all terms whose behavior is allowed by the type).
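Membership in such a set is about behavior, not syntax, and is undecidable in general (testing can never rule out divergence on untried inputs). Still, the flavor of the definition can be sketched by *testing* membership approximately. In the sketch below (my own illustration), “terms” are Python callables, and we check the second reading above: on an Int input, the term must terminate producing an Int. The names and sample range are arbitrary choices for the illustration.

```python
# Approximate membership test for the realizability meaning of Int -> Int
# (second reading: on an Int input, must produce an Int output). This only
# samples finitely many inputs and cannot detect divergence, so a True
# result means "consistent with the type so far", not proven membership.

def seems_int_to_int(term, samples=range(-5, 6)):
    for n in samples:
        try:
            if not isinstance(term(n), int):
                return False     # produced a non-Int on an Int input
        except Exception:
            return False         # crashed on an Int input
    return True                  # behavior allowed by the type, so far

# The type says nothing about non-Int inputs: this term does something
# upsetting (references an undefined name) unless given an Int, yet its
# behavior on Ints is exactly what Int -> Int allows.
crazy = lambda x: x + 1 if isinstance(x, int) else undefined_stuff

print(seems_int_to_int(crazy))           # True
print(seems_int_to_int(lambda x: "hi"))  # False
```

The real semantics, of course, is the full (non-computable) set of terms; the test harness is only meant to make vivid that the set is carved out by behavior, with no inspection of the term’s internal form.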

For Cedille, I have defined such a semantics, and used it to prove logical consistency of the system.