### Background

[This report is by Duckki Oe.] $\texttt{T}^\texttt{vec}$ is dependently typed (see the paper). For example, the vector type <vec T n> is indexed by a type T (polymorphism) and by a term n (this dependence of types on terms is what makes the system dependently typed).
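As an illustration (in Lean 4, not the paper's own syntax), a length-indexed vector analogous to <vec T n> might be declared like this:

```lean
-- A vector indexed by a type T (polymorphism) and a term n (dependency):
inductive Vec (T : Type) : Nat → Type where
  | nil  : Vec T 0
  | cons : {n : Nat} → T → Vec T n → Vec T (n + 1)

-- A vector of length 2; the length is tracked in the type.
example : Vec Nat 2 := .cons 1 (.cons 2 .nil)
```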

In a dependently typed system, two syntactically distinct types may be deemed equal. That is not the case in the other systems we discussed in the seminar, where syntactically distinct types are indeed disjoint. For example, <vec nat 1> and <vec nat 0+1> are supposed to be equal, because the index terms 1 and 0+1 can be equated by evaluating them. So, equality reasoning is a crucial point of dependently typed systems. At the same time, because proving an equality is generally undecidable in first-order or higher-order logic, proofs are necessary as a part of the code, and the equalities the code proves are used to cast one type to another.

### Unannotated $\texttt{T}^\texttt{vec}$

The type system of $\texttt{T}^\texttt{vec}$ has two type-level abstractions (Pi and Forall). The motivation for this distinction is that some arguments are used only for equality reasoning or typing, not for computation. A Forall type is the type of a function that takes a specificational argument, and that argument can only be used in specificational positions. The payoff is that specificational information can be dropped when we reason about the equality of terms.
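As a rough analogy (Lean's implicit arguments are not erased in exactly the paper's sense, but they play a similar role), a Forall-bound specificational argument is one that appears only in types, never in the computation:

```lean
inductive Vec (T : Type) : Nat → Type where
  | nil  : Vec T 0
  | cons : {n : Nat} → T → Vec T n → Vec T (n + 1)

-- The index n is Forall-like: it occurs only in the type, so dropping
-- it does not change how head computes on the vector itself.
def head {T : Type} {n : Nat} : Vec T (n + 1) → T
  | .cons x _ => x
```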

Also, $\texttt{T}^\texttt{vec}$ sports equality types, which are atomic formulas of the equality predicate and can be introduced by the “join” term. A term of an equality type is considered a proof of the equality, and that value can be passed to a function that takes (or assumes) the equality. A join term justifies the equality of two terms via evaluation: both terms evaluate to a common term. Join cares only about the semantic equality of terms, not of types. However, the equality of two types can be derived from the equality of the terms occurring inside those types.
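In Lean, proofs by evaluation to a common term correspond to `rfl` (definitional equality), which gives an informal picture of join; deriving a type equality from a term equality is then congruence. A sketch, using the term-indexed type `Fin` in place of the paper's vec:

```lean
-- join-style proof: 0 + 1 and 1 evaluate to the same term
example : 0 + 1 = 1 := rfl

-- type equality obtained from term equality by congruence
example : Fin (0 + 1) = Fin 1 := congrArg Fin rfl

-- a proof of the term equality can be passed in and used to cast
-- a value from one type to the other
def cast01 (h : (0 : Nat) + 1 = 1) (x : Fin (0 + 1)) : Fin 1 := h ▸ x
```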

A good thing about join is that we don’t need to worry about typing information and specificational arguments, because they don’t affect computation (evaluation) at all. And because specificational arguments can get in the way of proofs, it is better not to have them when we do the reasoning. So, the paper presents the unannotated $\texttt{T}^\texttt{vec}$, whose terms carry no typing information or specificational arguments at all. This idea is very similar to that of the Implicit Calculus of Constructions. The consequence is that type assignment/checking for those terms is undecidable. However, that is acceptable because there is a fully annotated version of $\texttt{T}^\texttt{vec}$ which is decidable and sound with respect to the unannotated version. The meta-theoretic analysis of the unannotated $\texttt{T}^\texttt{vec}$ is important because, even in the annotated $\texttt{T}^\texttt{vec}$, the “join” proofs are carried out by evaluating unannotated terms.

### Annotated $\texttt{T}^\texttt{vec}$

In the annotated version, terms carry all type annotations and specificational arguments, and the latter are distinguished from the other (computational) arguments. Marking arguments as specificational is used in Guru (with the “spec” keyword) and discussed in other papers such as Erasure and Polymorphism in Pure Type Systems (and also in ICC*). The annotated version looks obviously sound because dropping annotations wouldn’t break typing, and thus terms with their annotations dropped should still be typeable.

### Large Eliminations and Inconsistent Assumptions

Large Eliminations are types that are computed by evaluating terms. For example, IfZero n T1 T2 is a large elimination, where n is a term and T1 and T2 are types. The whole type computes to T1 if the term n evaluates to 0, and to T2 otherwise. This can be used for generic programming and proving.
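A hypothetical IfZero matching the description above can be written directly in Lean, where a type may be defined by case analysis on a term:

```lean
-- A large elimination: the resulting *type* is computed from the term n.
def IfZero (n : Nat) (T1 T2 : Type) : Type :=
  match n with
  | 0     => T1
  | _ + 1 => T2

-- IfZero 0 Nat Bool computes to Nat, so a Nat inhabits it:
example : IfZero 0 Nat Bool := (42 : Nat)
-- IfZero 3 Nat Bool computes to Bool:
example : IfZero 3 Nat Bool := true
```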

In the $\texttt{T}^\texttt{vec}$ above, one cannot type a looping term or a stuck term. However, with Large Eliminations added, one can give a type to a looping or stuck term by making inconsistent implicit assumptions. To keep a term with inconsistent assumptions from being evaluated, the lambda abstractions binding those assumptions are left in place when annotations are dropped. Until all implicit assumptions are proved, such terms are not evaluated (they are considered values). This treatment preserves the strong normalization property of $\texttt{T}^\texttt{vec}$ with Large Eliminations.