# Hindley–Milner type system


A Hindley–Milner (HM) type system is a classical type system for the lambda calculus with parametric polymorphism. It is also known as Damas–Milner or Damas–Hindley–Milner. It was first described by J. Roger Hindley[1] and later rediscovered by Robin Milner.[2] Luis Damas contributed a close formal analysis and proof of the method in his PhD thesis.[3][4]

Among HM's more notable properties are its completeness and its ability to infer the most general type of a given program without programmer-supplied type annotations or other hints. Algorithm W is an efficient type inference method that performs in almost linear time with respect to the size of the source, making it practical for typing large programs.[note 1] HM is most often used for functional languages. It was first implemented as part of the type system of the programming language ML. Since then, HM has been extended in various ways, most notably with type class constraints like those in Haskell.

## Introduction

One and the same thing can be used for many purposes. A chair might be used to support a sitting person, but also as a ladder to stand on while changing a light bulb, or as a clothes valet. Besides having particular material qualities that make a chair usable as such, it also carries a particular designation for its use. When no chair is at hand, other things might serve as a seat, and the designation of a thing can be changed as quickly as one can turn an empty bottle crate upside down, changing its purpose from a container to a support.

Different uses of physically more or less identical things are usually accompanied by different names emphasizing the intended purpose. Depending on the use, seamen have a dozen or more words for a rope, though it might materially be the same thing. The same holds in everyday language, where a leash indicates different uses than a line.

In computer science, this practice of naming things by their intended use is taken to the extreme and called typing, and the names or expressions are called types:

• Contrary to the richly varying raw materials of the physical world, computing has only one raw material, bytes, and like letters in a text they are used to build up and express everything in a computer's memory.
• Programmers do not merely have a group of words indicating the different uses of bytes, but rather a plethora of formal type languages, each with its own grammar.
• There are branches of research in computer science and mathematics dedicated to developing and enhancing such languages and their theoretical foundations, with type systems and type theory as their subjects.
• Types are not only used in specifications and documentation, but are also an integral part built deeply into programming languages.
• There, they are mechanised to strongly support the programmer's tasks.

Besides structuring objects, (data) types serve as a means to validate that these objects are used as intended. Much as the crate above can only be a support or a container at any one time, a particular arrangement of bytes designated for one purpose might exclude other possible usages.

In programming, these usages are expressed by so-called functions or procedures, which serve the role of verbs in natural language. As an example of typing verbs, an English dictionary might spell "to give so. sth.", indicating that the indirect object must be a person and the direct object a physical thing. In programming, "so." and "sth." would be called types, and putting a thing into the place of "so." would be flagged as a programming error by a type checker.

Besides checking, one can use the types in this example to gain knowledge about a yet unknown word. Reading the sentence "Mary gives John a bilber", the types could be used to conclude that a "bilber" is likely a physical thing. This activity and its conclusion are called type inference. While the story unfolds, more and more information about the ominous, yet unknown "bilber" might be gained, and after a while enough details become known to finally form a complete image of that kind of thing.

The type inference method designed by Hindley and Milner does exactly this for programming languages. The advantage of type inference over type checking is that it allows a more natural and denser style of programming. Instead of starting a program text with a glossary defining what a bilber and everything else is, one can distribute this information over the text simply by using the yet undefined words and let a program collect all the details about them. The method works not only for nouns (data types) but also for verbs (function types). As a consequence, a programmer can proceed without ever mentioning types at all, while still having the full support of a type checker that validates the writing. When reading a program, the programmer can let type inference provide a full definition of everything named in the program, whenever needed.

---

Historically, type inference to this extent was developed for a particular group of programming languages, called functional languages. This branch started in 1958 with LISP, which is based on the lambda calculus and compares well with modern scripting languages like Python or Lua. LISP was mainly used for computer science research, often for symbol manipulation purposes, where large, tree-like data structures were common.

While data in LISP is dynamically typed, and types are thus only available to some degree while running a program, debugging type errors was no more of a concern then than it is in today's scripting languages. But being completely untyped, i.e. written without any explicit type information, large programs written in LISP soon became a maintenance problem, because the more or less complicated types of the data were mentioned, at best, only in the program documentation and comments.

Thus the need for a LISP-like language with proper types soon became more and more pressing. At this point, programming language development faced two challenges:

1) Polymorphism. Some kinds of data are very generic, in particular because LISP is a "list programming language". Lists are data whose type is a "list of something", e.g. a list of numbers.

Functions for such generic data types are often themselves generic. For instance, counting the number of items in a list is independent of the type of its items. Likewise for a function that adds another item to a given list. But in this case, type checking is wanted to make sure that the list stays consistent with respect to the type of its items, meaning that only numbers may be added to a list of numbers.

Types for such "generic" data and functions are called polymorphic, in that they can be used for more than one type. Adding an item can be used for a list of numbers as well as for a list of words or even a list of something. More precisely, this kind of polymorphism is called parametric polymorphism, where "something" is the parameter in "list of something".

More formally, this is written List T, with T being the type parameter. The type of a function adding a new item to a list is forall T . T -> List T -> List T, meaning that for all types T the function needs an item of type T and a list of T to produce a resulting (new) list of T.
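In Haskell-like notation, this type can be written down directly; a minimal sketch, where addItem is the hypothetical function from the example and Haskell's built-in lists play the role of List T:

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- Hypothetical addItem; the explicit forall mirrors the type
--   forall T . T -> List T -> List T   given in the text.
addItem :: forall t. t -> [t] -> [t]
addItem x xs = x : xs
```

The same definition works for a list of numbers as well as for a list of strings; the instantiation of t is inferred at each use.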

Thus, the first challenge was to design a type system strong enough to properly express parametric polymorphism.

2) Type inference. Unfortunately, when using such schemes with only a type checker, the type checker must be continuously informed about the types. The above function would need the type T as an additional first parameter, resulting in program text so cluttered with type information that it becomes unreadable. Additionally, when keying in a program, a programmer would spend a significant portion of time keying in types over and over.

As an example, constructing a list "(1 2)" of two numbers would mean to write:

  (addItem Number 1 (addItem Number 2 (emptyList Number)))


This example is quite typical: every third word is a type, monotonously serving the type checker at every step. This becomes worse when the types grow more complex; a single type can easily reach the length of the whole example in real programs. The method expressed by the code then becomes buried under types.
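With full type inference, the same list can be written without any type arguments at all. A sketch in Haskell, where the built-in (:) and [] take the roles of addItem and emptyList:

```haskell
-- The list (1 2) again, with every type argument inferred.
-- The signature is optional; without it the type is inferred.
example :: [Integer]
example = 1 : 2 : []
```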

To handle the issue, effective methods for type inference were the subject of research, and the Hindley–Milner method covered in this article is one of them. It was first used in ML (1973) and is used in an extended form in Haskell (1990). The HM type inference method is strong enough to infer types not only for expressions but for whole programs, including procedures and local definitions, preserving the type-less style of programming for the user.

The following text gives an impression of the resulting programming style for the quicksort procedure:

quickSort []     = []                           -- Sort an empty list
quickSort (x:xs) = quickSort (filter (<x) xs)   -- Sort the left part of the list
                   ++ [x] ++                    -- Insert pivot between two sorted parts
                   quickSort (filter (>=x) xs)  -- Sort the right part of the list


Though all of the functions in the above example need type parameters, types are nowhere mentioned. The code is statically type-checked even though the type of the function defined is unknown and must be inferred to type-check its applications in the body.
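For illustration, the type that Haskell infers for quickSort can be queried with GHCi's :type command; it is Ord a => [a] -> [a], where the Ord constraint stems from Haskell's type classes, the extension of plain HM mentioned in the introduction. A self-contained sketch with the inferred signature written out:

```haskell
-- quickSort as above; the signature below is what GHC infers and is
-- written out here only for illustration.
quickSort :: Ord a => [a] -> [a]
quickSort []     = []
quickSort (x:xs) = quickSort (filter (< x) xs)
                   ++ [x] ++
                   quickSort (filter (>= x) xs)
```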

Over the years, other branches of programming languages added their own versions of parametric types. E.g., for C++ the template library was introduced in 1998 and came into use; Java introduced generics in 2004. As programming with type parameters became more common, problems like the ones sketched for LISP surfaced in imperative languages too, perhaps not as pressingly as in the functional branch. As a consequence, these languages were extended to support some type inference techniques, for instance via the auto feature in C++11 (2011). Unfortunately, the stronger type inference methods developed for functional programming cannot simply be integrated there, as their type system features are in part incompatible, and a programming language must be designed from the ground up when pervasive type inference is wanted.

## Features of the HM method

Before presenting the type system and the related algorithms, the following sections make some features of HM more formally precise.

### Type-Checking vs. Type-Inference

In a typing, an expression E is paired with a type T, formally written as E : T. Usually, a typing only makes sense within some context, which is omitted here.

In this setting, the following questions are of particular interest:

1. E : T? In this case, both an expression and a type are given. Now, is E really a T? This scenario is known as type-checking.
2. E : _? Here, only the expression is known. So what is E? If there is a way to derive a type for E, the method accomplishes a type inference.
3. _ : T? The other way round, given only a type, is there any expression for it or does the type have no values? Is there any (example for) T? And in light of the Curry–Howard isomorphism, is there a proof for T?

For the simply typed lambda calculus, all three questions are decidable. The situation is not as comfortable when more expressive types are allowed. Additionally, the simply typed lambda calculus makes the types of the parameters of each function explicit, while they are not needed in HM. While HM is a type inference method, it can also be used for type checking: a type is first inferred from E and then compared with the desired result type. The third question becomes of interest when looking at recursively defined functions at the end of the article.

### Monomorphism vs. polymorphism

In the simply typed lambda calculus, types ${\displaystyle T}$ are either atomic type constants or function types of form ${\displaystyle T\rightarrow T}$. Such types are monomorphic. Typical examples are the types used for arithmetic values:

3   : Number
add : Number -> Number -> Number


Contrary to this, the untyped lambda calculus is neutral to typing altogether, and many of its functions can be meaningfully applied to arguments of all types. The trivial example is the identity function

id ${\displaystyle \equiv \lambda \ x\ .\ x}$

which simply returns whatever value it is applied to. Less trivial examples include parametric types like lists.

While polymorphism in general means that operations accept values of more than one type, the polymorphism here is parametric. One also finds the notion of type schemes in the literature, emphasizing the parametric nature of the polymorphism. In addition to type constants, (quantified) type variables are allowed, e.g.:

cons : forall a . a -> List a -> List a
nil  : forall a . List a
id   : forall a . a -> a


Polymorphic types can become monomorphic by consistent substitution of their variables. Examples of monomorphic instances are:

id'  : String -> String
nil' : List Number


More generally, types are polymorphic when they contain type variables, while types without them are monomorphic.
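In Haskell, such monomorphic instances arise simply by giving a polymorphic value a more specific signature; a sketch using the id' and nil' from above:

```haskell
-- Monomorphic instances of the polymorphic  id  and  []  (nil):
id' :: String -> String
id' = id          -- forall a. a -> a  instantiated at  a = String

nil' :: [Integer] -- forall a. List a  instantiated at  a = Integer
nil' = []
```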

Contrary to the type systems used e.g. in Pascal (1970) or C (1972), which only support monomorphic types, HM is designed with emphasis on parametric polymorphism. The successors of the languages mentioned, like C++ (1985), focused on different types of polymorphism, namely subtyping in connection with object-oriented programming and overloading. While subtyping is incompatible with HM, a variant of systematic overloading is added to an HM-based type system in Haskell.

### Let-Polymorphism

Extending the simply typed lambda calculus towards polymorphism one has to define when deriving an instance of a value is admissible. Ideally, this would be allowed with any use of a bound variable, as in:

 (λ id .  ... (id 3) ... (id "text") ... ) (λ x . x)


Unfortunately, type inference in such a system is not decidable. Instead, HM provides a so-called let-polymorphism of the form

 let id = λ x . x
in ... (id 3) ... (id "text") ...


restricting the binding mechanism in an extension of the expression syntax. Only values bound in a let construct are subject to instantiation, i.e. are polymorphic, while the parameters in lambda-abstractions are treated as monomorphic.
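Haskell inherits exactly this distinction; a minimal sketch (the rejected variant is shown only as a comment, since it does not compile):

```haskell
-- The let-bound idf is generalized and may be used at two types:
okay :: (Int, String)
okay = let idf = \x -> x
       in (idf 3, idf "text")

-- The lambda-bound f is monomorphic, so the analogous
--   bad = (\f -> (f 3, f "text")) (\x -> x)
-- is rejected by the type checker.
```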

## Overview

The remainder of the article is more technical as it has to present the HM as it is handled in the literature. It proceeds as follows:

• The HM type system is defined. This is finally done by means of a deduction system making precise what expressions have what type, if any.
• Starting from there, it works towards an implementation of the type inference method. After introducing a syntax driven variant of the above deductive system, it sketches an efficient implementation (algorithm J), appealing mostly to the reader's metalogical intuition in this step.
• Because it remains open whether this implementation indeed realises the initial deduction system, a less efficient implementation (algorithm W), is introduced and its use in a proof is hinted.
• Finally, further topics related to the algorithm are discussed.

The description as deduction systems is used throughout, even for the two algorithms, to make the various forms in which the HM method is presented directly comparable.

## The HM type system

The type system is formally described by syntax rules that fix a language for the expressions, types, etc. The presentation of the syntax is not too formal, in that it is written down not with the surface grammar in mind, but rather the depth grammar, leaving some syntactical details open. This form of presentation is usual. Building on this, type rules are used to define how expressions and types are related. As before, the form is a bit liberal.

### Syntax

 Expressions ${\displaystyle {\begin{array}{lrll}\\e&=&x&{\textrm {variable}}\\&\vert &e_{1}\ e_{2}&{\textrm {application}}\\&\vert &\lambda \ x\ .\ e&{\textrm {abstraction}}\\&\vert &{\mathtt {let}}\ x=e_{1}\ {\mathtt {in}}\ e_{2}&\\\\\end{array}}}$ Types ${\displaystyle {\begin{array}{llrll}\\{\textrm {mono}}&\tau &=&\alpha &\ {\textrm {variable}}\\&&\vert &C\ \tau \dots \tau &\ {\textrm {application}}\\{\textrm {poly}}&\sigma &=&\tau \\&&\vert &\forall \ \alpha \ .\ \sigma &\ {\textrm {quantifier}}\\\\\end{array}}}$ Context and Typing ${\displaystyle {\begin{array}{llrl}\\{\text{Context}}&\Gamma &=&\epsilon \ {\mathtt {(empty)}}\\&&\vert &\Gamma ,\ x:\sigma \\{\text{Typing}}&&=&\Gamma \vdash e:\sigma \\\\\end{array}}}$ Free Type Variables ${\displaystyle {\begin{array}{ll}\\{\text{free}}(\ \alpha \ )&=\ \left\{\alpha \right\}\\{\text{free}}(\ D\ \tau _{1}\dots \tau _{n}\ )&=\ \bigcup \limits _{i=1}^{n}{{\text{free}}(\ \tau _{i}\ )}\\{\text{free}}(\ \Gamma \ )&=\ \bigcup \limits _{x:\sigma \in \Gamma }{\text{free}}(\ \sigma \ )\\\\{\text{free}}(\ \forall \ \alpha \ .\ \sigma \ )&=\ {\text{free}}(\ \sigma \ )\ -\ \left\{\alpha \right\}\\{\text{free}}(\ \Gamma \vdash e:\sigma \ )&=\ {\text{free}}(\ \sigma \ )\ -\ {\text{free}}(\ \Gamma \ )\\\\\end{array}}}$

The expressions to be typed are exactly those of the lambda calculus, enhanced by a let-expression as shown in the adjacent table. Parentheses can be used to disambiguate an expression. The application is left-binding and binds stronger than abstraction or the let-in construct.

Types as a whole are syntactically split into two groups, called mono- and polytypes.[note 2]

#### Monotypes

Monotypes always designate a particular type. Monotypes ${\displaystyle \tau }$ are syntactically represented as terms.

Examples of monotypes include type constants like ${\displaystyle {\mathtt {int}}}$ or ${\displaystyle {\mathtt {string}}}$, and parametric types like ${\displaystyle {\mathtt {Map\ (Set\ string)\ int}}}$. The latter types are examples of applications of type functions, for example, from the set ${\displaystyle \{{\mathtt {Map^{2},\ Set^{1},\ string^{0},\ int^{0}}},\ \rightarrow ^{2}\}}$, where the superscript indicates the number of type parameters. The complete set of type functions ${\displaystyle C}$ is arbitrary in HM,[note 3] except that it must contain at least ${\displaystyle \rightarrow ^{2}}$, the type of functions. It is often written in infix notation for convenience. For example, a function mapping integers to strings has type ${\displaystyle {\mathtt {int}}\rightarrow {\mathtt {string}}}$. Again, parentheses can be used to disambiguate a type expression. The application binds stronger than the infix arrow, which is right-binding.

Type variables are admitted as monotypes. Monotypes are not to be confused with monomorphic types, which exclude variables and allow only ground terms.

Two monotypes are equal if they have identical terms.

#### Polytypes

Polytypes (or type schemes) are types containing variables bound by one or more for-all quantifiers, e.g. ${\displaystyle \forall \alpha .\alpha \rightarrow \alpha }$.

A function with polytype ${\displaystyle \forall \alpha .\alpha \rightarrow \alpha }$ can map any value to a value of the same type, and the identity function is a value of this type.

As another example, ${\displaystyle \forall \alpha .({\mathtt {Set}}\ \alpha )\rightarrow {\mathtt {int}}}$ is the type of a function mapping all finite sets to integers. A function counting the members of a set is a value of this type.

Note that quantifiers can appear only at the top level, i.e. a type ${\displaystyle \forall \alpha .\alpha \rightarrow \forall \alpha .\alpha }$, for instance, is excluded by the syntax of types. Note also that monotypes are included in the polytypes, so a type has the general form ${\displaystyle \forall \alpha _{1}\dots \forall \alpha _{n}.\tau }$, where ${\displaystyle \tau }$ is a monotype.

Equality of polytypes is up to reordering the quantification and renaming the quantified variables (${\displaystyle \alpha }$-conversion). Further, quantified variables not occurring in the monotype can be dropped.

#### Context and Typing

To bring together meaningfully the still disjoint parts of the syntax, expressions and types, a third part is needed: the context. Syntactically, this is a list of pairs ${\displaystyle x:\sigma }$, called assignments, assumptions or bindings, stating for each value variable ${\displaystyle x_{i}}$ therein a type ${\displaystyle \sigma _{i}}$. All three parts combined give a typing judgment of the form ${\displaystyle \Gamma \ \vdash \ e:\sigma }$, stating that under assumptions ${\displaystyle \Gamma }$, the expression ${\displaystyle e}$ has type ${\displaystyle \sigma }$.

#### Free type variables

In a type ${\displaystyle \forall \alpha _{1}\dots \forall \alpha _{n}.\tau }$, the symbol ${\displaystyle \forall }$ is the quantifier binding the type variables ${\displaystyle \alpha _{i}}$ in the monotype ${\displaystyle \tau }$. The variables ${\displaystyle \alpha _{i}}$ are called quantified; any occurrence of a quantified type variable in ${\displaystyle \tau }$ is called bound, and all unbound type variables in ${\displaystyle \tau }$ are called free. In addition to quantification by ${\displaystyle \forall }$ in polytypes, type variables can also be bound by occurring in the context, but with the inverse effect on the right hand side of the ${\displaystyle \vdash }$: such variables then behave like type constants there. Finally, a type variable may legally occur unbound in a typing, in which case it is implicitly all-quantified.

The presence of both bound and unbound type variables is a bit uncommon in programming languages. Often, all type variables are implicitly treated as all-quantified. For instance, one does not have clauses with free variables in Prolog. Likewise in Haskell, in the absence of the ScopedTypeVariables language extension, all type variables implicitly occur quantified, i.e. the Haskell type a -> a means ${\displaystyle \forall \alpha .\alpha \rightarrow \alpha }$ here.
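A sketch in plain Haskell (ident is an illustrative name):

```haskell
-- The signature  a -> a  is implicitly read as  forall a. a -> a,
-- so ident can be used at any type.
ident :: a -> a
ident x = x
```

Both ident 5 and ident "s" type-check, instantiating the implicitly quantified a differently at each use.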

### Type order

Polymorphism means that one and the same expression can have (perhaps infinitely) many types. But in this type system, these types are not completely unrelated, but rather orchestrated by the parametric polymorphism.

As an example, the identity ${\displaystyle \lambda x.x}$ can have ${\displaystyle \forall \alpha .\alpha \rightarrow \alpha }$ as its type, as well as ${\displaystyle {\texttt {string}}\rightarrow {\texttt {string}}}$ or ${\displaystyle {\texttt {int}}\rightarrow {\texttt {int}}}$ and many others, but not ${\displaystyle {\texttt {int}}\rightarrow {\texttt {string}}}$. The most general type for this function is ${\displaystyle \forall \alpha .\alpha \rightarrow \alpha }$, while the others are more specific and can be derived from the general one by consistently substituting another type for the type parameter, i.e. the quantified variable ${\displaystyle \alpha }$. The counter-example fails because the replacement is not consistent.

The consistent replacement can be made formal by applying a substitution ${\displaystyle S=\left\{\ \alpha _{i}\mapsto \tau _{i},\ \dots \ \right\}}$ to the term of a type ${\displaystyle \tau }$, written ${\displaystyle S\tau }$. As the example suggests, substitution is not only strongly related to an order expressing that a type is more or less special, but also to all-quantification, which allows the substitution to be applied.

 Specialization Rule ${\displaystyle \displaystyle {\frac {\tau '=\left\{\alpha _{i}\mapsto \tau _{i}\right\}\tau \quad \beta _{i}\not \in {\textrm {free}}(\forall \alpha _{1}...\forall \alpha _{n}.\tau )}{\forall \alpha _{1}...\forall \alpha _{n}.\tau \sqsubseteq \forall \beta _{1}...\forall \beta _{m}.\tau '}}}$

Formally, in HM, a type ${\displaystyle \sigma }$ is more general than ${\displaystyle \sigma '}$, written ${\displaystyle \sigma \sqsubseteq \sigma '}$, if some quantified variable in ${\displaystyle \sigma }$ can be consistently substituted so that one obtains ${\displaystyle \sigma '}$, as shown in the side bar. This order is part of the definition of the type system.

While substituting a monomorphic (ground) type for a quantified variable is straightforward, substituting a polytype has some pitfalls caused by the presence of free variables. Most particularly, unbound variables must not be replaced; they are treated as constants here. Additionally, quantifications can only occur at the top level: when substituting a parametric type, one has to lift its quantifiers. The table on the right makes the rule precise.

Alternatively, consider an equivalent notation for the polytypes without quantifiers, in which quantified variables are represented by a different set of symbols. In such a notation, specialization reduces to plain consistent replacement of such variables.

The relation ${\displaystyle \sqsubseteq }$ is a partial order and ${\displaystyle \forall \alpha .\alpha }$ is its smallest element.
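The substitution operation underlying this order can be sketched on a small representation of monotypes; Monotype, Subst and applySubst below are illustrative names, not part of any standard library:

```haskell
import Data.Maybe (fromMaybe)

-- Monotypes: type variables and applications of type functions.
-- The function type  a -> int  is  TApp "->" [TVar "a", TApp "int" []].
data Monotype = TVar String | TApp String [Monotype]
  deriving (Eq, Show)

-- A substitution  { alpha_i |-> tau_i }  as an association list.
type Subst = [(String, Monotype)]

-- Apply a substitution to a monotype, replacing variables consistently.
applySubst :: Subst -> Monotype -> Monotype
applySubst s (TVar a)    = fromMaybe (TVar a) (lookup a s)
applySubst s (TApp c ts) = TApp c (map (applySubst s) ts)
```

For instance, applying { a ↦ int } to a -> a yields int -> int, exactly the specialization ${\displaystyle \forall \alpha .\alpha \rightarrow \alpha \sqsubseteq int\rightarrow int}$ used in the examples below.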

#### Principal type

While specialization of a type scheme is one use of the order, it plays a crucial second role in the type system. Type inference with polymorphism faces the challenge of summarizing all possible types an expression may have. The order guarantees that such a summary exists as the most general type of the expression.

#### Substitution in typings

The type order defined above can be extended to typings because the implied all-quantification of typings enables consistent replacement:

${\displaystyle \Gamma \vdash e:\sigma \quad \Longrightarrow \quad S\Gamma \vdash e:S\sigma }$

Contrary to the specialisation rule, this is not part of the definition; like the implicit all-quantification, it is rather a consequence of the type rules defined next. Free type variables in a typing serve as placeholders for possible refinement. The reason a type variable bound by the environment cannot be substituted in the specialisation rule is again that the replacement has to be consistent, and would need to include the whole typing.

### Deductive system

 The Syntax of Rules ${\displaystyle {\begin{array}{lrl}{\text{Predicate}}&=&\sigma \sqsubseteq \sigma '\\&\vert \ &\alpha \not \in free(\Gamma )\\&\vert \ &x:\alpha \in \Gamma \\\\{\text{Judgment}}&=&{\text{Typing}}\\{\text{Premise}}&=&{\text{Judgment}}\ \vert \ {\text{Predicate}}\\{\text{Conclusion}}&=&{\text{Judgment}}\\\\{\text{Rule}}&=&\displaystyle {\frac {{\textrm {Premise}}\ \dots }{\textrm {Conclusion}}}\quad [{\mathtt {Name}}]\end{array}}}$

The syntax of HM is carried forward to the syntax of the inference rules that form the body of the formal system, by using the typings as judgments. Each of the rules defines what conclusion can be drawn from what premises. In addition to the judgments, some extra conditions introduced above may be used as premises, too.

A proof using the rules is a sequence of judgments such that all premises are listed before a conclusion. The examples below show a possible format of proofs. From left to right, each line shows the conclusion, the ${\displaystyle [{\mathtt {Name}}]}$ of the rule applied and the premises, either by referring to an earlier line (number) if the premise is a judgment or by making the predicate explicit.

#### Typing rules

 Declarative Rule System ${\displaystyle {\begin{array}{cl}\displaystyle {\frac {x:\sigma \in \Gamma }{\Gamma \vdash _{D}x:\sigma }}&[{\mathtt {Var}}]\\\\\displaystyle {\frac {\Gamma \vdash _{D}e_{0}:\tau \rightarrow \tau '\quad \quad \Gamma \vdash _{D}e_{1}:\tau }{\Gamma \vdash _{D}e_{0}\ e_{1}:\tau '}}&[{\mathtt {App}}]\\\\\displaystyle {\frac {\Gamma ,\;x:\tau \vdash _{D}e:\tau '}{\Gamma \vdash _{D}\lambda \ x\ .\ e:\tau \rightarrow \tau '}}&[{\mathtt {Abs}}]\\\\\displaystyle {\frac {\Gamma \vdash _{D}e_{0}:\sigma \quad \quad \Gamma ,\,x:\sigma \vdash _{D}e_{1}:\tau }{\Gamma \vdash _{D}{\mathtt {let}}\ x=e_{0}\ {\mathtt {in}}\ e_{1}:\tau }}&[{\mathtt {Let}}]\\\\\\\displaystyle {\frac {\Gamma \vdash _{D}e:\sigma '\quad \sigma '\sqsubseteq \sigma }{\Gamma \vdash _{D}e:\sigma }}&[{\mathtt {Inst}}]\\\\\displaystyle {\frac {\Gamma \vdash _{D}e:\sigma \quad \alpha \notin {\text{free}}(\Gamma )}{\Gamma \vdash _{D}e:\forall \ \alpha \ .\ \sigma }}&[{\mathtt {Gen}}]\\\\\end{array}}}$

The side box shows the deduction rules of the HM type system. One can roughly divide the rules into two groups:

The first four rules ${\displaystyle [{\mathtt {Var}}]}$ (variable or function access), ${\displaystyle [{\mathtt {App}}]}$ (application, i.e. function call with one parameter), ${\displaystyle [{\mathtt {Abs}}]}$ (abstraction, i.e. function declaration) and ${\displaystyle [{\mathtt {Let}}]}$ (variable declaration) are centered around the syntax, presenting one rule for each of the expression forms. Their meaning is obvious at first glance, as they decompose each expression, prove their sub-expressions, and finally combine the individual types found in the premises to the type in the conclusion.

The second group is formed by the remaining two rules ${\displaystyle [{\mathtt {Inst}}]}$ and ${\displaystyle [{\mathtt {Gen}}]}$. They handle specialization and generalization of types. While the rule ${\displaystyle [{\mathtt {Inst}}]}$ should be clear from the section on specialization above, ${\displaystyle [{\mathtt {Gen}}]}$ complements the former, working in the opposite direction. It allows generalization, i.e. to quantify monotype variables not bound in the context.

The following two examples exercise the rule system in action. Since both the expression and the type are given, they are a type-checking use of the rules.

Example: A proof for ${\displaystyle \Gamma \vdash _{D}id(n):int}$ where ${\displaystyle \Gamma =id:\forall \alpha .\alpha \rightarrow \alpha ,\ n:int}$, could be written

${\displaystyle {\begin{array}{llll}1:&\Gamma \vdash _{D}id:\forall \alpha .\alpha \rightarrow \alpha &[{\mathtt {Var}}]&(id:\forall \alpha .\alpha \rightarrow \alpha \in \Gamma )\\2:&\Gamma \vdash _{D}id:int\rightarrow int&[{\mathtt {Inst}}]&(1),\ (\forall \alpha .\alpha \rightarrow \alpha \sqsubseteq int\rightarrow int)\\3:&\Gamma \vdash _{D}n:int&[{\mathtt {Var}}]&(n:int\in \Gamma )\\4:&\Gamma \vdash _{D}id(n):int&[{\mathtt {App}}]&(2),\ (3)\\\end{array}}}$

Example: To demonstrate generalization, ${\displaystyle \vdash _{D}\ {\textbf {let}}\,id=\lambda x.x\ {\textbf {in}}\ id\,:\,\forall \alpha .\alpha \rightarrow \alpha }$ is shown below:

${\displaystyle {\begin{array}{llll}1:&x:\alpha \vdash _{D}x:\alpha &[{\mathtt {Var}}]&(x:\alpha \in \left\{x:\alpha \right\})\\2:&\vdash _{D}\lambda x.x:\alpha \rightarrow \alpha &[{\mathtt {Abs}}]&(1)\\3:&\vdash _{D}\lambda x.x:\forall \alpha .\alpha \rightarrow \alpha &[{\mathtt {Gen}}]&(2),\ (\alpha \not \in free(\epsilon ))\\4:&id:\forall \alpha .\alpha \rightarrow \alpha \vdash _{D}id:\forall \alpha .\alpha \rightarrow \alpha &[{\mathtt {Var}}]&(id:\forall \alpha .\alpha \rightarrow \alpha \in \left\{id:\forall \alpha .\alpha \rightarrow \alpha \right\})\\5:&\vdash _{D}{\textbf {let}}\,id=\lambda x.x\ {\textbf {in}}\ id\,:\,\forall \alpha .\alpha \rightarrow \alpha &[{\mathtt {Let}}]&(3),\ (4)\\\end{array}}}$

### Let-polymorphism

Though not immediately visible, the rule set encodes a regulation of the circumstances under which a type may or may not be generalized, by a slightly varying use of mono- and polytypes in the rules ${\displaystyle [{\mathtt {Abs}}]}$ and ${\displaystyle [{\mathtt {Let}}]}$. Remember that ${\displaystyle \sigma }$ and ${\displaystyle \tau }$ denote poly- and monotypes respectively.

In rule ${\displaystyle [{\mathtt {Abs}}]}$, the value variable of the parameter of the function ${\displaystyle \lambda x.e}$ is added to the context with a monomorphic type through the premise ${\displaystyle \Gamma ,\ x:\tau \vdash _{D}e:\tau '}$, while in the rule ${\displaystyle [{\mathtt {Let}}]}$, the variable enters the environment in polymorphic form ${\displaystyle \Gamma ,\ x:\sigma \vdash _{D}e_{1}:\tau }$. Though in both cases the presence of ${\displaystyle x}$ in the context prevents the use of the generalisation rule for any free variable in the assignment, this regulation forces the type of the parameter ${\displaystyle x}$ in a ${\displaystyle \lambda }$-expression to remain monomorphic, while in a let-expression, the variable can be introduced polymorphically, making specializations possible.

As a consequence of this regulation, ${\displaystyle \lambda f.(f\,{\textrm {true}},f\,{\textrm {0}})}$ cannot be typed, since the parameter ${\displaystyle f}$ is in a monomorphic position, while ${\displaystyle {\textbf {let}}\ f=\lambda x.x\,{\textbf {in}}\,(f\,{\textrm {true}},f\,{\textrm {0}})}$ has type ${\displaystyle (bool,int)}$, because ${\displaystyle f}$, having been introduced in a let-expression, is treated polymorphically.

### Generalization rule

The generalisation rule is also worth a closer look. Here, the all-quantification implicit in the premise ${\displaystyle \Gamma \vdash e:\sigma }$ is simply moved to the right hand side of ${\displaystyle \vdash _{D}}$ in the conclusion. This is possible since ${\displaystyle \alpha }$ does not occur free in the context. Again, while this makes the generalisation rule plausible, it is not really a consequence. Vice versa, the generalisation rule is part of the definition of HM's type system, and the implicit all-quantification is a consequence.

## An inference algorithm

Now that the deduction system of HM is at hand, one could present an algorithm and validate it with respect to the rules. Alternatively, it might be possible to derive it by taking a closer look at how the rules interact and proofs are formed. This is done in the remainder of this article, focusing on the possible decisions one can make while proving a typing.

### Degrees of freedom choosing the rules

Isolating the points in a proof where no decision is possible at all, the first group of rules, centered around the syntax, leaves no choice: to each syntactical rule corresponds a unique typing rule, which determines a part of the proof. Between the conclusion and the premises of these fixed parts, chains of ${\displaystyle [{\mathtt {Inst}}]}$ and ${\displaystyle [{\mathtt {Gen}}]}$ could occur; such a chain could also exist between the conclusion of the proof and the rule for the topmost expression. All proofs must have the shape so sketched.

Because the only choice in a proof with respect to rule selection is in the ${\displaystyle [{\mathtt {Inst}}]}$ and ${\displaystyle [{\mathtt {Gen}}]}$ chains, the form of the proof suggests the question of whether it can be made more precise where these chains might be needed. This is in fact possible, and it leads to a variant of the rule system with no such rules.

### Syntax-directed rule system

 Syntactical Rule System ${\displaystyle {\begin{array}{cl}\displaystyle {\frac {x:\sigma \in \Gamma \quad \sigma \sqsubseteq \tau }{\Gamma \vdash _{S}x:\tau }}&[{\mathtt {Var}}]\\\\\displaystyle {\frac {\Gamma \vdash _{S}e_{0}:\tau \rightarrow \tau '\quad \quad \Gamma \vdash _{S}e_{1}:\tau }{\Gamma \vdash _{S}e_{0}\ e_{1}:\tau '}}&[{\mathtt {App}}]\\\\\displaystyle {\frac {\Gamma ,\;x:\tau \vdash _{S}e:\tau '}{\Gamma \vdash _{S}\lambda \ x\ .\ e:\tau \rightarrow \tau '}}&[{\mathtt {Abs}}]\\\\\displaystyle {\frac {\Gamma \vdash _{S}e_{0}:\tau \quad \quad \Gamma ,\,x:{\bar {\Gamma }}(\tau )\vdash _{S}e_{1}:\tau '}{\Gamma \vdash _{S}{\mathtt {let}}\ x=e_{0}\ {\mathtt {in}}\ e_{1}:\tau '}}&[{\mathtt {Let}}]\end{array}}}$ Generalization ${\displaystyle {\bar {\Gamma }}(\tau )=\forall \ {\hat {\alpha }}\ .\ \tau \quad \quad {\hat {\alpha }}={\textrm {free}}(\tau )-{\textrm {free}}(\Gamma )}$

A contemporary treatment of HM uses a purely syntax-directed rule system due to Clement[5] as an intermediate step. In this system, the specialization is located directly after the original ${\displaystyle [{\mathtt {Var}}]}$ rule and merged into it, while the generalization becomes part of the ${\displaystyle [{\mathtt {Let}}]}$ rule. There the generalization is also determined to always produce the most general type by introducing the function ${\displaystyle {\bar {\Gamma }}(\tau )}$, which quantifies all monotype variables not bound in ${\displaystyle \Gamma }$.

Formally, to validate that this new rule system ${\displaystyle \vdash _{S}}$ is equivalent to the original ${\displaystyle \vdash _{D}}$, one has to show that ${\displaystyle \Gamma \vdash _{D}\ e:\sigma \Leftrightarrow \Gamma \vdash _{S}\ e:\sigma }$, which splits into two sub-proofs:

• ${\displaystyle \Gamma \vdash _{D}\ e:\sigma \Leftarrow \Gamma \vdash _{S}\ e:\sigma }$ (Consistency)
• ${\displaystyle \Gamma \vdash _{D}\ e:\sigma \Rightarrow \Gamma \vdash _{S}\ e:\sigma }$ (Completeness)

While consistency can be seen by decomposing the rules ${\displaystyle [{\mathtt {Let}}]}$ and ${\displaystyle [{\mathtt {Var}}]}$ of ${\displaystyle \vdash _{S}}$ into proofs in ${\displaystyle \vdash _{D}}$, it is also visible that ${\displaystyle \vdash _{S}}$ is incomplete: one cannot show ${\displaystyle \lambda \ x.x:\forall \alpha .\alpha \rightarrow \alpha }$ in ${\displaystyle \vdash _{S}}$, for instance, but only ${\displaystyle \lambda \ x.x:\alpha \rightarrow \alpha }$. A slightly weaker version of completeness is provable,[6] though, namely

• ${\displaystyle \Gamma \vdash _{D}\ e:\sigma \Rightarrow \Gamma \vdash _{S}\ e:\tau \wedge {\bar {\Gamma }}(\tau )\sqsubseteq \sigma }$

implying that one can derive the principal type for an expression in ${\displaystyle \vdash _{S}}$, allowing the proof to be generalized in the end.

Comparing ${\displaystyle \vdash _{D}}$ and ${\displaystyle \vdash _{S}}$, note that now only monotypes appear in the judgments of all rules. Additionally, the shape of any possible proof with the deduction system is now identical to the shape of the expression (both seen as trees). Thus the expression fully determines the shape of the proof. In ${\displaystyle \vdash _{D}}$, the shape was determined only with respect to all rules except ${\displaystyle [{\mathtt {Inst}}]}$ and ${\displaystyle [{\mathtt {Gen}}]}$, which allow building arbitrarily long branches (chains) between the other nodes.

### Degrees of freedom instantiating the rules

Now that the shape of the proof is known, one is already close to formulating a type inference algorithm. Because any proof for a given expression must have the same shape, one can assume the monotypes in the proof's judgements to be undetermined and consider how to determine them.

Here, the substitution (specialisation) order comes into play. Although at first glance one cannot determine the types locally, the hope is that it is possible to refine them with the help of the order while traversing the proof tree, additionally assuming (because the resulting algorithm is to become an inference method) that the type in any premise will be determined as the best possible. And in fact, one can, as a look at the rules of ${\displaystyle \vdash _{S}}$ suggests:

• ${\displaystyle [Abs]}$: The critical choice is ${\displaystyle \tau }$. At this point, nothing is known about ${\displaystyle \tau }$, so one can only assume the most general type, which is ${\displaystyle \forall \alpha .\alpha }$. The plan is to specialize the type if it should become necessary. Unfortunately, a polytype is not permitted in this place, so some ${\displaystyle \alpha }$ has to do for the moment. To avoid unwanted captures, a type variable not yet in the proof is a safe choice. Additionally, one has to keep in mind that this monotype is not yet fixed, but might be further refined.
• ${\displaystyle [Var]}$: The choice is how to refine ${\displaystyle \sigma }$. Because any choice of a type ${\displaystyle \tau }$ here depends on the usage of the variable, which is not locally known, the safest bet is the most general one. Using the same method as above one can instantiate all quantified variables in ${\displaystyle \sigma }$ with fresh monotype variables, again keeping them open to further refinement.
• ${\displaystyle [Let]}$: The rule does not leave any choice. Done.
• ${\displaystyle [App]}$: Only the application rule might force a refinement to the variables "opened" so far.

The first premise forces the outcome of the inference to be of the form ${\displaystyle \tau \rightarrow \tau '}$.

• If it is, then fine. One can later pick its ${\displaystyle \tau '}$ for the result.
• If not, it might be an open variable. Then this can be refined to the required form with two new variables as before.
• Otherwise, the type checking fails because the first premise inferred a type which is not and cannot be made into a function type.

The second premise requires that the inferred type is equal to ${\displaystyle \tau }$ of the first premise. Now there are two possibly different types, perhaps with open type variables, at hand to compare and to make equal if possible. If it is possible, a refinement is found; if not, a type error is detected again. An effective method to "make two terms equal" by substitution is known: Robinson's unification, in combination with the so-called union-find algorithm.

To briefly summarize the union-find algorithm, given the set of all types in a proof, it allows one to group them together into equivalence classes by means of a ${\displaystyle {\mathtt {union}}}$ procedure and to pick a representative for each such class using a ${\displaystyle {\mathtt {find}}}$ procedure. Emphasizing the word procedure in the sense of side effect, we're clearly leaving the realm of logic in order to prepare an effective algorithm. The representative of a ${\displaystyle {\mathtt {union}}(a,b)}$ is determined such that, if both ${\displaystyle a}$ and ${\displaystyle b}$ are type variables then the representative is arbitrarily one of them, but while uniting a variable and a term, the term becomes the representative. Assuming an implementation of union-find at hand, one can formulate the unification of two monotypes as follows:

unify(ta, tb):
  ta = find(ta)
  tb = find(tb)
  if both ta, tb are terms of the form D p1..pn with identical D, n then
    unify(ta[i], tb[i]) for each corresponding i-th parameter
  else if at least one of ta, tb is a type variable then
    union(ta, tb)
  else
    error 'types do not match'
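The pseudocode above can be made concrete. The following is a minimal Python sketch (class and function names are illustrative, not part of any standard library) of unification over a union-find structure, in which uniting a variable with a term makes the term the representative:

```python
class TypeVar:
    """A type variable; its union-find parent initially points to itself."""
    def __init__(self, name):
        self.name = name
        self.parent = self

class TypeOp:
    """A type term D p1..pn, e.g. a function type '->' with two parameters."""
    def __init__(self, name, args):
        self.name = name
        self.args = args
        self.parent = self

def find(t):
    # Follow parent links to the representative, compressing the path.
    while t.parent is not t:
        t.parent = t.parent.parent
        t = t.parent
    return t

def union(a, b):
    # Uniting a variable with a term makes the term the representative;
    # for two variables, one of them is (arbitrarily) chosen.
    if isinstance(a, TypeVar):
        a.parent = b
    else:
        b.parent = a

def unify(ta, tb):
    ta, tb = find(ta), find(tb)
    if isinstance(ta, TypeOp) and isinstance(tb, TypeOp):
        if ta.name != tb.name or len(ta.args) != len(tb.args):
            raise TypeError("types do not match")
        for pa, pb in zip(ta.args, tb.args):
            unify(pa, pb)
    elif ta is not tb:
        union(ta, tb)
```

Unifying, say, ${\displaystyle \alpha \rightarrow \beta }$ with ${\displaystyle int\rightarrow bool}$ then makes ${\displaystyle int}$ the representative of ${\displaystyle \alpha }$ and ${\displaystyle bool}$ that of ${\displaystyle \beta }$.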


Now having a sketch of an inference algorithm at hand, a more formal presentation is given in the next section. It is described in Milner[2] P. 370 ff. as algorithm J.

### Algorithm J

 Algorithm J ${\displaystyle {\begin{array}{cl}\displaystyle {\frac {x:\sigma \in \Gamma \quad \tau ={\mathit {inst}}(\sigma )}{\Gamma \vdash _{J}x:\tau }}&[{\mathtt {Var}}]\\\\\displaystyle {\frac {\Gamma \vdash _{J}e_{0}:\tau _{0}\quad \Gamma \vdash _{J}e_{1}:\tau _{1}\quad \tau '={\mathit {newvar}}\quad {\mathit {unify}}(\tau _{0},\ \tau _{1}\rightarrow \tau ')}{\Gamma \vdash _{J}e_{0}\ e_{1}:\tau '}}&[{\mathtt {App}}]\\\\\displaystyle {\frac {\tau ={\mathit {newvar}}\quad \Gamma ,\;x:\tau \vdash _{J}e:\tau '}{\Gamma \vdash _{J}\lambda \ x\ .\ e:\tau \rightarrow \tau '}}&[{\mathtt {Abs}}]\\\\\displaystyle {\frac {\Gamma \vdash _{J}e_{0}:\tau \quad \quad \Gamma ,\,x:{\bar {\Gamma }}(\tau )\vdash _{J}e_{1}:\tau '}{\Gamma \vdash _{J}{\mathtt {let}}\ x=e_{0}\ {\mathtt {in}}\ e_{1}:\tau '}}&[{\mathtt {Let}}]\end{array}}}$

The presentation of Algorithm J is a misuse of the notation of logical rules, since it includes side effects but allows a direct comparison with ${\displaystyle \vdash _{S}}$ while expressing an efficient implementation at the same time. The rules now specify a procedure with parameters ${\displaystyle \Gamma ,e}$ yielding ${\displaystyle \tau }$ in the conclusion where the execution of the premises proceeds from left to right.

The procedure ${\displaystyle inst(\sigma )}$ specializes the polytype ${\displaystyle \sigma }$ by copying the term and replacing the bound type variables consistently by new monotype variables. '${\displaystyle newvar}$' produces a new monotype variable. Likewise, ${\displaystyle {\bar {\Gamma }}(\tau )}$ has to copy the type, introducing new variables for the quantification to avoid unwanted captures. Overall, the algorithm now proceeds by always making the most general choice, leaving the specialization to the unification, which by itself produces the most general result. As noted above, the final result ${\displaystyle \tau }$ has to be generalized to ${\displaystyle {\bar {\Gamma }}(\tau )}$ in the end, to gain the most general type for a given expression.
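To make the procedure concrete, the following is a compact Python sketch of Algorithm J (all names and the expression encoding are illustrative). Refinable monotype variables are represented by mutable instance links rather than an explicit union-find, and the occurs check is omitted here for brevity:

```python
class TVar:
    """A monotype variable; 'instance' is set once the variable is refined."""
    def __init__(self):
        self.instance = None

class TOp:
    """A type term, e.g. TOp('->', [a, b]) for a function type."""
    def __init__(self, name, args):
        self.name, self.args = name, args

def prune(t):
    # Chase instance links so that earlier refinements become visible.
    while isinstance(t, TVar) and t.instance is not None:
        t = t.instance
    return t

def unify(a, b):
    # Robinson-style unification by refining open variables (no occurs check).
    a, b = prune(a), prune(b)
    if isinstance(a, TVar):
        if a is not b:
            a.instance = b
    elif isinstance(b, TVar):
        b.instance = a
    elif a.name != b.name or len(a.args) != len(b.args):
        raise TypeError("types do not match")
    else:
        for x, y in zip(a.args, b.args):
            unify(x, y)

def free(t):
    t = prune(t)
    if isinstance(t, TVar):
        return {t}
    return set().union(*(free(a) for a in t.args)) if t.args else set()

def generalize(env, t):
    # Quantify the monotype variables of t that are not bound in the context.
    env_free = set()
    for qvars, t2 in env.values():
        env_free |= free(t2) - qvars
    return (free(t) - env_free, t)

def inst(scheme):
    # Copy the type, replacing quantified variables consistently by fresh ones.
    qvars, t = scheme
    mapping = {}
    def copy(t):
        t = prune(t)
        if isinstance(t, TVar):
            if t in qvars:
                if t not in mapping:
                    mapping[t] = TVar()
                return mapping[t]
            return t
        return TOp(t.name, [copy(a) for a in t.args])
    return copy(t)

def infer(env, e):
    # e: ('var', x) | ('app', e0, e1) | ('abs', x, e) | ('let', x, e0, e1)
    tag = e[0]
    if tag == 'var':                                  # [Var]
        return inst(env[e[1]])
    if tag == 'app':                                  # [App]
        t0, t1 = infer(env, e[1]), infer(env, e[2])
        res = TVar()
        unify(t0, TOp('->', [t1, res]))
        return res
    if tag == 'abs':                                  # [Abs]
        tv = TVar()
        t_body = infer({**env, e[1]: (set(), tv)}, e[2])
        return TOp('->', [tv, t_body])
    if tag == 'let':                                  # [Let]
        t0 = infer(env, e[2])
        return infer({**env, e[1]: generalize(env, t0)}, e[3])
```

With a context containing ${\displaystyle true:bool}$ and ${\displaystyle 0:int}$, this sketch infers ${\displaystyle int}$ for an expression that uses a let-bound identity at two different types, while the corresponding ${\displaystyle \lambda }$-abstraction is rejected, matching the let-polymorphism behaviour described earlier.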

Because the procedures used in the algorithm have nearly O(1) cost, the overall cost of the algorithm is close to linear in the size of the expression for which a type is to be inferred. This is in strong contrast to many other attempts to derive type inference algorithms, which often turned out to be NP-hard, if not undecidable with respect to termination. Thus HM performs as well as the best fully informed type-checking algorithms can. Type checking here means that an algorithm does not have to find a proof, but only to validate a given one.

Efficiency is slightly reduced because the binding of type variables in the context has to be maintained to allow computation of ${\displaystyle {\bar {\Gamma }}(\tau )}$ and enable an occurs check to prevent the building of recursive types during ${\displaystyle union(\alpha ,\tau )}$. An example of such a case is ${\displaystyle \lambda \ x.(x\ x)}$, for which no type can be derived using HM. Practically, types are only small terms and do not build up expanding structures. Thus, in complexity analysis, one can treat comparing them as a constant, retaining O(1) costs.
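A minimal sketch of such an occurs check in Python (the representation is illustrative: a variable is a string, a term a tuple of a constructor followed by its parameters):

```python
def occurs(v, t):
    """True if the type variable v occurs in the type term t."""
    if isinstance(t, str):                           # a type variable
        return t == v
    return any(occurs(v, arg) for arg in t[1:])      # t = (constructor, *params)

# Typing \x.(x x) would require unifying 'a' with ('->', 'a', 'b');
# the check fires, so building the recursive type is rejected.
assert occurs('a', ('->', 'a', 'b'))
```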

## Proving the algorithm

In the previous section, while sketching the algorithm, its proof was hinted at with metalogical argumentation. While this leads to an efficient algorithm J, it is not clear whether the algorithm properly reflects the deduction systems D or S, which serve as a semantic baseline.

The most critical point in the above argumentation is the refinement of monotype variables bound by the context. For instance, the algorithm boldly changes the context while inferring e.g. ${\displaystyle \lambda f.(f\ 1)}$, because the monotype variable added to the context for the parameter ${\displaystyle f}$ later needs to be refined to ${\displaystyle int\rightarrow \beta }$ when handling application. The problem is that the deduction rules do not allow such a refinement. Arguing that the refined type could have been added earlier instead of the monotype variable is an expedient at best.

The key to reaching a formally satisfying argument is to properly include the context within the refinement. Formally, typing is compatible with substitution of free type variables.

${\displaystyle \Gamma \vdash _{S}e:\tau \quad \Longrightarrow \quad S\Gamma \vdash _{S}e:S\tau }$

To refine the free variables thus means to refine the whole typing.

### Algorithm W

 Algorithm W ${\displaystyle {\begin{array}{cl}\displaystyle {\frac {x:\sigma \in \Gamma \quad \tau ={\mathit {inst}}(\sigma )}{\Gamma \vdash _{W}x:\tau ,\emptyset }}&[{\mathtt {Var}}]\\\\\displaystyle {\frac {\begin{array}{ll}\Gamma \vdash _{W}e_{0}:\tau _{0},S_{0}&S_{0}\Gamma \vdash _{W}e_{1}:\tau _{1},S_{1}\\\tau '={\mathit {newvar}}&S_{2}={\mathsf {mgu}}(S_{1}\tau _{0},\ \tau _{1}\rightarrow \tau ')\end{array}}{\Gamma \vdash _{W}e_{0}\ e_{1}:S_{2}\tau ',S_{2}S_{1}S_{0}}}&[{\mathtt {App}}]\\\\\displaystyle {\frac {\tau ={\mathit {newvar}}\quad \Gamma ,\;x:\tau \vdash _{W}e:\tau ',S}{\Gamma \vdash _{W}\lambda \ x\ .\ e:S\tau \rightarrow \tau ',S}}&[{\mathtt {Abs}}]\\\\\displaystyle {\frac {\Gamma \vdash _{W}e_{0}:\tau ,S_{0}\quad S_{0}\Gamma ,\,x:{\overline {S_{0}\Gamma }}(\tau )\vdash _{W}e_{1}:\tau ',S_{1}}{\Gamma \vdash _{W}{\mathtt {let}}\ x=e_{0}\ {\mathtt {in}}\ e_{1}:\tau ',S_{1}S_{0}}}&[{\mathtt {Let}}]\end{array}}}$

From there, a proof of algorithm J leads to algorithm W, which only makes the side effects imposed by the procedure ${\displaystyle {\textit {union}}}$ explicit by expressing its serial composition by means of the substitutions ${\displaystyle S_{i}}$. The presentation of algorithm W in the sidebar still makes use of side effects in the operations set in italic, but these are now limited to generating fresh symbols. The form of judgement is ${\displaystyle \Gamma \vdash e:\tau ,S}$, denoting a function with a context and expression as parameter producing a monotype together with a substitution. ${\displaystyle {\textsf {mgu}}}$ is a side-effect free version of ${\displaystyle {\textit {union}}}$ producing a substitution which is the most general unifier.

While algorithm W is normally considered to be the HM algorithm and is often directly presented after the rule system in literature, its purpose is described by Milner[2] on P. 369 as follows:

As it stands, W is hardly an efficient algorithm; substitutions are applied too often. It was formulated to aid the proof of soundness. We now present a simpler algorithm J which simulates W in a precise sense.

While he considered W more complicated and less efficient, he presented it in his publication before J. It has its merits when side effects are unavailable or unwanted. Incidentally, W is also needed to prove completeness, which he factored into the soundness proof.

### Proof obligations

Before formulating the proof obligations, a deviation between the rule systems D and S and the algorithms presented needs to be emphasized.

While the development above sort of misused the monotypes as "open" proof variables, the possibility that proper monotype variables might be harmed was sidestepped by introducing fresh variables and hoping for the best. But there's a catch: One of the promises made was that these fresh variables would be "kept in mind" as such. This promise is not fulfilled by the algorithm.

Having a context ${\displaystyle 1:int,\ f:\alpha }$, the expression ${\displaystyle f\ 1}$ cannot be typed in either ${\displaystyle \vdash _{D}}$ or ${\displaystyle \vdash _{S}}$, but the algorithms come up with the type ${\displaystyle \forall \beta .int\rightarrow \beta }$ after final generalisation, where W additionally delivers the substitution ${\displaystyle \left\{\alpha \mapsto int\rightarrow \beta \right\}}$, meaning that the algorithm fails to detect all type errors. This omission can easily be fixed by more carefully distinguishing proof variables and monotype variables.

The authors were well aware of the problem but decided not to fix it. One might assume a pragmatic reason behind this. While more properly implementing the type inference would have enabled the algorithm to deal with abstract monotypes, they were not needed for the intended application where none of the items in a preexisting context have free variables. In this light, the unneeded complication was dropped in favor of a simpler algorithm. The remaining downside is that the proof of the algorithm with respect to the rule system is less general and can only be made for contexts with ${\displaystyle free(\Gamma )=\emptyset }$ as a side condition.

${\displaystyle {\begin{array}{lll}{\text{(Correctness)}}&\Gamma \vdash _{W}e:\tau ,S&\quad \Longrightarrow \quad \Gamma \vdash _{S}e:\tau \\{\text{(Completeness)}}&\Gamma \vdash _{S}e:\tau &\quad \Longrightarrow \quad \Gamma \vdash _{W}e:\tau ',S\quad \quad {\text{for all}}\ \tau \ {\text{where}}\ {\overline {\emptyset }}(\tau ')\sqsubseteq \tau \end{array}}}$

The side condition in the completeness obligation addresses how the deduction may give many types, while the algorithm always produces one. At the same time, the side condition demands that the type inferred is actually the most general.

To properly prove the obligations one needs to strengthen them first to allow activating the substitution lemma threading the substitution ${\displaystyle S}$ through ${\displaystyle \vdash _{S}}$ and ${\displaystyle \vdash _{W}}$. From there, the proofs are by induction over the expression.

Another proof obligation is the substitution lemma itself, i.e. the substitution of the typing, which finally establishes the all-quantification. The latter cannot formally be proven, since no such syntax is at hand.

## Extensions

### Recursive definitions

To make programming practical, recursive functions are needed. A central property of the lambda calculus is that recursive definitions are not directly available, but can instead be expressed with a fixed point combinator.

Unfortunately, the fixpoint combinator cannot be formulated in a typed version of the lambda calculus without having a disastrous effect on the system as outlined below.

#### Typing rule

The original paper[4] notes that recursion can be realized by a combinator ${\displaystyle {\mathit {fix}}:\forall \alpha .(\alpha \rightarrow \alpha )\rightarrow \alpha }$. A possible recursive definition could thus be formulated as ${\displaystyle {\mathtt {rec}}\ v=e_{1}\ {\mathtt {in}}\ e_{2}\ ::={\mathtt {let}}\ v={\mathit {fix}}(\lambda v.e_{1})\ {\mathtt {in}}\ e_{2}}$.
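The desugaring via ${\displaystyle {\mathit {fix}}}$ can be illustrated directly. The following Python sketch (eta-expanded so that strict evaluation does not unfold forever) defines recursion through an explicit fixed-point combinator:

```python
def fix(f):
    # fix f = f (fix f), eta-expanded for a strict language
    return lambda x: f(fix(f))(x)

# rec fact = \n. ... in fact 5, expressed as let fact = fix (\fact. \n. ...)
fact = fix(lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1))
assert fact(5) == 120
```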

Alternatively an extension of the expression syntax and an extra typing rule is possible:

${\displaystyle \displaystyle {\frac {\Gamma ,\Gamma '\vdash e_{1}:\tau _{1}\quad \dots \quad \Gamma ,\Gamma '\vdash e_{n}:\tau _{n}\quad \Gamma ,\Gamma ''\vdash e:\tau }{\Gamma \ \vdash \ {\mathtt {rec}}\ v_{1}=e_{1}\ {\mathtt {and}}\ \dots \ {\mathtt {and}}\ v_{n}=e_{n}\ {\mathtt {in}}\ e:\tau }}\quad [{\mathtt {Rec}}]}$

where

• ${\displaystyle \Gamma '=v_{1}:\tau _{1},\ \dots ,\ v_{n}:\tau _{n}}$
• ${\displaystyle \Gamma ''=v_{1}:{\bar {\Gamma }}(\ \tau _{1}\ ),\ \dots ,\ v_{n}:{\bar {\Gamma }}(\ \tau _{n}\ )}$

basically merging ${\displaystyle [{\mathtt {Abs}}]}$ and ${\displaystyle [{\mathtt {Let}}]}$ while including the recursively defined variables in monotype positions where they occur to the left of the ${\displaystyle {\mathtt {in}}}$ but as polytypes to the right of it. This formulation perhaps best summarizes the essence of let-polymorphism.

#### Consequences

While the above is straightforward it does come at a price.

Type theory connects lambda calculus computation and logic. The easy modification above has effects on both:

• The strong normalisation property is spoiled.
• The logic collapses.

Programs in the simply typed lambda calculus are guaranteed to always terminate. Moreover, they are even guaranteed to terminate under any evaluation strategy, be it top down, bottom up, or breadth first. The same is true for expressions that have types in HM. Separating terminating from non-terminating programs is undecidable in general, especially in lambda calculus, which is so expressive that it can formulate recursion with just a few symbols. Thus the initial inability of HM to provide recursive functions was not an omission, but a feature. Adding recursion enables normal programming, but the guarantee is no longer valid.

Another reading of the typing is given by the Curry–Howard isomorphism. Here the types are interpreted as logical expressions. Let's look at the type of the fixpoint combinator from this perspective, assuming the variables to have logical value:

${\displaystyle (\alpha \rightarrow \alpha )\rightarrow \alpha \quad \Leftrightarrow \quad {\textsf {true}}\rightarrow \alpha \quad \Leftrightarrow \quad \neg {\textsf {true}}\vee \alpha \quad \Leftrightarrow \quad \alpha }$

This is invalid. Adding an invalid axiom will break the logic in the sense that every formula can then be shown to be true in it, e.g. ${\displaystyle {\text{true}}\Leftrightarrow {\text{false}}}$. Thus the ability to distinguish even two simple things is no longer given. Everything is the same and collapses into 42. The fixpoint combinator that came in so handy above also plays a role in Curry's paradox.

Logic aside, does this matter for typing programs? It does. Since one is now able to formulate non-terminating functions, one can make a function that would return whatever one wants[note 4] but never really returns:

${\displaystyle {\text{fix}}\ {\text{id}}:\forall a.a}$.

In practical programming such a function can come in handy when breaking out of a computation, like with exit(1) in C, while silencing the type checker in the current branch by returning essentially nothing but with a suitable type.

Less desirable is that the type checker (type inferencer) now succeeds with a type for a function that in fact never returns any value, like ${\displaystyle {\text{fix}}\ {\text{id}}}$. The function would return a value of this type, but it cannot because no terminating function with this type exists. The type checker's claim that everything is ok thus has to be taken with a grain of salt. The types might only be claimed to be checked, but the program can still be typed wrong. Only if all functions are terminating does ${\displaystyle \alpha }$ in the logic above have a true value, and the assertions of the type checker become strong again.

### Overloading

Overloading means that different functions can be defined and used with the same name. Most programming languages at least provide overloading for the built-in arithmetic operations (+, <, etc.), to allow the programmer to write arithmetic expressions in the same form even for different numerical types like int or real. Because a mixture of these different types within the same expression also demands implicit conversion, overloading especially of these operations is often built into the programming language itself. In some languages, this feature is generalized and made available to the user, e.g. in C++.

While ad-hoc overloading has been avoided in functional programming because of its computational costs in both type checking and inference,[citation needed] a means to systematise overloading has been introduced that resembles object-oriented programming in both form and naming, but works one level upwards: "instances" in this systematics are not objects (i.e. on the value level), but rather types. The quicksort example mentioned in the introduction uses overloading in the orders, having the following type annotation in Haskell:

quickSort :: Ord a => [a] -> [a]


Herein, the type a is not only polymorphic, but also restricted to be an instance of some type class Ord, which provides the order predicates < and >= used in the function's body. The proper implementations of these predicates are then passed to quickSort as additional parameters as soon as quickSort is used on more concrete types, providing a single implementation of the overloaded function quickSort.
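The dictionary-passing translation described above can be sketched in Python (the names and the dictionary layout are illustrative, not Haskell's actual implementation): the Ord a => constraint becomes an extra parameter carrying the comparison operations.

```python
def quick_sort(ord_dict, xs):
    """quickSort with the 'Ord a' instance passed explicitly as a dictionary."""
    if len(xs) <= 1:
        return list(xs)
    lt = ord_dict['lt']                           # the (<) method of the instance
    pivot, rest = xs[0], xs[1:]
    below = [x for x in rest if lt(x, pivot)]
    above = [x for x in rest if not lt(x, pivot)]
    return quick_sort(ord_dict, below) + [pivot] + quick_sort(ord_dict, above)

ord_int = {'lt': lambda a, b: a < b}              # the "Ord Int" instance
assert quick_sort(ord_int, [3, 1, 2]) == [1, 2, 3]
```

Another "instance" dictionary, e.g. one whose lt reverses the comparison, makes the same code sort in descending order, which is exactly the single-implementation overloading the text describes.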

Because the "classes" only allow a single type as their argument, the resulting type system can still provide inference. Additionally, the type classes can then be equipped with some kind of overloading order allowing one to arrange the classes as a lattice.

### Meta types

Parametric polymorphism implies that types themselves are passed as parameters as if they were proper values. Passing them as arguments to proper functions, as in the introduction, but also to "type functions", as in the "parametric" type constants, leads to the question of how to more properly type types themselves. A meta type,[note 5] the "type of types", would be useful to create an even more expressive type system.

Though this would be a straightforward extension, unfortunately unification is no longer decidable in the presence of meta types, rendering type inference impossible at this level of generality. Additionally, assuming a type of all types that includes itself as a type leads to a paradox, as in the set of all sets, so one must proceed in steps of levels of abstraction. Research in second-order lambda calculus, one step upwards, showed that type inference is undecidable in this generality.

Parts of one extra level, named kinds, have been introduced into Haskell, where they are used to help type monads. Kinds are left implicit, working behind the scenes in the inner mechanics of the extended type system.

### Subtyping

Attempts to combine subtyping and type inference have caused quite some frustration. While type inference is needed in object-oriented programming for the same reason as in functional languages, methods like HM cannot be made to work for this purpose.[citation needed] It is not difficult to set up a type system with subtyping enabling an object-oriented style, as e.g. Cardelli[7] shows in his system ${\displaystyle F_{<:}}$.

• The type equivalence can be broken up into a subtyping relation "<:".
• Extra type rules define this relation e.g. for the functions.
• A suitable record type is then added whose values represent the objects.

Such objects would be immutable in a functional language context, but the type system would enable an object-oriented programming style, and the type inference method could be reused in imperative languages.

The subtyping rule for the record types is:

${\displaystyle \displaystyle {\frac {n\leq m\quad \quad \tau _{1}<:\tau _{1}'\quad \dots \quad \tau _{n}<:\tau _{n}'}{\Gamma \ \vdash \ \mathbf {Record} \ v_{1}:\tau _{1}\ \dots \ v_{m}:\tau _{m}\ \mathbf {end} <:\mathbf {Record} \ v_{1}:\tau _{1}'\ \dots \ v_{n}:\tau _{n}'\ \mathbf {end} }}\quad [{\mathtt {Record}}]}$

Syntactically, record expressions would have the form

${\displaystyle \mathbf {record} \ v_{1}=e_{1},\ \dots \ \mathbf {end} }$

and have a type rule leading to the above type. Such record values could then be used in the same way as objects in object-oriented programming.
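A record subtype check along these lines can be sketched in Python (the representation is illustrative: a record type is a dict from field names to types), following the width-subtyping convention that a record with more fields is a subtype of one with fewer:

```python
def is_subtype(a, b):
    """True if type a <: type b."""
    if isinstance(a, dict) and isinstance(b, dict):
        # width: a may have extra fields; depth: shared fields must be subtypes
        return all(k in a and is_subtype(a[k], b[k]) for k in b)
    return a == b                  # base types: only reflexivity in this sketch

point3d = {'x': 'int', 'y': 'int', 'z': 'int'}
point2d = {'x': 'int', 'y': 'int'}
assert is_subtype(point3d, point2d)        # the extra field 'z' is forgotten
assert not is_subtype(point2d, point3d)
```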

## Notes

1. ^ Hindley–Milner is DEXPTIME-complete. Non-linear behaviour does manifest itself, yet only on pathological inputs. Thus the complexity theoretic proofs by (Mairson 1990) and (Kfoury, Tiuryn & Urzyczyn 1990) came as a surprise to the research community. When the depth of nested let-bindings is bounded—which is the case in realistic programs—Hindley–Milner type inference becomes polynomial.
2. ^ Polytypes are called "type schemes" in the original article.
3. ^ The parametric types ${\displaystyle C\ \tau \dots \tau }$ were not present in the original paper on HM and are not needed to present the method. None of the inference rules takes care of, or even notes, them. The same holds for the non-parametric "primitive types" in said paper. All the machinery for polymorphic type inference can be defined without them. They have been included here for the sake of examples, but also because the nature of HM is all about parametric types. This comes from the function type ${\displaystyle \tau \rightarrow \tau }$, hard-wired in the inference rules, which already has two parameters and has been presented here as only a special case.
4. ^ To make a function that works just like Aladdin's wonderful lamp, one can define: ${\displaystyle {\text{iwisha}}\equiv {\text{fix id}}}$ and formulate the wish in a slightly more expressive type language like: "(iwisha (flying carpet))" passing the type explicitly as in the introduction. When the function returns, it delivers the wish, but no longer as a type (i.e. specification), but as something real, a value of that type. This implementation at least does not return. Many have dreamed of such a useful device, and if it exists it could do wonders even if only in computing. The type ${\displaystyle \forall \alpha .\alpha }$ is so general, that its values belong to the realm of imagination.
5. ^ The term "higher-order" has been so overused, that "meta type" is preferred here.

## References

1. ^ Hindley, J. Roger (1969). "The Principal Type-Scheme of an Object in Combinatory Logic". Transactions of the American Mathematical Society. 146: 29–60. doi:10.2307/1995158. JSTOR 1995158.
2. ^ a b c Milner, Robin (1978). "A Theory of Type Polymorphism in Programming". Journal of Computer and System Science (JCSS). 17: 348–374. doi:10.1016/0022-0000(78)90014-4.
3. ^ Damas, Luis (1985). Type Assignment in Programming Languages (PhD thesis). University of Edinburgh (CST-33-85).
4. ^ a b Damas, Luis; Milner, Robin (1982). Principal type-schemes for functional programs (PDF). 9th Symposium on Principles of programming languages (POPL'82). ACM. pp. 207–212.
5. ^ Clement (1986). A Simple Applicative Language: Mini-ML. LFP'86. ACM.
6. ^ Vaughan, Jeff (July 23, 2008) [May 5, 2005]. "A proof of correctness for the Hindley–Milner type inference algorithm" (PDF). Archived from the original (PDF) on 2012-03-24.
7. ^ Cardelli, Luca; Martini, Simone; Mitchell, John C.; Scedrov, Andre (1994). "An extension of system F with subtyping". Information and Computation. 109 (1–2): 4–56. doi:10.1006/inco.1994.1013.
• Mairson, Harry G. (1990). "Deciding ML typability is complete for deterministic exponential time". Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on Principles of programming languages. POPL '90. ACM: 382–401. doi:10.1145/96709.96748. ISBN 0-89791-343-4.
• Kfoury, A. J.; Tiuryn, J.; Urzyczyn, P. (1990). "ML typability is dexptime-complete". Lecture Notes in Computer Science. CAAP '90. 431: 206–220. doi:10.1007/3-540-52590-4_50.