Value restriction #3

@gatpsilva

FreeST implements the infamous "value restriction", which has long been present in ML-like languages. In Polymorphic lambda calculus with context-free session types, which formalizes FreeST 2, Almeida et al. write:

The body of a type abstraction is restricted to a syntactic value as customary. This value restriction guarantees the sound interplay of call-by-value evaluation and polymorphism in the presence of effects like channel-based communication

The standard example of unsound interplay of these features is the following ML program:

let val c = ref (fn x => x)   (* naively, c : ('a -> 'a) ref *)
in c := (fn x => 1+x);        (* specializes 'a to int *)
   !c true                    (* specializes 'a to bool: evaluates 1 + true *)
end

A naive typechecker would consider this program typable: c is a polymorphic reference holding functions of type a -> a, and both assignment and dereferencing specialize this polymorphic type to a valid monomorphic one (int -> int and bool -> bool). However, it is clear that this program goes wrong: !c true evaluates to 1 + true, which is nonsense (in most sane languages anyway...). Several solutions have been proposed to correct this problem, but the value restriction is perhaps the simplest and most widely adopted (SML, OCaml, etc.). This restriction only allows polymorphism in let bindings in which the right-hand side is a syntactic value (cf. TAPL §22.7; more precisely, it only allows type inference to generalize free type variables in let-bound values).

As such, the type of c above is first assumed to be ref (a -> a) (with a free, awaiting unification), and then specialized once-and-for-all at its first occurrence; in this case, to ref (int -> int) at c := (fn x => 1+x). Thus, the typechecker will consider expression !c true to be ill-typed, and all is good.
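OCaml implements exactly this discipline, so the once-and-for-all specialization is easy to reproduce there. A small sketch of mine (not from any of the cited papers):

```ocaml
(* Under the value restriction, [ref (fun x -> x)] is not a syntactic
   value (it is an application), so its type is not generalized:
   OCaml reports the weak type ('_weak1 -> '_weak1) ref. *)
let c = ref (fun x -> x)

(* The first use fixes the weak variable once and for all:
   c : (int -> int) ref from here on. *)
let () = c := (fun x -> 1 + x)

(* [!c true] would now be rejected by the typechecker:
   "This expression has type bool but an expression was expected
   of type int". The monomorphic use below is fine. *)
let () = assert (!c 2 = 3)
```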

The simplicity of this approach comes at the cost of some expressivity: polymorphic bindings cannot perform computations. As a corollary, this reduces the possibility of performing eta-reduction, which severely limits a tacit or point-free style of programming (which boils down to using combinators as much as possible). At the time when this restriction was introduced in ML, this did not seem like a big deal: a large corpus of ML programs was analyzed, and only very few needed to be corrected (mostly through eta-expansion). The ML community has largely accepted this. FreeST, however, is heavily inspired by Haskell, where this style is sometimes encouraged. Additionally, FreeST makes heavy use of polymorphism in channel continuations, and the restriction would preclude the point-free style in most session-typed code, which could be an interesting thing to explore in the future (when we finally sort out function composition...). I should also disclose that I personally really enjoy this style of programming, as it was one of the early motivations for functional programming. Fortunately, I think FreeST can support it without restrictions!
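To make the eta-expansion workaround concrete, here is an OCaml sketch of mine: a point-free binding gets a weak (ungeneralized) type, while its eta-expanded form is a syntactic value and generalizes as usual.

```ocaml
(* Point-free style runs afoul of the value restriction: a partial
   application is not a syntactic value, so this binding gets a weak
   type, usable at only one type in the whole program. *)
let rev_all_weak = List.map List.rev

(* Eta-expansion turns it into a lambda, i.e. a syntactic value,
   which the value restriction does allow to generalize:
   rev_all : 'a list list -> 'a list list. *)
let rev_all l = List.map List.rev l

(* The eta-expanded version can be used at several types: *)
let () = assert (rev_all [[1; 2]; [3]] = [[2; 1]; [3]])
let () = assert (rev_all [["a"; "b"]] = [["b"; "a"]])

(* rev_all_weak may be used at only one of those types: *)
let () = assert (rev_all_weak [[1; 2]] = [[2; 1]])
```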

What I'd like to point out with this issue is that the "unsound interplay" mentioned by Almeida et al. does not simply arise out of the combination of call-by-value evaluation, polymorphism, and imperative features. It is a result of the semantics chosen for generalization and specialization, which together introduce polymorphism. In ML, generalization is performed by let bindings, and specialization is done once-and-for-all by referencing the bound variable. In FreeST, generalization is performed by type abstractions which suspend the underlying computation, and specialization by type applications which execute the abstracted computation. As it turns out, the former approach does not interact well with imperative features, but the latter does!
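The by-name semantics can be simulated even in OCaml by suspending the computation under a lambda, which is how I like to picture why it is safe. This is my own sketch (thunks standing in for type abstractions, not FreeST syntax):

```ocaml
(* Polymorphism by-name, simulated with a thunk: generalization
   suspends the computation under a lambda (a syntactic value, so its
   type generalizes), and every specialization re-runs it. *)
let c : unit -> ('a -> 'a) ref = fun () -> ref (fun x -> x)

(* Each "type application" re-executes the suspended allocation, so
   the two instances below are distinct references and cannot
   interfere with each other. *)
let r_int  : (int  -> int)  ref = c ()
let r_bool : (bool -> bool) ref = c ()

let () = r_int := (fun x -> 1 + x)
let () = assert (!r_int 2 = 3)
let () = assert (!r_bool true = true)  (* still the identity *)
```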

This observation was folklore for some time. In a footnote in Typeful Programming, Cardelli writes:

The problems encountered in ML [Gordon Milner Wadsworth 79, page 52] are avoided by the use of explicit polymorphism.

Similarly, in Explicit Polymorphism and CPS conversion, Harper and Lillibridge write:

The “standard” strategies treat constructor abstractions as values and constructor applications as significant computation steps. [...] Since polymorphic expressions are kept distinct from their instances, the anomalies that arise in implicitly polymorphic languages in the presence of references [45] and control operators [18] do not occur.

I believe the first rigorous treatment of these two semantics and their impact on imperative features is Xavier Leroy's Polymorphism by name for references and continuations. I think this paper does much to clarify these issues. Firstly, it gives a formal statement of the observations by Cardelli, Harper, et al.; secondly, it shows that it is actually not a matter of polymorphism being implicit or explicit, but rather of being by-value (specialize once-and-for-all) or by-name (suspend generalized computation, (re-)execute at every specialization). Thus we can classify ML polymorphism as implicit and by-value, and FreeST polymorphism as explicit and by-name (these two combinations typically go together, but Leroy also shows how to achieve implicit by-name polymorphism).

All this to say: since FreeST has polymorphism by-name, I don't think we need the value restriction!
