Part 2 in a series. Part 1 began with a hypothetical discussion of an ML-like language with linear types.
So in my previous post, I used the REPL of an "Imaginary Future Linear ML." In the month since, I've learned that there are no fewer than three different proposals for an Imaginary Future Linear ML. Two of them, Alms and LinearML, are implemented systems that you can presumably download and run on the device you're using to read this very blog post!1 The third, F-Pop by Mazurak et al., is, I believe, less seriously implemented, but it is formalized in Coq (which is also awesome: three cheers for mechanized metatheory!).
I was actually pleased by how much I "guessed correctly" in my discussion. For instance, Alms is a serious implementation of a language with affine, not linear, types: if you recall, affine functions can't use their arguments twice, but they are allowed to not use them at all. One argument from the Alms designers, which I had not considered, is that requiring a linear type discipline - where every variable gets used exactly once along every "control path" - can make it rather difficult to work with handled exceptions.
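Here's my loose reconstruction of that argument, sketched in the same imaginary-linear-ML style as Part 1 (the file-handle type and its operations are hypothetical, not taken from Alms):

```ocaml
(* A linear discipline demands that the handle be consumed exactly
   once on every control path. But if write can raise, the path
   through the handler never receives the new handle: *)
let h = open_file "log.txt" in
try
  let h = write h "entry" in  (* consumes h, returns a fresh h *)
  close h                     (* normal path: used exactly once, fine *)
with Io_error ->
  ()  (* exceptional path: write consumed h, but the fresh handle was
         lost to the exception, so "exactly once" can't be satisfied.
         An affine discipline, which permits zero uses, can simply
         let the handle be dropped here. *)
```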
Another thing that I "guessed" was in Take 3, when I suggested that we might want some syntactic sugar to make code look more like "methods": the print function takes a string and a linear reference to an output stream and returns a new linear reference to that stream, so why not have syntax that supports writing (print "Hello World" outstream) instead of (let outstream = print "Hello World" outstream)? The F-Pop paper explicitly discusses this kind of notation in Section 5.2. I like that this is something they considered, though I'm still a little unsure how I feel about it "in practice." What it captures is a common intuition: an argument passed by reference is both an input and an output. But how much of this intuition is fundamental, and how much is path dependence? We happened to start out with languages like C and Java that force functions to have only one "official" output, so are we just used to being forced into this unnatural design pattern?
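Concretely, here is the difference the sugar makes, again in the imaginary language's syntax rather than any real implementation (print, close, and outstream are the hypothetical names from the earlier posts):

```ocaml
(* explicit threading: each call consumes the old outstream,
   and the let rebinds the name to the new one *)
let outstream = print "Hello, " outstream in
let outstream = print "World"  outstream in
close outstream

(* with the method-like sugar, the rebinding is implicit:
   outstream is silently both an input and an output *)
print "Hello, " outstream;
print "World"  outstream;
close outstream
```

The sugared version reads like ordinary imperative I/O, which is exactly the point - and exactly why it's worth asking whether that familiarity is a feature or an accident of history.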
1 Caveat 1: well, as long as you're not using iOS. (I just got an iPad. I am not proud of this but yes, it is shiny.) Caveat 2: I couldn't get either Alms or LinearML to run on my laptop.