Monday, October 10, 2011

Feeble Typing (a thought on Dart)


Update: A Word About The Title. If you came here from Twitter, the line I used - "not unsound, incomplete" - referred to the original title of the article, "Incomplete by design," which was based on my misunderstanding of the dominant terminology in static analysis (see updates, and thanks David and Sam in the comments for pointing out my wrongness). When I realized this, I renamed the article "Deliberate decisions," but that wasn't an interesting title. When I read Rafaël Garcia-Suarez's take on Dart this morning, Why Dart is not the language of the future, I decided to support his proposal of calling types that are a "lint-type development aid, not a language feature" feeble typing, and renamed the article again. You should read Rafaël's post and this one; together they are possibly the most agreement you will ever find between a Perl blogger and a Carnegie Mellon University programming languages graduate student.

...anyway...


There's a lot of feedback, and a non-trivial amount of snark, going around the internet in response to the release of Dart, a proposed Javascript-killer by Google. My primary experience with Javascript is that people tell me it's the worst compiler target language that is widely used as a compiler target language, so I basically have nothing invested in the language, but I was interested in the discussions it brought up.

The snark about Dart has centered around the following line on Page 72 of the language spec, which I believe was pointed out to the Twitterverse by Debasish Ghosh.
The type system is unsound, due to the covariance of generic types. This is a deliberate choice (and undoubtedly controversial). Experience has shown that sound type rules for generics fly in the face of programmer intuition. It is easy for tools to provide a sound type analysis if they choose, which may be useful for tasks like refactoring.
But what does it mean for a type system to be unsound? I really think that the most illustrative snippet about Dart types was not the one that Debasish retweeted, but one that came on the next page of the language spec:
A Dart implementation must provide a static checker that detects and reports exactly those situations this specification identifies as static warnings. However:
  • Running the static checker on a program P is not required for compiling and running P.
  • Running the static checker on a program P must not prevent successful compilation of P nor may it prevent the execution of P, regardless of whether any static warnings occur
This, for me, clarified what was going on substantially. Let me tell you a parable.

How to anger students in an undergraduate PL course


In four easy steps!

Step 1


Tell students to implement the following dynamic semantics of a programming language. Here's an example of a very simple language:

\[\infer {{\tt if}~e~{\tt then}~e_1~{\tt else}~e_2 \mapsto {\tt if}~e'~{\tt then}~e_1~{\tt else}~e_2 } {e \mapsto e'} \] \[\infer {{\tt if}~{\tt true}~{\tt then}~e_1~{\tt else}~e_2 \mapsto e_1} {} \qquad \infer {{\tt if}~{\tt false}~{\tt then}~e_1~{\tt else}~e_2 \mapsto e_2} {} \] \[ \infer {e_1 + e_2 \mapsto e_1' + e_2} {e_1 \mapsto e_1'} \qquad \infer {{\tt num}(n_1) + e_2 \mapsto {\tt num}(n_1) + e_2'} {e_2 \mapsto e_2'} \] \[ \infer {{\tt num}(n_1) + {\tt num}(n_2) \mapsto {\tt num}(n_1 + n_2)} {} \]

The implementation was to be an ML function step with type expr -> expr option, and the specification was that step e = SOME e' if there existed an e' such that e \(\mapsto\) e', and that step e = NONE otherwise (for instance, \(\tt true\) obviously can't take a step according to these rules).
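Here, for concreteness, is a minimal sketch of what such an implementation might look like in Standard ML. The datatype and constructor names (expr, True, False, Num, If, Plus) are hypothetical - my own choices for illustration, not anything from an actual assignment handout.

    datatype expr =
        True
      | False
      | Num of int
      | If of expr * expr * expr
      | Plus of expr * expr

    (* step e = SOME e' if e can take a step to e' by the rules above,
       and step e = NONE if no rule applies *)
    fun step e =
      case e of
        If (True, e1, _) => SOME e1
      | If (False, _, e2) => SOME e2
      | If (e0, e1, e2) =>
          (case step e0 of
             SOME e0' => SOME (If (e0', e1, e2))
           | NONE => NONE)
      | Plus (Num n1, Num n2) => SOME (Num (n1 + n2))
      | Plus (Num n1, e2) =>
          (case step e2 of
             SOME e2' => SOME (Plus (Num n1, e2'))
           | NONE => NONE)
      | Plus (e1, e2) =>
          (case step e1 of
             SOME e1' => SOME (Plus (e1', e2))
           | NONE => NONE)
      | _ => NONE   (* true, false, and num(n) are values: no step *)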

Step 2


Describe how they can type-check the language by defining a type system like this. Have them implement this type checker as an ML function with type expr -> typ option, same idea (a sketch follows the rules below).

\[ \infer {{\tt true} : {\tt bool}} {} \qquad \infer {{\tt false} : {\tt bool}} {} \qquad \infer {{\tt if}~e~{\tt then}~e_1~{\tt else}~e_2 : \tau} {e : {\tt bool} & e_1 : \tau & e_2 : \tau} \] \[ \infer {{\tt num}(n) : {\tt number}} {} \qquad \infer {e_1 + e_2 : {\tt number}} {e_1 : {\tt number} & e_2 : {\tt number}} \]
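Continuing the sketch (still hedged: the typ datatype below and its constructor names are invented for illustration, and it reuses the hypothetical expr datatype from Step 1):

    datatype typ = Bool | Number

    (* typecheck e = SOME t if e : t by the rules above, NONE otherwise *)
    fun typecheck e =
      case e of
        True => SOME Bool
      | False => SOME Bool
      | Num _ => SOME Number
      | If (e0, e1, e2) =>
          (case (typecheck e0, typecheck e1, typecheck e2) of
             (SOME Bool, SOME t1, SOME t2) =>
               if t1 = t2 then SOME t1 else NONE
           | _ => NONE)
      | Plus (e1, e2) =>
          (case (typecheck e1, typecheck e2) of
             (SOME Number, SOME Number) => SOME Number
           | _ => NONE)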

Step 3


Have students prove the theorem that this type system does something. The theorem statement goes as follows, and the proof is by the by-now standard technique of safety-via-progress-and-preservation.
Theorem (Safety): If \(e : \tau\) and \(e \mapsto \ldots \mapsto e'\), then \(e' : \tau\), and also either there exists some \(e''\) such that \(e' \mapsto e''\) or else \(e'\) is of the form \(\tt true\), \(\tt false\), or \({\tt num}(n)\).
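If it helps to make the multi-step relation \(e \mapsto \ldots \mapsto e'\) concrete: iterating the hypothetical step function from the Step 1 sketch gives a little evaluator, and the safety theorem promises that, started on a well-typed expression, this loop can only ever halt at \(\tt true\), \(\tt false\), or \({\tt num}(n)\) - never at a stuck expression.

    (* run the step function until no rule applies; safety says that for
       well-typed inputs the result is always a value *)
    fun eval e =
      case step e of
        NONE => e
      | SOME e' => eval e'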

Step 4


Test their ML code from Step 1 on expressions like if 4 then true else 9, breaking many of the students' implementations of the dynamic semantics, which were prepared only to handle well-typed inputs.
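In terms of the hypothetical abstract syntax from the Step 1 sketch, the test case looks like this. The specification demands NONE (no rule lets a num appear as the guard of an if), but an implementation written with only well-typed inputs in mind might raise an exception here instead - or cheerfully return SOME (Num 9).

    (* if 4 then true else 9, in the hypothetical abstract syntax *)
    val nasty = If (Num 4, True, Num 9)

    (* the Step 1 specification says this should be NONE *)
    val result = step nasty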

Analysis: is this fair?


Think about the perspective of the student who complained that their interpreter either crashed (or maybe returned SOME(Num 9)!) after being handed if 4 then true else 9. On one hand, they clearly violated the spirit of the assignment: Step 1 was a perfectly well-defined assignment all on its own, and they didn't fulfill the specification of that particular assignment. But on the other hand, they proved the theorem in Step 3, and perhaps feel as if they should get something out of having proved it: namely, the ability to reason only about the behavior of well-typed programs. Why should it be surprising that garbage in produced garbage out?

Compiler writers actually get to think like that; indeed it's almost essential that they be allowed to. On a 32-bit machine, most values are compiled to plain ol' 32-bit words, and so \(\tt false\) might have the same in-memory representation as, say, \({\tt num}(0)\). Or it might have the same in-memory representation as \({\tt num}(1)\)! It doesn't matter, because, for the compiler writer, the safety theorem has already given a guarantee that the language has canonical forms - that if \(e : {\tt bool}\) and \(e\) eventually steps to an irreducible expression, then that irreducible expression must either be \(\tt true\) or \(\tt false\).
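As a toy illustration (emphatically not Dart, and not any real compiler - just the hypothetical little language from the parable): imagine a backend that maps every value onto a bare 32-bit word, with no tags. The overlap between \(\tt false\) and \({\tt num}(0)\) is harmless precisely because canonical forms promise that a well-typed \(\tt bool\) only ever evaluates to \(\tt true\) or \(\tt false\).

    (* untagged "representation" of values as 32-bit words *)
    fun represent True = 0w1 : Word32.word
      | represent False = 0w0                 (* the same bits as num(0)... *)
      | represent (Num n) = Word32.fromInt n  (* ...but canonical forms say a
                                                 well-typed bool is never a num *)
      | represent _ = raise Fail "only values have word representations"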

This means that the compiler writer need not worry about how if 1 then true else 9 and if 0 then true else 9 might behave - they may raise an exception, return (the memory representation of) true, or return (the memory representation of) false. The only programs upon which the compiler promises to behave the same way as the language definition are those that pass the typechecker, and the type safety theorem is a critical component of that promise.

In this way of looking at the world, the representation independence given by a type system is really quite important, and it means that an unsound type system could cause very very bad things to happen: if you're allowed to mess around with the representation of, say, a pointer, by adding things to it, then you have introduced buffer overflow errors to a language, which would be pretty awful if you allowed code written in this language to execute in a privileged way in your browser. That (hopefully!) is not at all what Dart means when they say their programming language is unsound.

Theorems versus bugs


I think the problem here is that, as a broad overgeneralization, there are two ways to look at what people are doing with types in the first place. On one hand, there is the view that types are a tool to provably preclude certain classes of errors - like the possibility that you might end up with the expression if 1 then true else 9, which is "stuck" according to the defined operational semantics. On the other hand, there is the idea that types are a language feature aimed at helping document code and reduce the number of bugs in a program without necessarily precluding any particular errors. In the academic circles I travel in, "type system" is understood in the former, technical sense: if you can't prove a theorem about what kind of errors your type system precludes, then it is something else besides a type system. I think it's quite fair to both type-systems researchers and static-analysis researchers to call the latter notion of types a simple form of static analysis.

[Updated] There's nothing wrong, per se, with such a static analysis, though I think it's fair to call it an unsound static analysis instead of an unsound type system. To use the language of Ciera Jaspan's recent thesis, start with a particular class of error (gets-stuck, raises-a-certain-exception, divides by zero, whatever). An analysis is sound if it never passes a program with a particular sort of error (permitting a safety theorem about that analysis!) and complete if it fails only programs that will actually manifest the error at runtime.1 A sound but incomplete analysis is called conservative; the type checkers of ML and Java represent such analyses. An analysis that is neither sound nor complete is called pragmatic by Jaspan, as there aren't any theorems to be proved about such an analysis: they can be justified only by their utility in practice.

I can certainly think of situations where I'd want a pragmatic analysis. In the past, I have had occasion to write Python, which I will admit I have a certain fondness for. However, I have also complained about how my complicated Python program ran for half an hour and then failed with some cast exception that, upon inspection of the code, was always, statically, evidently going to happen no matter what the complicated bits of the code did - and why couldn't it have warned me that it was going to do that before running for half an hour? Even if I implemented an analysis to generate such a warning, Python is and would remain a (hopefully) safe, strongly-typed programming language with exactly one type - the type of tagged data that causes an exception if you try to use an object as an integer (or whatever). The static analysis is trying to prove a different kind of theorem - one that says "you have indicated that raising ClassCastException is undesirable, and here's a proof that your current program will raise ClassCastException". If the static analysis can prove that theorem, it has demonstrated a bug; if it can't, I'm no worse off than I was when I used Python without that static analysis. A type safety theorem, however, would have the form "if the program passes the typechecker, then ClassCastException will not be raised."

And with my current understanding, the "unsound type system" of Dart is just such a "pragmatic analysis" as described by Jaspan. I hope my examples explain what might still be wrong with such a language - if you can't statically preclude certain classes of errors, you must either allow "unspecified behavior" (and that way lies buffer overruns and security violations) or else you must be able and willing to check, at execution time, for the occurrence of those errors, which is not efficient (and for some classes of errors may be impossible). You're back in the world of our hypothetical angry student: you've got to be able to handle all the ill-formed programs and obey the language definition on a wider class of programs.
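To make the "check at execution time" branch of that dilemma concrete, here is a hedged sketch, reusing the hypothetical expr datatype from the parable: a version of the step function that, instead of silently getting stuck, detects the error when it actually occurs and reports it. Making this work requires the runtime to keep enough information around to tell booleans from numbers - exactly the kind of tagging and checking that a type safety theorem lets a compiler leave out.

    exception Stuck of string

    (* like step, but reports ill-formed programs instead of getting stuck *)
    fun checkedStep e =
      case e of
        If (True, e1, _) => SOME e1
      | If (False, _, e2) => SOME e2
      | If (e0, e1, e2) =>
          (case checkedStep e0 of
             SOME e0' => SOME (If (e0', e1, e2))
           | NONE => raise Stuck "if-guard is not a boolean")
      | Plus (Num n1, Num n2) => SOME (Num (n1 + n2))
      | Plus (Num n1, e2) =>
          (case checkedStep e2 of
             SOME e2' => SOME (Plus (Num n1, e2'))
           | NONE => raise Stuck "operand of + is not a number")
      | Plus (e1, e2) =>
          (case checkedStep e1 of
             SOME e1' => SOME (Plus (e1', e2))
           | NONE => raise Stuck "operand of + is not a number")
      | _ => NONE   (* values still take no step *)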

[Updated] ...you could argue, of course, that you're no worse off than you were when you finished Step 1. On some level, you're certainly right; my hypothetical Python+tool-to-catch-a-couple-of-errors is better, in my humble opinion, than Python without it (and this view has the virtue of honesty). If you want to make that argument, however, I encourage you to read Chung-chieh Shan's related blog post about covariant generics, which argues from a less compiler-optimization-oriented, more philosophical point of view, one that I also find quite convincing. The point he makes is that the traditional view of types is important because types should mean things - unless you want to choke on pencils!

Conclusion


In summary: type systems are useful because of type safety theorems - a type safety theorem means that certain things can't happen. One reason this is nice is that the compiler writer, the programmer, and the language designer needn't worry about what would happen if such an impossible thing happened.

Types are also a form of documentation, and they're a regular form of documentation that a compiler can then act on, generating warnings or errors about certain classes of bugs without actually promising to preclude those bugs. A static analysis that uses type information to preclude some (but not all) errors of a particular type is probably better termed a "pragmatic type-based analysis" than an "unsound type system." Garcia-Suarez called it "feeble typing" and I renamed my post accordingly. It's a bit more pejorative than the tone I was originally trying to take in the article, but I like it anyway.

I think that's a fair way of looking at things, and it places decidable type systems, on a practical level, within a larger class of type-based static analyses.2 For a type system, we must be able to ask what sort of theorem is proved about programs that are well-typed, and if there's no such theorem, then the analysis is still within the class of type-based static analyses, but isn't so much a type system. At the end of the day, of course, English has no central authority, so asking people to distinguish "type-based analyses" from "type systems" may be a fool's errand,3 but I think it's worthwhile to delineate the difference, and I don't think my delineation significantly departs from current usage (apart from "feeble typing" which was, I think, coined the day after I originally posted this).

Something I started off wanting to talk about before this post got too long was why it is the case that "sound type rules for generics fly in the face of programmer intuition" (which is pretty clearly, in my opinion, missing the addendum "in languages with subtyping"), because two of the reasons why I think this is the case are quite interesting on their own. One of them has to do with polynomial data types and persistent data, and the other has to do with base types and refinement types as explored by William Lovas in his thesis. Neither of these ideas is adequately represented in existing programming languages, though they are more-or-less theoretically understood at this point. Another day, perhaps.

Last word


I noticed that one of the principal language designers was quoted as follows:
You have to understand that the types are interface types, not implementation types – so the runtime cannot use the type information for anything. The types are simply thrown away during compilation.
That is to say, insofar as this article goes, I think I've only said factual things that the language designers would essentially agree with; in particular, they seem to recognize that their use of the word "types" seems bound to confuse (or troll) others. But calling a "feeble type" an interface type and a "type" an implementation type seems to just be making up words. And, as made-up words go, I really dislike "interface types" as a neologism (certainly it has nothing to do with Java interfaces or anything that comes up when I search for "interface type"). The theorist's critique of Dart is precisely that things you call "types" should define inviolate interfaces and not mere suggestions that are neither enforced nor checked. Calling them interface types makes them sound like contracts, which are not thrown away by the compiler. "Suggestion type" might be a less pejorative version of "feeble type," perhaps? And "implementation types" is a terrible term to use to describe types in a type system, types that (by way of a theorem about canonical forms) can be relied upon by both the programmer and the implementation.

1 Godefroid, Nori, Rajamani, and Tetali call the sound analyses may analyses (though it should perhaps be may not, as a sound analysis precludes a certain behavior) and call the complete analyses must analyses (the error must happen) in the paper "Compositional may-must analysis."
2 This view really isn't fair to the reason that we are interested in type systems and believe they're supposed to be useful, but that has to do with Curry-Howard and the unreasonable effectiveness of mathematics and with other things Shan talked about in his linked post.
3 Honestly, it probably won't help the fool's errand if I try to call the other group's type-based analyses "feeble typing," but here I am, doing it anyway.

Monday, October 3, 2011

Slicing and dicing validity: existing ideas

This is a story of modal logic and the judgmental methodology, a style of presenting and justifying logic that Pfenning and Davies [1] adapted from Martin-Löf's Siena Lectures [2] and which was, in turn, adapted by Bernardo Toninho and me [3]. My goal is to re-present some existing work: I changed the title from "old ideas" to "existing ideas" to remove any implicit negative connotation - my next step is to extend these existing ideas a bit, not to replace these old ideas with better ones.

The topic of this story is, at least at first, an account of modal logic that I have adapted from Pfenning and Davies's presentation [1]. I assume some familiarity with the background; this isn't a comprehensive introduction. Pfenning and Davies's topic was a presentation of modal logic as a logic with two categorical judgments, \(A~{\it true}\) and \(A~{\it valid}\); the intent is that validity captures unconditional truth, which is eventually related to modal necessity.

We collect the categorical judgments of the form \(A~{\it true}\) into true contexts, which are written as \(\Gamma\) or \(\Psi\), and we collect the categorical judgments of the form \(A~{\it valid}\) into valid contexts, which are written as \(\Delta\). It is by use of contexts that categorical judgments give rise to hypothetical judgments, and Pfenning-Davies modal logic has two hypothetical judgments: \(\seq{\Delta}{A~{\it valid}}\) and \(\seq{\Delta; \Gamma}{A~{\it true}}\).

In a system with hypothetical judgments, meaning is given by describing three kinds of principles: the weakening principles (expressing how the context works), the identity principles (expressing how assumptions are used), and the substitution principles (expressing how assumptions are discharged). We rely on these principles as we explain a logic, and we are required to tie the knot eventually, demonstrating that our logic's particular rules satisfy its defining principles. Here are the defining principles for Pfenning-Davies modal logic:

Weakening principles

  • If \(\Delta \subseteq \Delta'\) and \(\seq{\Delta}{A~{\it valid}}\), then \(\seq{\Delta'}{A~{\it valid}}\)
  • If \(\Delta \subseteq \Delta'\), \(\Gamma \subseteq \Gamma'\), and \(\seq{\Delta; \Gamma}{A~{\it true}}\), then \(\seq{\Delta'; \Gamma'}{A~{\it true}}\)

Identity/hypothesis principles

  • \(\seq{\Delta, A~{\it valid}}{A~{\it valid}}\)
  • \(\seq{\Delta; \Gamma, A~{\it true}}{A~{\it true}}\)

Substitution principles

  • If \(\seq{\Delta}{A~{\it valid}}\) and \(\seq{\Delta, A~{\it valid}}{C~{\it valid}}\), then \(\seq{\Delta}{C~{\it valid}}\).
  • If \(\seq{\Delta}{A~{\it valid}}\) and \(\seq{\Delta, A~{\it valid}; \Gamma}{C~{\it true}}\), then \(\seq{\Delta; \Gamma}{C~{\it true}}\).
  • If \(\seq{\Delta; \Gamma}{A~{\it true}}\) and \(\seq{\Delta; \Gamma, A~{\it true}}{C~{\it true}}\), then \(\seq{\Delta; \Gamma}{C~{\it true}}\).

Two judgmental rules define the fundamental relationship between the validity and truth judgments; I'll call the first of these rules valid introduction and the second valid elimination. They capture the notion that validity is unconditional truth: \(A~{\it valid}\) should be entailed by \(A~{\it true}\) holding in the absence of any conditions, and proving \(A~{\it valid}\) should entail \(A~{\it true}\) under any conditions, where "conditions" are represented by the context \(\Gamma\) of true hypotheses. The first rule is the introduction rule for validity: it shows how we verify the judgment \(A~{\it valid}\). The second rule is the elimination rule for validity: it shows that, given a proof of \(A~{\it valid}\), we can use it to show the truth of \(A\) in any context \(\Gamma\).

\[ \infer {\seq{\Delta}{A~{\it valid}}} {\seq{\Delta; \cdot}{A~{\it true}}} \qquad \infer {\seq{\Delta; \Gamma}{A~{\it true}}} {\seq{\Delta}{A~{\it valid}}} \]

The validity judgment is made interesting by its use in defining modal necessity \(\Box A\). Notice how similar the introduction rule for modal necessity is to the elimination rule for the validity judgment! The elimination rule for \(\Box A\), however, acts like a let- or case-expression; it is the kind of elimination rule known as a large elimination, the kind associated with the positive connectives (if you're a fan of polarized logic).

\[ \infer {\seq{\Delta; \Gamma}{\Box A~{\it true}}} {\seq{\Delta}{A~{\it valid}}} \qquad \infer {\seq{\Delta; \Gamma}{C~{\it true}}} {\seq{\Delta; \Gamma}{\Box A~{\it true}} & \seq{\Delta, A~{\it valid}; \Gamma}{C~{\it true}}} \]

A truth-oriented justification of logic

The story above is (I believe) broadly compatible with the story from Pfenning and Davies's original paper; certainly what I called the introduction and elimination rules for validity are just the result of writing down the "Definition of Validity" at the beginning of Section 4 of their paper. However, Pfenning and Davies banish the second half of the definition of validity, what I called the elimination rule, from their actual formal system. In its place, they give a valid hypothesis rule, variously written as \(uhyp\) or \(hyp^*\).

\[ \infer {\seq{\Delta, A~{\it valid}; \Gamma}{A~{\it true}}} {} \]

With this change, the judgment \(\seq{\Delta}{A~{\it valid}}\) now appears only in the conclusion of the validity introduction rule and in the premise of the \(\Box\) introduction rule, and the two rules can be collapsed together. The introduction rule for validity is what is called invertible - the conclusion implies the premise - so we don't fundamentally change the \(\Box\) introduction rule if we replace the premise \(\seq{\Delta}{A~{\it valid}}\) with the equivalent premise \(\seq{\Delta; \cdot}{A~{\it true}}\).

\[ \infer {\seq{\Delta; \Gamma}{\Box A~{\it true}}} {\seq{\Delta; \cdot}{A~{\it true}}} \]

These changes to the logical system, write Pfenning and Davies, are sound with respect to the initial definitions I gave (the \(uhyp\) rule, in particular, is derivable by a combination of the valid hypothesis principle and the elimination rule for validity). However, the resulting system is incomplete with respect to the categorical judgment \(A~{\it valid}\). This necessitates that we give up on the one hypothesis principle and one substitution principle that deal exclusively with validity, and also edit the substitution principle with a premise \(\seq{\Delta}{A~{\it valid}}\) to have the premise \(\seq{\Delta; \cdot}{A~{\it true}}\) instead.
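To spell out that parenthetical, the derivation of \(uhyp\) just stacks the validity hypothesis principle under the (banished) elimination rule for validity:

\[ \infer {\seq{\Delta, A~{\it valid}; \Gamma}{A~{\it true}}} {\infer {\seq{\Delta, A~{\it valid}}{A~{\it valid}}} {}} \]

And the edited substitution principle then reads:

  • If \(\seq{\Delta; \cdot}{A~{\it true}}\) and \(\seq{\Delta, A~{\it valid}; \Gamma}{C~{\it true}}\), then \(\seq{\Delta; \Gamma}{C~{\it true}}\).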

These changes flow naturally out of the intentional goal of explaining the meaning of the logical connectives entirely through the lens of the categorical judgment \(A~{\it true}\). This is a perfectly fine way of explaining S4 modal logic, but I don't see it as a fundamental part of the character of judgmental presentations of logic. For instance, there are plenty of situations (Hybrid LF, Simpson's IK, Constructive Provability Logic) where there is no single categorical judgment for truth, but rather a whole family of indexed judgments, and it's possible to prove things at each of the judgments. This suggests a different story, a different presentation of the same system above. This presentation is one that I've adapted from Jason Reed [4].

Another story

If one is perhaps less interested in the correspondence to modal logic, it's possible to take the judgmental setup that we started with and tease apart the notion of modal necessity just a bit. We do this by introducing two connectives, \(\unicode{x25F8}A\) and \(\unicode{x25FF}A\), with the intentional visual pun that \(\unicode{x25F8}\!\!\!\!\unicode{x25FF} A = \Box A\). The introduction and elimination rules for validity are now properly understood as introduction and elimination rules for \(\unicode{x25FF}\), whereas the introduction and elimination rules for \(\Box\) are now understood as introduction and elimination rules for \(\unicode{x25F8}\).

\[ \infer {\seq{\Delta}{\unicode{x25FF}A~{\it valid}}} {\seq{\Delta; \cdot}{A~{\it true}}} \qquad \infer {\seq{\Delta; \Gamma}{A~{\it true}}} {\seq{\Delta}{\unicode{x25FF}A~{\it valid}}} \] \[ \infer {\seq{\Delta; \Gamma}{\unicode{x25F8}A~{\it true}}} {\seq{\Delta}{A~{\it valid}}} \qquad \infer {\seq{\Delta; \Gamma}{C~{\it true}}} {\seq{\Delta; \Gamma}{\unicode{x25F8}A~{\it true}} & \seq{\Delta, A~{\it valid}; \Gamma}{C~{\it true}}} \]
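As a quick sanity check on the visual pun: chaining the \(\unicode{x25FF}\) introduction rule into the \(\unicode{x25F8}\) introduction rule derives, for \(\unicode{x25F8}\!\!\!\!\unicode{x25FF} A\), exactly the collapsed \(\Box\) introduction rule from the previous section.

\[ \infer {\seq{\Delta; \Gamma}{\unicode{x25F8}\!\!\!\!\unicode{x25FF} A~{\it true}}} {\infer {\seq{\Delta}{\unicode{x25FF} A~{\it valid}}} {\seq{\Delta; \cdot}{A~{\it true}}}} \]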

Now, if these rules are really the only ones that deal with validity — if all the "regular" connectives like implication are defined the traditional way based only on truth...

\[ \infer {\seq{\Delta; \Gamma}{A \supset B~{\it true}}} {\seq{\Delta; \Gamma, A~{\it true}}{B~{\it true}}} \qquad \infer {\seq{\Delta; \Gamma}{B~{\it true}}} {\seq{\Delta; \Gamma}{A \supset B~{\it true}} & \seq{\Delta; \Gamma}{A~{\it true}}} \]

...then it is possible to observe that the way in which we have teased apart box into two parts actually syntactically differentiates the propositions judged by the two categorical judgments: if we're starting from truth, we'll only introduce propositions that live under a \(\unicode{x25F8}\), and we can only meaningfully use valid propositions that have \(\unicode{x25FF}\) as their outermost connective. This observation, which I picked up from Jason Reed, makes it possible to talk about true propositions - those propositions judged as true \((A ::= \unicode{x25F8}P \mid A \supset B \mid a )\) - separately from valid propositions - those propositions judged valid \((P ::= \unicode{x25FF}A)\) [4]. The rules that we have set up essentially enforce this syntactic separation already, however; there's no particular requirement that we enforce it on our own.

In fact, one thing we can do is put another version of implication "up at validity," like this:

\[ \infer {\seq{\Delta}{A \supset B~{\it valid}}} {\seq{\Delta, A~{\it valid}}{B~{\it valid}}} \qquad \infer {\seq{\Delta}{B~{\it valid}}} {\seq{\Delta}{A \supset B~{\it valid}} & \seq{\Delta}{A~{\it valid}}} \]

It's not terribly obvious why you'd want to do this, but as Neel points out to me in the comments, this was first done in the context of linear logic by Wadler and Benton, where it turns out to be quite useful for programming [5,6]. I was previously unaware of that work (doh). Reed used the idea of putting implication up at validity in order to describe both the necessity modality and the lax-logic circle (the "monad") as fragments of the same logic, which is where I got this notion from. If you ignore the "valid" implication, you get modal logic with \(\Box A = \unicode{x25F8}\!\!\!\!\unicode{x25FF} A\), but if you ignore the "true" implication and instead reason at "validity" (at which point it's definitely not quite right to call the old "validity" validity anymore), you get lax logic with \(\bigcirc A = \unicode{x25FF} \unicode{x25F8} A\). That observation is essentially the same one made by the two translations in Figure 6 of Wadler and Benton's paper, though again Wadler and Benton's "truth" judgment was linear, whereas the one I've presented here is persistent; Reed considered both variants [6].

To reiterate, this isn't a very useful story if one is interested in giving a fundamental account of modal logic, but I believe that "teasing apart" the modal operator, and even opening up the possibility of inserting other propositions in the space of "valid propositions" that such a teasing apart naturally creates, raises interesting possibilities. Hopefully in the next post I'll say a bit about that in the context of contextual modal type theory (CMTT).


[1] Frank Pfenning and Rowan Davies, "A Judgmental Reconstruction of Modal Logic," 1999 (published 2001)
[2] Per Martin-Löf, "On the Meanings of the Logical Constants and the Justifications of the Logical Laws" a.k.a. "The Siena Lectures," 1983 (published 1996)
[3] Robert J. Simmons and Bernardo Toninho, "Principles of Constructive Provability Logic," CMU Tech Report 2010.
[4] Jason Reed, "A Judgmental Deconstruction of Modal Logic," unpublished 2009.
[5] Nick Benton, "A Mixed Linear and Non-Linear Logic: Proofs, Terms and Models," CSL 1995.
[6] Nick Benton and Phil Wadler, "Linear Logic, Monads and the Lambda Calculus," LICS 1996.