*focusing lets you treat* **propositions** *as* **rules**. This is not an especially new idea if you are "A Twelf Person," but the details are still a bit peculiar.

Let's start with a little baby logical framework. Here are the types:

    A ::= A → A | P⁺ | P⁻

Those `P⁺` and `P⁻` are the *atomic propositions*, and there can be as many of them as we want.
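For concreteness, this little grammar is small enough to render directly as a datatype. This is just an illustrative encoding of my own (the `Pos`/`Neg`/`Imp` names are made up, and atoms are tagged with strings), not anything from a particular implementation:

```python
from dataclasses import dataclass

# A ::= A → A | P⁺ | P⁻, with atoms named by strings.
@dataclass(frozen=True)
class Pos:          # positive atomic proposition P⁺
    name: str

@dataclass(frozen=True)
class Neg:          # negative atomic proposition P⁻
    name: str

@dataclass(frozen=True)
class Imp:          # implication A → B
    ante: object    # A
    succ: object    # B

# For example, the proposition (p⁻ → q⁺) → r⁻ is:
example = Imp(Imp(Neg("p"), Pos("q")), Neg("r"))
```

The `frozen=True` makes propositions hashable, so later on a context `Γ` can simply be a set of them.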

## Focusing, real quick

There are three judgments that we need to worry about. `Γ ⊢ [ A ]` is the *right focus* judgment, `Γ[ A ] ⊢ Q` is the *left focus* judgment, and `Γ ⊢ A` is the *out-of-focus* judgment.

Okay. So focusing (any sequent calculus presentation of logic, really) encourages you to read rules from the bottom to the top, and that's how the informal descriptions will work. The first set of rules deals with right focus, where you have to prove `A` *right now*. If you are focused on a positive atomic proposition, it has to be available *right now* as one of the things in the context. Otherwise (if you are focused on a negative atomic proposition or `A → B`), just try to prove it regular-style.

    P⁺ ∈ Γ
    -----------
    Γ ⊢ [ P⁺ ]

    Γ ⊢ P⁻
    ----------
    Γ ⊢ [ P⁻ ]

    Γ ⊢ A → B
    -------------
    Γ ⊢ [ A → B ]

The second set of rules deals with left focus. One peculiar bit: we write left focus as `Γ[ A ] ⊢ Q`, and by `Q` we mean either a positive atomic proposition `P⁺` or a negative atomic proposition `P⁻`. If we're in left focus on a positive atom, then we stop focusing and just add `P⁺` to the set of antecedents `Γ`, but if we're in left focus on a negative atomic proposition `P⁻`, then we *must* be trying to prove `P⁻` on the right *right now* in order for the proof to succeed. Then, finally, if we're left focused on `A → B`, we have to prove `A` in right focus and `B` in left focus.

    Γ, P⁺ ⊢ Q
    ------------
    Γ[ P⁺ ] ⊢ Q

    -------------
    Γ[ P⁻ ] ⊢ P⁻

    Γ ⊢ [ A ]    Γ[ B ] ⊢ Q
    -----------------------
    Γ[ A → B ] ⊢ Q

Finally, we need rules that deal with out-of-focus sequents. If we have an out-of-focus sequent and we're trying to prove `P⁺`, then we can go ahead and finish if `P⁺` is already in the context. There is no rule for directly proving `P⁻`, but if we have a positive or negative atomic proposition that we're trying to prove, we can left-focus and work from there. And if we're trying to prove `A → B`, we can assume `A` and keep on trying to prove `B`.

    P⁺ ∈ Γ
    -------
    Γ ⊢ P⁺

    A ∈ Γ    A is not a positive atomic proposition    Γ[ A ] ⊢ Q
    -------------------------------------------------------------
    Γ ⊢ Q

    Γ, A ⊢ B
    -----------
    Γ ⊢ A → B

There are a lot of different similar presentations of focusing, most of which amount to the same thing, and most of which take some shortcuts. This one is no different, but the point is that this system is "good enough" that it lets us talk about the two big points.
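Taken together, the three judgments read directly as a backward-chaining proof search procedure. Below is a minimal sketch under my own assumptions: the `Pos`/`Neg`/`Imp` dataclass encoding of propositions is made up, a context is a set of propositions, and a depth bound is added because the naive search can loop (e.g., by repeatedly left-focusing on the same hypothesis). This is an illustration of the rules, not a serious prover:

```python
from dataclasses import dataclass

# Hypothetical encoding of A ::= A → A | P⁺ | P⁻ (names are my own).
@dataclass(frozen=True)
class Pos:
    name: str

@dataclass(frozen=True)
class Neg:
    name: str

@dataclass(frozen=True)
class Imp:
    ante: object
    succ: object

def right_focus(gamma, a, depth):
    """Γ ⊢ [ A ]"""
    if isinstance(a, Pos):
        return a in gamma              # P⁺ must be in Γ *right now*
    return prove(gamma, a, depth)      # blur on P⁻ or on A → B

def left_focus(gamma, a, q, depth):
    """Γ[ A ] ⊢ Q"""
    if isinstance(a, Pos):
        return prove(gamma | {a}, q, depth)         # stop focusing, add P⁺ to Γ
    if isinstance(a, Neg):
        return a == q                  # must be proving exactly P⁻ right now
    return (right_focus(gamma, a.ante, depth) and   # A in right focus...
            left_focus(gamma, a.succ, q, depth))    # ...and B in left focus

def prove(gamma, c, depth=20):
    """Γ ⊢ C, out of focus. The depth bound keeps naive search finite."""
    if depth == 0:
        return False
    if isinstance(c, Imp):
        return prove(gamma | {c.ante}, c.succ, depth - 1)  # assume A, prove B
    if isinstance(c, Pos) and c in gamma:
        return True                    # finish: P⁺ is already in the context
    # otherwise left-focus on some non-positive-atom member of Γ
    return any(left_focus(gamma, a, c, depth - 1)
               for a in gamma if not isinstance(a, Pos))
```

For instance, `prove(set(), Imp(Neg("p"), Neg("p")))` holds, while the bare atom `Neg("p")` is unprovable from an empty context.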

The first big point about focusing is that it's *complete* - any sequent calculus or natural deduction proof system for intuitionistic logic will prove exactly the same things as the focused sequent calculus. Of course, the "any other sequent calculus" you picked probably won't have a notion of positive and negative atomic propositions. That's the second big point: atomic propositions can be assigned as either positive or negative, but a given atomic proposition always has to be assigned the *same* positive-or-negativeness (that positive-or-negativeness is called *polarity*, btw). And on a similar note, you can change an atomic proposition's polarity if you change it everywhere. This may radically change the structure of a proof, but the same things will definitely be provable. Both of these things, incidentally, were noticed by Andreoli.

## Synthetic inference rules

An idea that was also noticed by Andreoli but that was really developed by Kaustuv Chaudhuri is the idea that, when talking about a focused system, we should really think about proofs as being made up of *synthetic inference rules*, which are an artifact of focusing. The particular case of unfocused sequents where the conclusion is an atomic proposition, `Γ ⊢ Q`, is a special case that we can call *neutral sequents*. The only way we can prove a neutral sequent is to pull something out of the context and either finish (if the thing in the context is the positive atomic proposition we want to prove) or go into left focus. For instance, say that it is the case that `P⁻ → Q⁻ → R⁻ ∈ Γ`. Then the following derivation consists only of choices that we *had* to make if we left-focus on that proposition.

                     ...
                    Γ ⊢ Q⁻
       ...         --------    ----------
      Γ ⊢ P⁻       Γ ⊢ [Q⁻]    Γ[R⁻] ⊢ R⁻
     --------      ----------------------
     Γ ⊢ [P⁻]         Γ[Q⁻ → R⁻] ⊢ R⁻
     --------------------------------
           Γ[P⁻ → Q⁻ → R⁻] ⊢ R⁻          P⁻ → Q⁻ → R⁻ ∈ Γ
     -----------------------------------------------------
                           Γ ⊢ R⁻

This is a proof that has two leaves which are neutral sequents and a conclusion which is a neutral sequent, and where all the choices (including the choice of what the conclusion was) were totally forced by the rules of focusing. Therefore, we can cut out all the middle steps (which are totally determined anyway) and say that we have this *synthetic inference rule*:

    P⁻ → Q⁻ → R⁻ ∈ Γ    Γ ⊢ P⁻    Γ ⊢ Q⁻
    --------------------------------------
                    Γ ⊢ R⁻

This synthetic inference rule is more compact and somewhat clearer than the rule with all the intermediate focusing steps. As a side note, proof search with the inverse method is often much faster, too, if we think about these synthetic inference rules instead of the regular rules: that's part of the topic of Kaustuv Chaudhuri's and Sean McLaughlin's Ph.D. theses. Chaudhuri calls these things "derived rules" in his Ph.D. thesis, but I believe he is also the originator of the terms "synthetic connective" and "synthetic inference rule."
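This collapsing of forced steps can itself be mechanized. Here is a small sketch of my own (made-up names, reusing the hypothetical `Pos`/`Neg`/`Imp` encoding of propositions from before) that computes the synthetic inference rule for left focus on a proposition whose head is a negative atom: positive-atom premises get written into the context, and every other premise is flattened into a subgoal with whatever extra hypotheses it contributes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pos:          # positive atomic proposition P⁺
    name: str

@dataclass(frozen=True)
class Neg:          # negative atomic proposition P⁻
    name: str

@dataclass(frozen=True)
class Imp:          # implication A → B
    ante: object
    succ: object

def spine(a):
    """Flatten A1 → ... → An → H into ([A1, ..., An], H)."""
    prems = []
    while isinstance(a, Imp):
        prems.append(a.ante)
        a = a.succ
    return prems, a

def synthetic_rule(prop):
    """Synthetic rule for left focus on prop (head must be a negative atom).

    Returns (ctx, subgoals, head): ctx lists the positive-atom premises
    that, by convention, extend Γ in every sequent of the rule; each
    subgoal (hyps, g) stands for a premise Γ, hyps ⊢ g; head is the
    conclusion of the rule."""
    prems, head = spine(prop)
    assert isinstance(head, Neg), "a positive head gives an arbitrary conclusion"
    ctx, subgoals = [], []
    for a in prems:
        if isinstance(a, Pos):
            ctx.append(a)               # a premise Q⁺ ∈ Γ: write Q⁺ into Γ
        else:
            subgoals.append(spine(a))   # Γ ⊢ A flattens to Γ, A1, ... ⊢ H
    return ctx, subgoals, head

# P⁻ → Q⁺ → R⁻ gives the rule  Γ, Q⁺ ⊢ P⁻  over  Γ, Q⁺ ⊢ R⁻
print(synthetic_rule(Imp(Neg("p"), Imp(Pos("q"), Neg("r")))))
```

Running it on `(P⁻ → Q⁺) → R⁻` likewise recovers the higher-order rule discussed below: one subgoal `Γ, P⁻ ⊢ Q⁺` with conclusion `Γ ⊢ R⁻`.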

Let's do a few more examples. First, let's look at a synthetic inference rule for a proposition that has positive atomic propositions in its premises:

                   Q⁺ ∈ Γ
       ...        --------    ----------
      Γ ⊢ P⁻      Γ ⊢ [Q⁺]    Γ[R⁻] ⊢ R⁻
     --------     ----------------------
     Γ ⊢ [P⁻]        Γ[Q⁺ → R⁻] ⊢ R⁻
     --------------------------------
           Γ[P⁻ → Q⁺ → R⁻] ⊢ R⁻          P⁻ → Q⁺ → R⁻ ∈ Γ
     -----------------------------------------------------
                           Γ ⊢ R⁻

By convention, when one of the premises is of the form `Q⁺ ∈ Γ`, we go ahead and write the premise `Q⁺` into the context everywhere, so the synthetic inference rule for this proposition is:

    P⁻ → Q⁺ → R⁻ ∈ Γ    Γ, Q⁺ ⊢ P⁻
    --------------------------------
              Γ, Q⁺ ⊢ R⁻

If the conclusion ("head") of the proposition is a positive atom instead of a negative one, then we end up with an arbitrary conclusion.

        ...          ...
      Γ ⊢ P⁻      Γ, Q⁺ ⊢ S
     --------     ----------
     Γ ⊢ [P⁻]     Γ[Q⁺] ⊢ S
     ----------------------
         Γ[P⁻ → Q⁺] ⊢ S          P⁻ → Q⁺ ∈ Γ
     ---------------------------------------
                    Γ ⊢ S

The synthetic inference rule looks like this, where `S` is required to be an atomic proposition, but it can be either positive or negative:

    P⁻ → Q⁺ ∈ Γ    Γ ⊢ P⁻    Γ, Q⁺ ⊢ S
    -----------------------------------
                  Γ ⊢ S

If we have a higher-order premise (that is, an arrow nested to the left of an arrow - `(P⁻ → Q⁺) → R⁻` is one such proposition), then we gain new assumptions in some of the branches of the proof. Note that the basic "shape" of this rule would not be affected if we gave `P⁻` or `Q⁺` the opposite polarity - synthetic inference rules are a little less sensitive to the polarity of atoms within higher-order premises.

          ...
      Γ, P⁻ ⊢ Q⁺
     -------------
     Γ ⊢ P⁻ → Q⁺
     --------------    ----------
     Γ ⊢ [P⁻ → Q⁺]     Γ[R⁻] ⊢ R⁻
     ----------------------------
        Γ[(P⁻ → Q⁺) → R⁻] ⊢ R⁻          (P⁻ → Q⁺) → R⁻ ∈ Γ
     ------------------------------------------------------
                           Γ ⊢ R⁻

The synthetic inference rule, one more time, looks like this:

    (P⁻ → Q⁺) → R⁻ ∈ Γ    Γ, P⁻ ⊢ Q⁺
    ----------------------------------
                  Γ ⊢ R⁻

## Application to logical frameworks

One annoyance in all of these derived rules is that each of them had a premise like `(P⁻ → Q⁺) → R⁻ ∈ Γ`. However, in a logical framework, we usually define a number of propositions in some "signature" Σ, and consider these propositions to be always true. Therefore, given any finite signature, we can "compile" that signature into a finite set of synthetic inference rules, add those to our logic, and *throw away the signature* - we don't need it anymore, as the synthetic inference rules contain precisely the logical information that was contained in the signature. Hence the motto, which admittedly may need some work:

*focusing lets you treat* **propositions** *as* **rules**.

This is a strategy that hasn't been explored too much in logics where atomic propositions have mixed polarity - Jason Reed and Frank Pfenning's constructive resource semantics papers are the only real line of work that I'm familiar with, though Vivek's comment reminds me that I learned about the idea by way of Jason from Vivek and Dale Miller's paper "A framework for proof systems," section 2.3 in particular. (They in turn got it from something Girard wrote in French, I believe. Really gotta learn French one of these days.) The big idea here is that this is expressing the *strongest possible form of adequacy* - the synthetic inference rules that your signature gives rise to have an exact correspondence to the original, "on-paper" inference rules.

If this is our basic notion of adequacy, then I claim that everyone who has ever formalized the sequent calculus in Twelf has actually wanted positive atomic propositions. Quick, what's the synthetic connective corresponding to this pseudo-Twelf declaration of `∨L` in the sequent calculus?

    ∨L : (hyp A → conc C)
       → (hyp B → conc C)
       → (hyp (A ∨ B) → conc C)

If you thought this:

    Γ, hyp (A ∨ B), hyp A ⊢ conc C    Γ, hyp (A ∨ B), hyp B ⊢ conc C
    ---------------------------------------------------------------- ∨L
    Γ, hyp (A ∨ B) ⊢ conc C

then what you wrote down corresponds to what we like to write in "on-paper" presentations of the intuitionistic sequent calculus, but it is *not* the correct answer. Twelf has only negative atomic propositions, so the correct answer is this:

    Γ ⊢ hyp (A ∨ B)    Γ, hyp A ⊢ conc C    Γ, hyp B ⊢ conc C
    --------------------------------------------------------- ∨L
    Γ ⊢ conc C

This is still adequate in the sense that *complete* on-paper sequent calculus proofs are in one-to-one correspondence with the *complete* LF proofs: the reason that is true is that, when I am trying to prove `Γ ⊢ hyp (A ∨ B)`, by a global invariant of the sequent calculus I can only succeed by left-focusing on some `hyp (A ∨ B) ∈ Γ` and then immediately succeeding. However, the *partial* proofs that focusing and synthetic connectives give rise to are *not* in one-to-one correspondence.

In order to get the rule that we desire, of course, we need to think of `hyp A` as a *positive* atomic proposition (and `conc C` as negative). If we do that, then the first proposed synthetic inference rule is dead-on correct.

## Poll

Hey, I'm kind of new at logicblogging and don't really know who is following me. This was really background for a post I want to write in the future. Background-wise, how was this post?

**[Update Nov 11, 2010]** Vivek's comments reminded me of the forgotten source for the "three levels of adequacy," Vivek and Dale's "A framework for proof systems," which is probably a more canonical source than Kaustuv's thesis for using these ideas for representation. Also, the tech report mentioned in Vivek's comment replays the whole story in intuitionistic logic and is very close to the development in this blog post.

Hi,

I was wondering whether you've seen the paper:

"A framework for proof systems"

that Dale Miller and I wrote last year. It seems to contain the same idea that you propose, but in a focused linear logic proof system for classical logic, a la Andreoli's proof system. There was also a follow-up tech-report showing that our results could be replayed in focused intuitionistic logic.

Anders Starcke Henriksen. Using LJF as a Framework for Proof Systems. Technical Report, University of Copenhagen, 2009.

Best,

Vivek

YES THAT'S IT! Since neither Jason nor I know French, I'm quite certain that an early draft of that paper was Jason's reference point (and not Girard's "Le Point Aveugle: Cours de Logique: Tome 1, Vers la Perfection") for the "three levels of adequacy" (Section 2.3).

I was, of course, aware that your work with Dale was considering these same ideas because the idea is very clearly being used in "Algorithmic specifications in linear logic with subexponentials," but I'd lost the particular reference to the "three levels of adequacy."

I was, however, unaware of the tech report you mentioned; it's definitely after the same ideas I was discussing here. One follow-up to it: that tech report's conclusion mentions "This means that it would be impossible to encode nicely in LJF, linear systems or systems with multiple conclusions that do not allow weakening. As LLF uses linear logic as a base these systems should be possible to encode in LLF." Recent unpublished work by Frank Pfenning and Jason Reed discusses how this (and more!) is doable by adding a term language with a dash of algebraic properties. The most recent draft, "Focus-Preserving Embeddings of Substructural Logics in Intuitionistic Logic", is available from Frank's web page. Critically, while they need no positive connectives, positive-polarity atoms play an important role.

I think this sort of algebraic or hybrid formulation in an LJF-family logic is where I see the CMU group currently looking in our attempts to formalize substructural logics and formalize systems within substructural logics; the language in my thesis proposal actually makes me a bit of a lone holdout in this area, though I think even I am wavering :).