Talk:Linear logic


Better Introduction[edit]

Please consider motivating the rest of the article with the resource discussion near the end. The article proceeds too quickly into explaining the symbols and formal rules without giving any clue what they mean "intuitively" (not meaning to pun on intuitionistic logic).

TODO:[edit]

  • better examples
  • connexion to proof nets, ludics, etc.
  • linear type theory (Wadler et al)
  • Fix symbols (most symbols show up as boxes) —Preceding unsigned comment added by 132.68.42.146 (talk) 10:11, 30 July 2008 (UTC)[reply]

--Kaustuv Chaudhuri 00:03, 2 Jun 2004 (UTC)

  • make it less chatty. Please snip "non-encyclopedic" parts.

--Kaustuv Chaudhuri 09:06, 2 Jun 2004 (UTC)


There are two kinds of non-commutativity: self-dual and non-self-dual. Discussion required. Charles Stewart 02:47, 12 Aug 2004 (UTC)


"While this freeness of truth is usually what is desired, for example in formal mathematics, but for many applications of logic truth can be abstract and unwieldy." I think the word "While" should be removed, changing the sentence to 2 sub-sentences joined by the conjunction "but". But I don't understand the subject well enough to know if that's what he meant to say. Art LaPella 02:14, Aug 21, 2004 (UTC)

Either the "but" or the "while" should be removed. On a more general note, the present version of this article has major expository issues: it is currently not accessible to the lay reader. Examples that should be illuminating are obscure and ill-constructed. — Kaustuv 03:27, 2004 Aug 21 (UTC)

Internal and external choice[edit]

Can someone please confirm that the names given for the & and ⊕ operators are really internal and external choice, respectively. The reason I ask is that this is exactly the opposite to what I've encountered in other contexts, e.g. process algebra, where an internal choice is one you can't control and an external choice is one you can. I didn't find the names in the papers linked off the bottom of the article (but I only skimmed them).--Malcohol 14:47, 11 August 2006 (UTC)[reply]

I have removed the names as I don't think they are common, and anyway it is probably not meant to be opposite to other computational contexts. 192.75.48.150 18:38, 11 August 2006 (UTC)[reply]

Intuitionistic linear implication[edit]

I expect the answer is {{sofixit}}, but:

Intuitionistic linear logic (ILL) allows only a single conclusion. Unlike CLL, connectives in ILL do not have perfect duals... As a result, linear implication is a basic connective in ILL.

The statement is correct, but the rules for linear implication are not given in the article. They should be. 72.137.20.109 03:52, 2 January 2007 (UTC)[reply]
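
For reference, a sketch of the two-sided rules for linear implication as they appear in standard presentations of ILL (this is not the article's one-sided notation; Γ and Δ are contexts):

    Γ, A ⊢ B                      Γ ⊢ A     Δ, B ⊢ C
   ─────────── (⊸ right)         ─────────────────── (⊸ left)
    Γ ⊢ A ⊸ B                      Γ, Δ, A ⊸ B ⊢ C

The right rule internalises an assumption; in the left rule the context Γ is spent producing the A that the implication consumes.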

Does linear logic have a not function?

To answer the previous two questions: there is a linear negation, but it and linear implication are not given in the table of rules (for the CLL sequent calculus). I can't edit that table, and there is some sense to its organisation for purposes of symmetry, and in fact they are (now) explained after it, but I'll try to make that clearer. --Toby Bartels 20:29, 29 June 2007 (UTC)[reply]
It depends what you mean by a "not". There's a perfectly good involution which (in the presence of contraction and weakening) would be equivalent to classical not, but which is not the same as linear negation (implies falsity). Francis Davey 09:14, 18 October 2007 (UTC)[reply]
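
For anyone following along: in CLL linear negation is usually not given inference rules of its own but is defined on formulas by De Morgan-style dualities (a standard sketch, using the article's (−)⊥ notation):

    (A ⊗ B)⊥ = A⊥ ⅋ B⊥      (A ⅋ B)⊥ = A⊥ ⊗ B⊥
    (A & B)⊥ = A⊥ ⊕ B⊥      (A ⊕ B)⊥ = A⊥ & B⊥
    (!A)⊥ = ?(A⊥)           (?A)⊥ = !(A⊥)
    1⊥ = ⊥    ⊥⊥ = 1    ⊤⊥ = 0    0⊥ = ⊤    (A⊥)⊥ = A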

Mostly, this makes sense, but the explanations for the & and ? operators really need some work. Can someone who understands this look into it?

I really agree with this; we need to talk about implication in ILL. This (I think) introduces a lot of new complications though, as we need to talk about assumptions on the LHS of the turnstile, which is very different from the presentation given here. mkehrt (talk) 12:07, 26 June 2009 (UTC)[reply]

It makes no sense to give the two inference rules for ILL's linear implication when all the context we have is the one-sided presentation of CLL. This is a case for a new article, which is a bit of a job. — Charles Stewart (talk) 12:43, 26 June 2009 (UTC)[reply]
Good point. I'm not sure LL really needs two articles, but given the depth of presentation in this article, it may make sense to split it. I'm not really the person for the job, though, so if anyone knowledgeable is up to the job...? —Preceding unsigned comment added by Mkehrt (talkcontribs)
I am not sure I see the need to specifically discuss ILL implication (as opposed to just adding more discussion of implication in CLL) in the current version of the article. What specifically did you (Mkehrt, Charles, ...) have in mind? Noamz (talk) 04:48, 27 June 2009 (UTC)[reply]
The problem is that as it stands now the page is incomprehensible. The BNF grammar of the logic doesn't include an implication operator, and then half of the page tries to explain the logic in terms of an operator that doesn't exist. 2A02:A448:3A80:1:B86E:6578:A8A3:5A2F (talk) 12:43, 2 June 2023 (UTC)[reply]
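A note on the grammar point: in one-sided presentations linear implication is normally a defined connective rather than a primitive, which is why it can be absent from the BNF; the standard definition is:

    A ⊸ B  :=  A⊥ ⅋ B

so every statement about ⊸ can be unfolded into one about ⅋ and the dual.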

Vending-machine example[edit]

I know it seems less professional, but the vending-machine example was much clearer to me when it was in first person rather than third. Having just read a bunch of stuff on linear logic, I think that I finally understand things (at least as far as the distinction between times and par; 1 versus bottom is still subtle). So I'm going to rewrite much of that (especially the bit on par) using what seems the clearest language of all (IMHO): the second person! --Toby Bartels 20:32, 29 June 2007 (UTC)[reply]

first example ?![edit]

The introduction's example is wrong:

<< One example: churning cream into butter. Suppose we represent a quart of cream by the proposition A, and a pound of butter by B. To state the fact that a quart of cream can be churned into a pound of butter, we might write the implication A ⇒ B. But in ordinary (classical or intuitionistic) logic, from A and A ⇒ B one can conclude A ∧ B. >>

This seems to say that "(A ∧ (A ⇒ B)) ⇒ (A ∧ B)" does not fit reality. Actually, what is false here is expressing "a quart of cream can be churned into a pound of butter" as "A ⇒ B". There is no implication here in the formal-logic sense of the term; there is a "potential transformation" instead. So, indeed, the later conclusions are wrong too...

Sorry not to propose a better example -- I do not know much about linear logic (which is why I was reading the article ;-). This one certainly cannot convince any reader that the topic is of great interest :(.

Denis --Denispir (talk) 13:50, 10 March 2009 (UTC)[reply]

possible new version[edit]

<< One example: cream can be churned into butter. Let us try to express that using classical logic. Suppose we represent the fact that we have a quart of cream by the proposition A, and the fact that we have a pound of butter by B. To state that a quart of cream can be churned into a pound of butter, a naive attempt might be to write the implication A ⇒ B. But in ordinary (classical or intuitionistic) logic, from A and A ⇒ B one can conclude A ∧ B, meaning that we now have both cream and butter.

What is wrong in our attempt is to "translate" the possible churning of cream into butter as a logical implication. This shows, in any case, that classical logic is often unable to express simple facts about reality. >>

[Please improve and correct, for english is not my mother tongue.] Denis --Denispir (talk) 14:17, 10 March 2009 (UTC)[reply]
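
To make the contrast in the proposed example concrete, here is a sketch using the connectives introduced later in the article (⊸ for linear implication, ⊗ for multiplicative conjunction); it is only illustrative:

    classical:   A, A ⇒ B ⊢ A ∧ B        (A may be reused)
    linear:      A, A ⊸ B ⊢ B            (A is consumed by the churning)
    but not:     A, A ⊸ B ⊢ A ⊗ B        (this would need contraction on A)

In the linear reading the cream is used up in producing the butter, which is exactly what the informal example is trying to convey.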

I'm not an expert in linear logic, though I am familiar with logic, and the example reads as nonsense to me. 'Suppose we represent a quart of cream by the proposition A' - a quart of cream is not a proposition for a start. Ridiculous. 93.96.236.8 (talk) 10:52, 10 April 2009 (UTC)[reply]

It is in linear logic. Note that one interpretation of LL is as a logic of resources rather than truths. A proposition is merely a syntactic structure, and any "meaning" we give it is only a product of how we interpret it. In propositional logic (the terminology here is not so great), we interpret these propositions as statements whose truth we want to prove. In LL, however, we interpret these propositions as resources (i.e., things) whose existence we want to prove. mkehrt (talk) 22:37, 26 June 2009 (UTC)[reply]
"It is in linear logic." I had exactly the same problem as Denispir (and English is my native language). If a pound of butter is a proposition, then I don't understand what a proposition is in this logic, and IMHO there should be more of an explanation of the term early on (unless I missed it, which is possible). More explanation--for the "lay" person--is in general required, as others have commented above. (BTW, a quart of cream should make about 4 pounds of butter...blame the English system of measurement.)Mcswell (talk) 14:38, 27 April 2020 (UTC)[reply]

Major revision[edit]

I made a first pass at improving the article -- comments welcome. I have some time now to work on this article -- I'd still like to add a section on models of linear logic, and give more explanation of the applications of linear logic. Noamz (talk) 22:13, 10 June 2009 (UTC)[reply]

A context (Γ, Δ) is a list of propositions A1, ..., An ??[edit]

What does the sentence above mean? The list A1, ..., An does not match the pattern ( Γ, Δ), so I am at a loss what Γ and Δ thus introduced mean individually. Could somebody please rephrase this sentence? Marc van Leeuwen (talk) 11:17, 2 February 2011 (UTC)[reply]

This means that either Γ or Δ can be used as a metavariable for a context. Note that the parentheses are not in math mode, and indicate a parenthetical remark in the sentence, rather than being part of the notation. mkehrt (talk) 04:16, 11 February 2011 (UTC)[reply]
Yes this is a standard notational convention, but there's no harm in being more explicit. I've edited the paragraph to do this, and also tried to make a few other minor improvements. Noamz (talk) 16:13, 12 February 2011 (UTC)[reply]

On this note, the turnstile doesn't appear to work in the equations the way it's explained in the text; in the text it appears to be an infix operator, but in the equations symbols only appear after it. Also, in the semi-distributivity formula, there is a symbol - a curly p or rho - which does not appear anywhere else and appears to be some sort of binary operator. Cyrapas (talk) 00:12, 4 April 2013 (UTC)[reply]

The Classical linear logic of Girard has an operator (the dual introduced in the article) which has the property that A |- G is the same as |- ~A, G (where I am using ~ for the dual - sorry I don't know how to do this sort of thing in wiki code). In other words you can always shift formulae from the left to the right hand side of the turnstile. For economy one can then present the logic with no formulae on the left hand side. I hope that explains the turnstile question. I think the "curly p" you see is an upside-down ampersand. It is an infix operator - the multiplicative "or". The intuitive way to think of it in proof-style is that A p B means "if I have a refutation of A I can prove B AND if I have a refutation of B I can prove A" (where I have used the "p" letter for the upside-down ampersand, again because I don't know how to do this right). Francis Davey (talk) 09:20, 4 April 2013 (UTC)[reply]
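
Spelling out the convention described above (a sketch, writing A⊥ for the dual as the article does):

    A₁, ..., Aₙ ⊢ B₁, ..., Bₘ     corresponds to     ⊢ A₁⊥, ..., Aₙ⊥, B₁, ..., Bₘ

so every two-sided sequent has a one-sided counterpart with all formulas on the right. The "curly p" is ⅋ ("par"), and A ⅋ B is interderivable with A⊥ ⊸ B (and, by symmetry, with B⊥ ⊸ A), which matches the informal reading in terms of refutations given above.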

algebraic properties[edit]

I think the section "remarkable formulae" should be greatly expanded (to something like the equations and implications listed here: http://ncatlab.org/nlab/show/linear+logic and here: http://iml.univ-mrs.fr/~lafont/pub/llpages.pdf) and probably brought closer to the beginning. As someone unfamiliar with sequent calculus, I find that these formulae (together with the vending machine analogy) are very helpful for understanding the topic. — Preceding unsigned comment added by Opensofias (talkcontribs) 08:48, 26 July 2014 (UTC)[reply]
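
A few candidates for such a section, for concreteness (a sketch; these are standard equivalences of classical linear logic, not taken from the current article text):

    A ⊸ B  ≡  A⊥ ⅋ B                      (implication via the dual and par)
    A ⊗ (B ⊕ C)  ≡  (A ⊗ B) ⊕ (A ⊗ C)      (⊗ distributes over ⊕)
    !(A & B)  ≡  !A ⊗ !B                   (the exponential turns & into ⊗)
    !A  ≡  !A ⊗ !A                         (!-resources can be duplicated)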

Doubtful use of the concept of a proposition[edit]

Suppose we represent a candy bar by the atomic proposition candy, and a dollar by $1.[dubious]

But a proposition refers to a state of affairs, an assertion about the world that can be true or false -- not to an object. Does the original source really make this conflation?

We seem to be entering the "Politicians lie in cast iron sinks" territory of verbal recombination. See http://genius.com/Douglas-hofstadter-chromatic-fantasy-and-feud-annotated 178.38.97.233 (talk) 12:42, 2 May 2015 (UTC)[reply]

Changed statement to Suppose we represent having a candy bar by the atomic proposition candy, and having a dollar by $1. Ghartshaw (talk) 05:04, 27 October 2015 (UTC)[reply]

Is Girard one-sided sequent calculus the best way to explain the logic?[edit]

It's true that the one-sided sequents give a more economical presentation, and it's easier to translate the one-sided form into further understandings (like the resource lambda calculus). But it's a harder presentation to grasp. I think it would be easier to first explain things in the two-sided form, then later explain how it can be translated to one-sided form (and explicitly show how the dual harmony develops).

First, for people who aren't used to one-sided sequents, that's an extra complexity to get over.

But even people who are familiar with the one-sided form are still going to be stuck on grasping the rules until they've fully internalized the duality rules. And since those rules are the definition of the operators, and the duality rules are hard to grasp intuitively without already grasping what the operators do, readers have to see the entire circle and take it in all at once instead of going step by step.

For example, look at the conjunction rules. If X entails A and Y entails B, then X, Y (or X ⊗ Y) entails A ⊗ B; if a single premise multiset Z (such as X ⊗ Y) entails both A and B, then it entails A & B. That's all you need to understand multiplicative vs. additive. Plus, you immediately get that the comma is multiplicative conjunction in the premises and multiplicative disjunction in the conclusions. Sure, you need almost twice as many rules, but they're all rules that mean something on first reading. By contrast, the one-sided form requires you to have already internalized the duals of each operator—which is hard to do before you understand what the operators mean, and since the operators are defined by the very rules you're trying to understand… --50.0.128.145 (talk) 02:25, 3 December 2016 (UTC)[reply]
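
For comparison, the two-sided rules being described read roughly as follows (a sketch in standard notation; Γ and Δ are contexts):

    Γ ⊢ A     Δ ⊢ B                     Γ ⊢ A     Γ ⊢ B
   ────────────────── (⊗ right)        ────────────────── (& right)
     Γ, Δ ⊢ A ⊗ B                          Γ ⊢ A & B

The multiplicative rule splits the context between the premises; the additive rule shares it, which is the distinction the comment is pointing at.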

Intuitive descriptions of the operators?[edit]

Another way of solving the same problem mentioned in the last section might be to give a good-enough intuitive explanation for each operator before getting into the rules. Once you do that, the more economical one-sided presentation of the rules should be fine. I can think of three ways to do this.

The first is just moving (maybe a shortened version of?) the resource explanations up top, although this has the problem of intuitively defining par. You can put it in terms of trading ("A par B" means that we can trade the negation of A to get B), but that's not nearly as clear as alternative vs. cumulative conjunction. (In the case of the vending machine, "($1 negated) par (candy & chips & drink)" almost works, but it also means that we could trade our choice of candy, chips, and drink for $1, which is how money works, but not how vending machines work…)

Another way to do it is in terms of provability: "A or B" means "it's provable that if not A then B, and vice-versa", but "A par B" means "if A is disprovable then B is provable, and vice-versa", and so on.

A third way, which may be a lot more verbose (and basically equivalent to showing the two-sided sequent rules), is to show how "and" splits into "times" and "with" and how "or" splits into "plus" and "par" by directly showing what taking away weakening does. We can define "and" as a language-internal notation for a comma in the premises, or as a way of conjoining the premises of two separate sequents, and it's trivial to prove that these definitions are the same. But any such proof requires weakening, which is what we're giving up. So, if they're not equivalent definitions, let's let them define different things: "times" is the comma in the premises, while "with" is conjoining separate premises. And likewise for the disjunctives. (I realize that my explanation here is terrible, but maybe someone can come up with a better one.) --50.0.128.145 (talk) 03:34, 3 December 2016 (UTC)[reply]
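
On the trading reading of par mentioned above, the relevant identity is just the definition of linear implication (a sketch reusing the $1/candy notation of the vending-machine example):

    $1 ⊸ (candy & chips & drink)   ≡   $1⊥ ⅋ (candy & chips & drink)

so the "trading" reading of ⅋ is literally the implication unfolded through the dual of $1, which may be why ⅋ is hard to motivate on its own.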

External links modified[edit]

Hello fellow Wikipedians,

I have just modified 2 external links on Linear logic. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 01:24, 22 March 2017 (UTC)[reply]

encoding classical logic[edit]

Over at the nForum, Mike Shulman raised a question about the claim in the article that classical implication can be encoded as !A ⊸ ?B. At first I thought it was correct, but then realized there is a counterexample, so I've gone ahead and modified the article. Now, looking in the history, I see that it was me who introduced this mistaken translation way back in June 2009. That might explain why I was "pretty sure" it was correct! Sorry about this, mea culpa! Noamz (talk) 11:27, 18 October 2017 (UTC)[reply]
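
For contrast with the disputed classical case, the intuitionistic encoding is standard and uncontroversial (a sketch of Girard's translation of intuitionistic implication, stated here only for comparison):

    (A ⇒ B)°  =  !A° ⊸ B°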

"Logic of Unity (LU)"?[edit]

Has anyone heard of the "Logic of Unity (LU)" mentioned in the See Also section? The page itself doesn't exist, and you have to search very explicitly unless you enjoy wading through passages like this:

   Once it is known that the total dimension of perception - the logic of unity - functions in each and every person, then and only then can the field of comparative thought and philosophy be cleared of all preconceptions and move into a more fruitful exchange of ideas.

Moving on... [Linear Logic as a Framework for Specifying Sequent Calculus] (Miller, Pimentel) references [On the Unity of Logic] (Girard), which brought me to Intuitionistic Logic (e.g. A ≠ !(!A)) and my brain promptly flattened against the back of my skull in terror. The general concept of LU seems simple enough on the surface - any axiomatic systems of logic are compatible as long as the strength of axioms is conserved across the boundaries - but I can't grasp enough to appreciate its novelty, especially with respect to Linear Logic.

Could someone take a look and determine if LU would warrant its own page? (While I'm at it, are "Logic of Unity" and "Unity of Logic" interchangeable? Girard's paper was titled "Equipe de Logique", but I imagine "Logique universelle" contributed to the acronym LU and "Logic of Unity" emerged out of convenience.) Linear logic sounds awfully similar to LU, but the paper I linked clarifies:

   In the unified calculus LU that we present in this paper, classical, linear and intuitionistic logics appear as fragments.

Sobeita (talk) 23:15, 25 December 2017 (UTC)[reply]

In Linear Logic: Its Syntax and Semantics (21), Girard writes
   LU is little more than a clever reformulation of linear sequent calculus.
This inclines me to think it probably doesn't need its own article, but I'd be interested if others felt differently. Shonfeder (talk) 21:32, 9 November 2023 (UTC)[reply]

Readability[edit]

Two comments on the article's present readability:

  1. Even in the article's lead section, there's so much technicality that it'd be nearly impossible for a non-specialist to grasp why linear logic exists – what it's good for – and how it's different from "everyday" classical logic.
  2. When symbols are introduced, it'd be great to tell the reader how to say them. For example, the top (⊤) and bottom (⊥) symbols are used, but never named.

yoyo (talk) 19:17, 22 August 2018 (UTC)[reply]

What does !Γ mean?[edit]

It is never explained what the exponential before a context means. DerSpezialist (talk) 09:41, 12 March 2024 (UTC)[reply]
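
For the record, the usual convention (an assumption that the article follows standard practice) is that the exponential distributes over the whole context:

    if  Γ = A₁, ..., Aₙ   then   !Γ = !A₁, ..., !Aₙ

so, for instance, the promotion rule "from !Γ ⊢ A infer !Γ ⊢ !A" requires every hypothesis to already be of the form !Aᵢ.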