Linear logic is a refinement of classical and intuitionistic logic. Instead of emphasizing truth, as in classical logic, or proof, as in intuitionistic logic, linear logic emphasizes the role of formulas as resources. To achieve this focus, linear logic does not allow the usual structural rules of contraction and weakening to apply to all formulas but only to those formulas marked with certain modals. Linear logic contains a fully involutive negation while maintaining a strong constructive interpretation. Linear logic also provides new insights into the nature of proofs in both classical and intuitionistic logic. Given its focus on resources, linear logic has found many applications in Computer Science.
Linear logic was introduced by Jean-Yves Girard in his seminal work (Girard 1987). While the origin of the discovery of this new logic comes from a semantical analysis of the models of System F (or polymorphic \(\lambda\)-calculus), one can see the whole system of linear logic as a bold attempt to reconcile the beauty and symmetry of classical logic afforded by De Morgan's dualities with the quest for constructive proofs that had led to intuitionistic logic.
Indeed, one could present a fragment of linear logic, known as multiplicative additive linear logic (MALL), as the outcome of two simple observations.
So, if we want to eliminate the non-constructive proofs without destroying the symmetry of the sequent calculus, as is done in intuitionistic logic, we can try to eliminate the contraction and weakening rules instead. In doing so, we are left with two different versions of each connective: an additive version and a multiplicative version of conjunction and of disjunction. These different versions of the same connective are now no longer equivalent. These new connectives are & (“with”, additive and), \(\oplus\) (“plus”, additive or), \(\otimes\) (“tensor”, multiplicative and) and \(\lpar\) (“par”, multiplicative or).
This duplication of the connectives actually leads to a much clearer understanding of the conflict between classical and intuitionistic logic. For example, the law of excluded middle (\(A\) or not-\(A\)) is considered valid in the classical world and absurd in the intuitionistic one. But in linear logic, this law has two readings: the additive version \((A \oplus \neg A)\) is not provable and corresponds to the intuitionistic reading of disjunction; the multiplicative version \((A \lpar \neg A)\) is trivially provable and simply corresponds to the tautology \((A\) implies \(A)\) that is perfectly acceptable in intuitionistic logic too.
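For instance, using the two-sided sequent calculus presented later in this entry, the multiplicative version has the following short derivation (a sketch using only the init, negation-right, and par-right rules):
\[
\frac{\dfrac{\dfrac{}{A \vdash A}\ \textit{init}}{\vdash A, \neg A}\ \neg\mathrm{R}}{\vdash A \lpar \neg A}\ \lpar\mathrm{R}
\]
No such derivation exists for the additive version, since the \(\oplus\)-rules force a choice between \(A\) and \(\neg A\).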
Also, the disjunction property, essential in constructivism, is easily established for the additive disjunction.
We find then inside this richer logic a way to represent both the needs of intuitionism and the elegance of classical logic: negation is involutive, sequents are symmetric, and connectives are inter-definable. Contrast these properties with those of intuitionistic logic, where negation is not involutive, sequents are not symmetric, and the connectives \(\neg, \wedge, \vee, \Rightarrow\) are not inter-definable.
Notice that once one has eliminated the contraction and weakening rules, formulas no longer behave as truth values. Indeed, if we compose a proof of \(A \Rightarrow B\) and a proof of \(A\) without these structural rules, we consume them to get a proof of \(B\): that is, \(A \Rightarrow B\) and \(A\) are no longer available after the composition. Linear logic formulas behave more like resources that can only be used in restricted ways.
To recover the full expressive power of intuitionistic and classical logic, it is then necessary to add to the MALL fragment two dual modalities, which are usually called exponentials in the linear logic literature. In particular, the “of-course” exponential \(\bang\) permits contraction and weakening to be applied to the formula \(\bang B\) in the left-hand sequent context while the “why-not” exponential \(\quest\) permits contraction and weakening to be applied to the formula \(\quest B\) in the right-hand sequent context. This leads to full propositional linear logic and to a very nice connection with computer science.
Notice that besides MALL, there are two other widely used fragments of Linear Logic: Multiplicative Linear Logic (MLL), which is MALL without the additive connectives; and Multiplicative Exponential Linear Logic (MELL), which is Linear Logic without the additive connectives.
Gentzen's sequent calculus proof system for intuitionistic logic was the result of restricting his sequent calculus for classical logic so that all sequents contained at most one formula on their right-hand side (Gentzen 1935). Such a single-conclusion restriction can also be explored in the context of linear logic. Various researchers have considered intuitionistic variants of MLL, MALL, and MELL: for example, IMLL (Lamarche 1996), IMALL (Lincoln et al. 1993), and IMELL (Dal Lago and Laurent 2008). Schellinx (1991) provided some general conservativity results of classical linear logic over the corresponding intuitionistic variants: see also (Troelstra 1992).
Prior to the introduction of linear logic in 1987, various researchers had been working on various kinds of substructural logic in which not all of Gentzen's structural rules (contraction, weakening, and exchange) are accepted. Lambek studied a sequent calculus proof system in which none of these structural rules were permitted (Lambek 1958). Other examples of such logics are relevant logic, in which weakening is not accepted (Anderson & Belnap 1975, Anderson et al. 1992, Avron 1988, Read 1988), and affine logic, in which contraction is not accepted (Grishin 1981).
The computer science exploitation of intuitionistic and constructive logic began when people saw the possibility of reading \(A \Rightarrow B\) as ‘if you give me an \(A\), I will give you a \(B\)’, which is a significant departure from the classical reading ‘\(A\) is false or \(B\) is true’.
Linear logic provides a further twist in the interpretation of the implication ‘\(A \Rightarrow B\)’: now it can be read as ‘give me as many \(A\)'s as I might need and I will give you one \(B\)’. The notion of copy, which is so central to the idea of computation, is now wired into the logic. This new viewpoint opens up new possibilities, including:
The core propositional connectives of linear logic are divided into additive and multiplicative connectives. The classical conjunction and its identity, \(\wedge\) and \(\top\), split into the additive \(\amp\) (with) and \(\top\) (top) and the multiplicative \(\otimes\) (tensor) and 1 (one). Similarly, the classical disjunction and its identity, \(\vee\) and \(\bot\), split into the additive \(\oplus\) (oplus) and 0 (zero) and the multiplicative \(\lpar\) (par) and \(\bot\) (bottom). Negation is generally treated in one of two ways in presentations of linear logic. Negation can be viewed as a primitive propositional connective with no restrictions on its occurrences within formulas. Since De Morgan dualities exist between negation and all propositional connectives, exponentials, and quantifiers, it is also possible to treat negation as a special symbol that only occurs applied to atomic formulas. Implications are also commonly introduced into linear logic via definitions: the linear implication \(B \limp C\) can be defined as \(B^{\bot} \lpar C\), while the intuitionistic implication \(B \Rightarrow C\) can be defined as \(\bang B \limp C\). The operators \(\bang\) and \(\quest\) are variously called modals or exponentials. The term “exponential” is particularly appropriate since, following the usual relationship between exponentiation, addition, and multiplication, linear logic supports the equivalences \(\bang (B \amp C) \equiv (\bang B \otimes \bang C)\) and \(\quest(B \oplus C) \equiv (\quest B \lpar \quest C)\), as well as the 0-ary versions of these equivalences, namely, \((\bang \top \equiv 1)\) and \((\quest 0 \equiv \bot)\). Here, we use the binary equivalence \(B \equiv C\) to mean that the formula \((B \limp C) \amp (C \limp B)\) is derivable in linear logic.
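For reference, the De Morgan dualities just mentioned can be displayed as follows (writing \(B^{\bot}\) for the negation of \(B\)):
\[
\begin{aligned}
(B \otimes C)^{\bot} &\equiv (B^{\bot} \lpar C^{\bot}), &\qquad (B \amp C)^{\bot} &\equiv (B^{\bot} \oplus C^{\bot}),\\
(\bang B)^{\bot} &\equiv \quest(B^{\bot}), &\qquad (\forall x. B)^{\bot} &\equiv \exists x.(B^{\bot}),
\end{aligned}
\]
together with the dualities for the units, such as \(1^{\bot} \equiv \bot\) and \(\top^{\bot} \equiv 0\).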
A two-sided sequent calculus for linear logic is presented in the figure below. Notice here that negation is treated as if it were any other logical connective: that is, its occurrences in formulas are not restricted and there are introduction rules on the left and right for negation. The left and right sides of sequents are multisets of formulas: thus, the order of formulas in these contexts does not matter but their multiplicity does matter.
Notice that the rules of weakening and contraction are available only for formulas marked with the exponential \(\bang\) on the left or \(\quest\) on the right of the sequent. The \(\quest\)R and \(\bang\)L rules are often called “dereliction” rules. The \(\quest\)L and \(\bang\)R rules are often called “promotion” rules and are similar to the sequent calculus rules used in the treatment of S4 modal logic (Ono 1998). The usual proviso for the \(\forall\)-right and \(\exists\)-left introduction rules is assumed; in particular, the variable \(y\) must not be free in the formulas of the lower sequent of those inference rules. Quantification here is assumed to be first-order. Girard presented a higher-order version of linear logic in his 1987 paper.
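For instance, in one common two-sided formulation (a sketch; here \(\bang\Gamma\) and \(\quest\Delta\) denote multisets of formulas each prefixed with \(\bang\) and \(\quest\), respectively), the dereliction and promotion rules can be written as
\[
\frac{\Gamma, B \vdash \Delta}{\Gamma, \bang B \vdash \Delta}\ \bang\mathrm{L}
\qquad
\frac{\Gamma \vdash B, \Delta}{\Gamma \vdash \quest B, \Delta}\ \quest\mathrm{R}
\qquad
\frac{\bang\Gamma \vdash B, \quest\Delta}{\bang\Gamma \vdash \bang B, \quest\Delta}\ \bang\mathrm{R}
\qquad
\frac{\bang\Gamma, B \vdash \quest\Delta}{\bang\Gamma, \quest B \vdash \quest\Delta}\ \quest\mathrm{L}
\]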
The cut rule can be eliminated and completeness is still maintained. Dually, the init rule can also be eliminated, except for occurrences of init involving atomic formulas.
An important normal form theorem for the structure of cut-free proofs was provided by Andreoli (1992). He classified a non-atomic formula as asynchronous if its top-level logical connective is \(\top\), &, \(\bot , \lpar\), \(\quest\), or \(\forall\), or synchronous if its top-level logical connective is \(0, \oplus , 1, \otimes\), \(\bang\), or \(\exists\).
When viewing proof search as a computational model, we consider how an inference rule can transform a sequent, viewed as its conclusion, into its premises. As a result, we can see formulas in a sequent as being “agents” that may act independently or in concert with others in their environment. Here, the actions of such agents are determined by reading the introduction rules for them bottom-up. If an asynchronous formula occurs on the right of a sequent, it can evolve without affecting provability and without interacting with its context, i.e., the corresponding introduction rule is invertible. For example, the agent \((B \lpar C)\) becomes (by applying the \(\lpar\)-right introduction rule) the two agents \(B\) and \(C\) (now working in parallel). Similarly, the agent \((B \amp C)\) yields (by applying the &-right introduction rule) two worlds (sequents) that are identical except that \(B\) is in one of them and \(C\) is in the other.
On the other hand, if we view a synchronous formula as an agent whose evolution is determined by reading the corresponding right-introduction rule from the bottom up, then it is possible for a provable sequent to evolve to a non-provable sequent (for example, by applying the \(\oplus\) right-introduction rule). Also, the instances of such inference rules depend on details of the context of the formula. For example, the success of the 1-right introduction rule requires that the surrounding context is empty and the success of the \(\otimes\)-right introduction rule depends on how the agent's surrounding context is divided into two contexts. Thus, the evolution of such agents depends on “synchronizing” with other parts of the context.
Now consider a one-sided sequent calculus presentation of linear logic where the only introduction rules are right-introduction rules. Given the above classification of connectives, it is possible to show that proof search can be structured into the following phases without loss of completeness. The asynchronous phase occurs if there is an asynchronous formula present in the sequent. In this phase, right-introduction rules are applied (bottom-up) in any order until there are no further asynchronous formulas. In the synchronous phase some synchronous formula is selected and becomes the “focus” of this phase: that is, right-introduction rules are applied (bottom-up) to it and to any synchronous subformula that it might generate.
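As a small one-sided illustration (a sketch), consider the sequent \(\vdash a^{\bot} \lpar (a \oplus b)\), for atoms \(a\) and \(b\): the asynchronous phase decomposes the \(\lpar\); the synchronous phase then focuses on \(a \oplus b\), chooses its left disjunct, and closes the proof with the init rule:
\[
\frac{\dfrac{\dfrac{}{\vdash a^{\bot}, a}\ \textit{init}}{\vdash a^{\bot}, a \oplus b}\ \oplus}{\vdash a^{\bot} \lpar (a \oplus b)}\ \lpar
\]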
The following figure presents the focusing proof system for linear logic. Notice that the two phases are represented by different arrows: the up-arrow denotes the asynchronous phase and the down-arrow denotes the synchronous phase. Also, sequents are divided into three zones (where the zones are separated by either a semicolon or an up or down-arrow). In particular, to the left of the up-arrow and down-arrow are the two zones. The zone written as \(\Psi\) denotes a set of formulas that can be used any number of times in the proof of that sequent. The zone written as \(\Delta\) denotes a multiset of formulas. The zone to the right of an up-arrow is also a multiset of formulas while the zone to the right of a down-arrow is a single formula. It is possible to impose an arbitrary order on the formulas to the right of the up-arrow since the introduction of asynchronous formulas can be done in any order. Atoms are given polarity and, in the figure below, \(A\) stands for positive atoms and the negation of \(A\) stands for negative atoms. Proofs built by these inference rules are called focused proofs. The completeness of focused proofs for linear logic is proved in Andreoli 1992.
Focused proof systems have also been designed for classical and intuitionistic logics (Danos et al. 1997; Laurent et al. 2005; Liang & Miller 2009).
Proofs presented using a sequent calculus contain a lot of detail that sometimes is uninteresting: consider for example how many uninterestingly different ways there are to form a proof of \(\vdash \Gamma , (A_1\lpar A_2), \ldots ,(A_{n-1}\lpar A_n)\) from a derivation of \(\vdash \Gamma , A_1, A_2 , \ldots ,A_n\). This unpleasant fact derives from the sequential nature of proofs in a sequent calculus: if we want to apply a set \(S\) of \(n\) rules to different parts of a sequent, we cannot apply them in one step, even if they do not interfere with each other! We must sequentialize them, i.e., choose a linear order on \(S\) and apply the rules in \(n\) steps, according to this order.
A natural question arises: “Is there a representation of proofs that abstracts from such uninteresting details?”. A similar question is answered positively in the case of the intuitionistic sequent calculus by means of what is known as natural deduction, which has, via the Curry-Howard correspondence (Howard 1980), a strong connection with the computational device known as the \(\lambda\)-calculus.
For linear logic, this succinct representation of proofs is given by proof nets, graph-like structures that enjoy particularly good properties for the MLL fragment of the logic. The first step towards this representation is to convert the sequent calculus system, using the involutivity of negation, into a one-sided system, where sequents are of the form \(\vdash \Gamma\). As a consequence, the number of rules is reduced since we have no left-introduction rules, but we keep the same expressive power, as provability stays the same.
To each sequent calculus proof in MLL, one can inductively associate a proof net with the same conclusions: an init rule becomes an axiom link connecting the two dual formulas; a cut rule becomes a cut link connecting dual conclusions of the two subnets built from its premises; a \(\otimes\)-rule becomes a tensor link joining one conclusion from each of the two subnets built from its premises; and a \(\lpar\)-rule becomes a par link joining two conclusions of the single subnet built from its premise.
All this can be properly formalized using hypergraphs (formulas are nodes and “links” are oriented hyperedges with hypotheses and conclusions), and we can then formally define a proof net to be a hypergraph inductively built out of a sequent calculus derivation of MLL. Notice that there are quite a lot of hypergraphs that are not proof nets.
Now if you look at the proof net built from the derivations of \(\vdash \Gamma , (A_1\lpar A_2), \ldots ,(A_{n-1}\lpar A_n)\) obtained from \(\vdash \Gamma , A_1, A_2 , \ldots ,A_n\), you will see that all trace of the order of application of the rules has disappeared. In a sense, proof nets are equivalence classes of sequent calculus derivations with respect to the order of application of rules that commute with one another.
Suppose that somebody now comes to you with a huge hypergraph built with axiom, cut, par and tensor links, claiming that it is actually a representation of a proof. How can you verify that it is actually a representation of a proof, and not just a random structure?
Performing this correctness check is a challenge that amounts to rebuilding a sequential construction history for your structure, corresponding to a derivation in sequent calculus, and it seems at first a very complex problem. The first correctness criterion for MLL proof nets, called the “long trip criterion” and presented in Girard's original paper, takes exponential time, as does the ACC (acyclic-connected) criterion of Danos and Regnier (1989) found later on. Nevertheless, there exists a much more efficient criterion, known as contractibility, due to Danos and Regnier, that has more recently been reformulated as the following elegant graph parsing criterion by Guerrini, Martini and Masini: a hypergraph is a proof net if and only if it reduces to the singleton node “net” via the following graph reduction rules

Performing this check naively can take quadratic time (each application of a rule may require an entire lookup of the graph to find the redex, and we need to perform as many steps as there are hyperlinks in the graph). Linear time algorithms have been given by Guerrini (2011) and by Murawski and Ong (2006).
Another style of correctness criterion for MLL proof nets, together with a quadratic algorithm, is given by Retoré (2003).
On proof nets, one can perform cut elimination in a particularly clean way: because of their parallel nature, cuts can be eliminated locally via two simplification rules. In the first, a cut link one of whose premises is the conclusion of an axiom link is removed by simply identifying the two remaining formulas; in the second, a cut link between the conclusion of a tensor link and the conclusion of a par link is replaced by two cut links relating the premises of those two links.
These are actually computation rules over proof nets, and the correctness criteria make it easy to verify that each such rule preserves correctness; as a consequence, the reduct of a proof net still comes from a sequent calculus proof of the same sequent.
Hence, cut elimination in MLL proof nets can be performed in lineartime and gives a simple, elegant cut-elimination result for all ofMLL.
The proof nets approach can be extended to larger subsets of linear logic, but then it is less clear how to obtain the same elegant results as for MLL: the original system proposed in Girard 1987 works for MELL, for example, by associating hypergraph constructions to the four exponential rules; in particular, the promotion rule is represented by drawing a box around the subnet it promotes, while dereliction, weakening, and contraction become links acting on the conclusions of such boxes.
While these constructions and the associated graph reductions bear a striking similarity to the \(\lambda\)-calculus with explicit substitutions, as first remarked by Di Cosmo & Kesner (1997), they are too similar to the corresponding sequent calculus rules. The parallelization effect that is so elegant for MLL does not properly carry over here, and the graph reduction rules involve boxes and are not local.
To recover a satisfactory system, many proposals have been made, starting from the one by Danos & Regnier (1995), but we want to mention here the very elegant approach by Guerrini, Martini and Masini (Guerrini et al. 2003), which neatly shows the connection between two-level proof systems for modal logic, proper proof nets for MELL, and optimal reduction in the \(\lambda\)-calculus.
A recent paper by Heijltjes and Houston (2016) has shown that there can be no satisfactory notion of proof nets for MLL if units are also allowed.
It is possible to provide a canonical treatment of additive connectives, even with first-order quantification (Heijltjes et al. 2019). Proof nets for formulas containing both multiplicative and additive connectives have various technical presentations, none of which appears canonical and satisfactory. Their treatment in proof-net-like proof systems is currently a topic of active research. In particular, see (Hughes and van Glabbeek 2005) and (Hughes and Heijltjes 2016).
Approaching the semantics of linear logic is usually done along twodifferent paths. First, there are various semantic structuresavailable that can be used to map formulas to denotations in suchstructures. That approach can be used to establish soundness andcompleteness for various fragments of linear logic. A more novelsemantic approach to linear logic involves giving semantics to proofsdirectly. We describe briefly these two approaches and provide somelinks to the literature.
One approach to attempting a sound and complete semantics for fragments of linear logic associates to a formula the set of all contexts that can be used to prove that formula. Of course, such a collection may need to be more abstract and to be given various closure properties. The phase semantics of Girard (1987) provides one such semantics: some uses of such semantics have been made in computer science to provide counterexamples and as a tool that can help establish that a given concurrent system cannot evolve into another with certain properties (Fages et al. 2001). Phase semantics has also been used to give semantic proofs of cut elimination for both first-order and higher-order versions of linear logic (Okada 1999). Similarly, Kripke-style semantics have been provided in Allwein & Dunn 1993 and Hodas & Miller 1994. Quantales, which are a certain kind of partially ordered algebraic structure, have also been used to provide semantic models for parts of linear logic early on (Yetter 1990).
In the formulas-as-types analogy which is so popular and fruitful in theoretical computer science, a logical system is put in correspondence with a typed computational device (like the typed \(\lambda\)-calculus) by associating to each proof of a formula a program having that formula as a type. For example, a proof of the tautology \(A \Rightarrow A\) corresponds to the program fun\((x:A).x:A\rightarrow A\), the identity function on objects of type \(A\). This is why, in constructive logical systems (such as intuitionistic logic and arithmetic), and in linear logic, so much importance is attached to proofs, to the point that building and studying models of proofs gets so much more attention than building and studying models of provability: we are not satisfied to know that a formula is provable, we really want to know the computational content of its proof.
Many models of linear logic proofs have been proposed; we consider that, to date, the simplest and most intuitive construction is the one based on the so-called “relational semantics” or “Kripke-style semantics”, where formulas are interpreted as multisets, one-sided sequents are interpreted as tuples of multisets, and proofs are interpreted as relations over the interpretation of sequents (Tortora de Falco 2003, Ehrhard 2012, Melliès 2018). If one wants to give a purely set-theoretic semantics, without resorting to multisets, it is possible to do so by means of coherence spaces, sets equipped with a special coherence relation, as originally shown in Girard 1987. There are interesting category-theoretical models of linear logic, such as the *-autonomous categories (Barr 1991) and hypercoherences (Ehrhard 1993).
Another approach to the semantics of proofs is given by Girard's Geometry of Interaction, which allows us to obtain a fully algebraic characterization of proofs. To each proof net, one can associate a partial permutation matrix \(\sigma\) corresponding to the cut links, and a proper matrix \(M\) corresponding to expressions built out of a certain dynamic algebra, that describe the possible paths inside the proof net. It is then possible to fully describe the proof net via the execution formula
\[\mathrm{EX}(\sigma ,M) = (1-\sigma^2) \left(\sum_i M(\sigma M)^i\right) (1-\sigma^2)\] which, in the MLL case, is an invariant of the normalization process. Some nice connections to results coming from data-flow theory have been shown in early work of Abramsky & Jagadeesan (1994).
The area of semantics that has developed around so-called game semantics deserves special attention. The strong connection between games and linear logic was pointed out quite early by A. Blass (1992). In fact, there are two different traditions of connecting logic to games. In the tradition of dialog games dating back to Lorenzen, one player attempts to prove a formula while a second player attempts to refute it. It is possible to provide MALL with such a dialog game that is completely symmetric for both the prover and the refuter (Delande et al. 2010). In another tradition, formulas are interpreted as games, logical connectives as game constructors, and proofs as strategies that describe how a player reacts to opponent moves. By imposing different restrictions on the rules of the game, one can actually provide a precise semantics (technically, a fully abstract model) for various features of actual programming languages, hence the huge interest in this area over the past years. See, for example, Abramsky & Jagadeesan 1994, Abramsky & Melliès 1999, and Hyland & Ong 2000.
For any given logic, it is useful to know whether or not there is an effective procedure to determine, for each sentence in the logic, if it is provable or not. A decidable logic—i.e., one for which there is an effective procedure for provability—is often described by its complexity class, which characterizes how difficult it is to perform the decision procedure. Extensive research work has been dedicated to the study of the complexity and decidability issues for several fragments of propositional linear logic. It is known, for example, that provability in MLL is NP-complete, that provability in MALL is PSPACE-complete, and that provability in full propositional linear logic is undecidable (although see below for recent debate on that last result).
Here, NP and PSPACE are complexity classes such that NP \(\subseteq\) PSPACE. Surprisingly for those who may forget that the novelty in linear logic lies in the way formulas are managed without the structural rules of contraction and weakening, these results stay the same even if we focus on the fragments of these logics where only the constants, and no propositional variables, are allowed (Lincoln & Winkler 1994). Indeed, it is possible to encode arbitrary formulas into constant-only formulas while preserving provability.
MELL is a surprisingly expressive logic. For example, the reachability problem for Petri nets can be encoded into MELL (Gunter & Gehlot 1989), and that problem is equivalent to the reachability problem for vector addition systems with states (VASS) (Reutenauer 1989). Furthermore, the decidability problem for MELL is equivalent to the reachability problem for branching VASS (de Groote et al. 2004), and the latter is known to have a non-elementary lower bound (Lazic and Schmitz 2015). Thus, even if MELL turns out to be decidable, its decision problem is at least TOWER-hard (Lazic and Schmitz 2015). A proof of the decidability of MELL has been given by Bimbó (2015), but a gap in that proof has been reported by Straßburger (2019).
Proofs of the undecidability of propositional linear logic were published in the early 1990s (Lincoln et al. 1992, Lincoln 1995). Kanovich (2016) showed that this undecidability result holds for a greatly reduced fragment of propositional linear logic, and a machine-checked proof of undecidability has been published in (Forster & Larchey-Wendling 2019). However, a proof of decidability of propositional linear logic has also been published by Bimbó & Dunn (2022), who also claim to have found mistakes in the above-mentioned papers.
Linear logic with the unrestricted weakening rule added (also known as linear affine logic) is decidable (Kopylov 1995) and exponential space hard (Urquhart 2000).
A good overview of complexity results surrounding linear logic can be found in Lincoln 1995.
When intuitionistic logic was first proposed early in the last century, it was presented as a challenge to the way traditional mathematicians were supposed to do business. Uses of the excluded middle and of proof-by-contradiction were considered suspect and problematic, particularly in the presence of infinity. As the concerns of intuitionistic logic were developed into constructive mathematics, new constructive approaches have arisen to topics such as topology, algebra, and analysis. Given that linear logic encompasses the dynamics of proof (algorithm) and resources, its primary impact has been not in traditional mathematics but in computer science. Before overviewing the nature of that impact, we outline the various ways in which logic more generally is used in computer science.
Logic plays different roles in the specification of computations. We can identify the following broadly different approaches and note which roles have felt influences from linear logic.
In the computation-as-model approach, computations are encoded as mathematical structures and consist of such items as nodes, transitions, and states. Logic is used externally to make statements about those structures. That is, computations are used as models for logical expressions. Intensional operators, such as the modals of temporal and dynamic logics or the triples of Hoare logic, are often employed to express propositions about the change in state. This use of logic to represent and reason about computation is probably the most broadly successful use of logic for representing computation. This role for logic has felt little influence from linear logic.
In the computation-as-deduction approach, pieces of logic's syntax (such as formulas, terms, types, and proofs) are used directly as elements of the specified computation. In this more rarefied setting, there are two rather different approaches to how computation is modeled, called the proof normalization approach and the proof search approach.
We outline below the significant impacts that linear logic has had onboth of these different settings.
The proof normalization approach views the state of a computation as a proof term and the process of computing as normalization (known alternatively as \(\beta\)-reduction or cut-elimination). Functional programming can be explained using proof normalization as its theoretical basis (Martin-Löf 1982), and this view has been used to justify the design of new functional programming languages, e.g., Abramsky 1993. Linear logic provides this approach to computational specification with new types, new declarative means for statically understanding how resources may be used in a computation, and an appealing means for formalizing the duality between a function and the environment that supplies it with arguments.
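As a small, modern illustration of such linearly typed functional programming, here is a sketch using GHC Haskell's LinearTypes extension (the module and function names are ours, chosen only for illustration): the linear arrow a %1 -> b plays a role analogous to \(A \limp B\), and definitions that would require contraction or weakening are rejected by the type checker.

```haskell
{-# LANGUAGE LinearTypes #-}
module LinearDemo where

-- 'a %1 -> b' is a linear function type: the argument must be consumed
-- exactly once in the body.
swap :: (a, b) %1 -> (b, a)
swap (x, y) = (y, x)          -- accepted: x and y are each used exactly once

-- The following definitions are rejected by the type checker:
--
-- dup :: a %1 -> (a, a)
-- dup x = (x, x)             -- contraction: x would be used twice
--
-- discard :: a %1 -> ()
-- discard _ = ()             -- weakening: x would not be used at all
```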
Another area where linear logic has been a powerful theoretical instrument is that of optimal reduction. The problem of building efficient (optimal) interpreters for the \(\lambda\)-calculus, which stayed open for quite a long time after its original definition by J.-J. Lévy, was solved for the first time in Lamping 1990, via a sophisticated sharing graph implementation involving a quite impressive number of rules. Using ideas and intuition from linear logic, many authors reconstructed Lamping's solution, simplifying it and leading to a rich theory connected to that of the Geometry of Interaction. For further reading, a good reference is Asperti & Guerrini 1998.
The refinement of intuitionistic logic provided by linear logic and the dualities of linear logic provided a setting in which one could view a function and its environment as similar entities that interact dually. For example, a function with the type \(A \limp B\) can be modeled as a process that consumes a value of type \(A\) from its environment and transforms it into a value of type \(B\). In linear logic, this implication is equivalent to its contrapositive form: the type \(B^{\bot} \limp A^{\bot}\) can lead to interpreting the same function as a process that transforms a demand for a value of type \(B\) into a demand for a value of type \(A\) (notice that this does not happen with functions of intuitionistic type since, for example, the input argument may be vacuous) (Curien 2003). The recent successes of using game semantics to model functional computation are similarly related to the dual treatment of function and environment (Abramsky and Jagadeesan 1994, Hyland & Ong 2000).
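Continuing the Haskell sketch above, and modeling the dual \(B^{\bot}\) as a linear continuation of type b %1 -> r (an encoding adopted here only for illustration), the contrapositive reading turns a demand for a \(B\) into a demand for an \(A\):

```haskell
{-# LANGUAGE LinearTypes #-}
module Contrapose where

-- A linear function from a to b, together with a demand for a b (a linear
-- continuation), yields a demand for an a; each argument is used exactly once.
contrapose :: (a %1 -> b) %1 -> (b %1 -> r) %1 -> a %1 -> r
contrapose f k a = k (f a)
```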
Finally, we mention that in the area of encoding computation as proof normalization, linear logic has been used to provide a type-based description of polytime recursive functions. For example, M. Hofmann (2003) introduced a \(\lambda\)-calculus with modal and linear types that extended the function algebra of Bellantoni & Cook 1992 to higher types. Types based on linear logic have also been used within functional programs: see Guzman & Hudak 1990 and Wadler 1991.
The proof search approach views the state of a computation as a sequent (a structured collection of formulas) and the process of computing as the process of searching for a proof of a sequent: the changes that take place in sequents capture the dynamics of computation. With this view of computation, we generally read inference rules bottom up, i.e., as a transformation of their conclusion into their premise(s). Logic programming can be explained using proof search as its theoretical basis, and linear logic provides this approach to computational specification with new combinators for building logic programs, new means to capture rich dynamics, and new declarative approaches to specifying concurrent computations. (See Miller 2004 for an overview of linear logic programming languages.)
The completeness of the focusing proof system can be used to provide a declarative explanation of part of the operational semantics of logic programming within linear logic. Consider, for example, the subset \(L\) of formulas of linear logic that are built from only the connectives \(\top\), &, \(\limp , \Rightarrow\), and \(\forall\). (Notice that if one adds \(\bot\) to this list, it is possible to encode all connectives of linear logic.) It is possible to see that cut-free proof search in \(L\) can be divided into two phases. Given a sequent \(\Gamma\); \(\Delta \vdash G\), where \(\Gamma\) is a set of formulas (which can be contracted and weakened), \(\Delta\) is a multiset of formulas (which cannot be contracted nor weakened), and \(G\) (the “goal” formula) is a single formula (all from \(L)\), proof search proceeds as follows: if \(G\) is not atomic, the right-introduction rule for its top-level connective is applied (reading the rule bottom-up); if \(G\) is atomic, a formula is selected from \(\Gamma\) or \(\Delta\) and left-introduction rules are applied to it (and to the subformulas it yields) until the atomic goal is matched, in the style of backchaining.
Formally, these various phases can be described using the following inference rules. Here, a new sequent arrow is introduced: this arrow is labeled with the formula that is the result of a left-introduction rule. Notice that the rule for left-introduction of \(\limp\) requires splitting the \(\Delta\) context into two parts \(\Delta_1,\Delta_2\) (when reading the rule bottom up). There are, of course, \(2^n\) such splittings if that context has \(n \ge 0\) distinct formulas. The syntactic variable \(A\) in these inference rules ranges over atomic formulas.
Linear logic restricted to \(L\) can be viewed as a linear logic programming language. As a consequence, it can serve as a specification language for computational systems, a role that is also occupied by, say, Petri nets, process calculi, the \(\lambda\)-calculus, etc. Given that linear logic has a proof theory and various kinds of semantics, broad avenues for reasoning about computations specified in \(L\) are provided by the meta-theory of linear logic.
Given that the sequent calculus for linear logic uses multisets of formulas, proof search can directly encode multiset rewriting. Since many computations can naturally be seen as multiset rewriting, it has been possible to make numerous connections between linear logic and Petri nets (Gunter & Gehlot 1989), process calculi (Andreoli & Pareschi 1991, Kobayashi et al. 1999, Miller 1996), and security protocols (Cervesato et al. 1999, Miller 2003).
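For example (a sketch in the style of the Petri-net encodings cited above), a transition that consumes one token from each of the places \(p\) and \(q\) and deposits one token in the place \(r\) can be written as the reusable clause \(\bang(p \otimes q \limp r)\); a marking is then a multiset of place names, and reachability questions become provability questions such as
\[
\bang(p \otimes q \limp r),\; p, p, q \;\vdash\; p \otimes r
\]
which is provable because the clause can be used once to rewrite the tokens \(p, q\) into \(r\), leaving the tokens \(p\) and \(r\).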
If one uses traditional sequent calculus presentations, the exponentials are not canonical in the following sense: if you introduce another copy of the exponentials, say \(\bang'\) and \(\quest'\), with the same rules as the original ones, there is no way to prove that \(\bang\) is equivalent to \(\bang'\), and \(\quest\) to \(\quest'\), while for the other connectives this is easily established. Various applications of non-canonical exponentials can be found in (Danos et al. 1993; Nigam & Miller 2009). Although the decidability of MELL is currently being debated (Bimbó 2015, Straßburger 2019), extending MLL with three pairs of \(\bang\) and \(\quest\) yields a logic that is undecidable (Chaudhuri 2018). Martini and Masini 1995 describe a “2-sequent” proof system in which the exponentials are canonical.
The fact that cut-elimination can make proofs in classical and intuitionistic logic grow to enormous size can be analyzed in terms of the application of the contraction rule in those proof systems. If proof systems introduce restrictions on contraction, it is possible to design new logics and proof systems for which cut-elimination has a much-reduced complexity. For example, elementary linear logic (ELL) is obtained by replacing the \(\bang\) and \(\quest\) introduction rules by a single rule introducing \(\bang\) and \(\quest\) at the same time. As a consequence, ELL can encode all and only the Kalmar elementary functions (computable in time bounded by a tower of exponentials of fixed height) (Girard 1998, Baillot 2015). Still other variations on the inference rules for the exponentials have been studied. For example, light linear logic (Girard 1998) and soft linear logic (Lafont 2004) both characterize functions computable in polynomial time: see also (Baillot & Terui 2004).
While linear logic rejects the universal application of the two structural rules of weakening and contraction, it allows the unrestricted use of the structural rule called exchange. A sequent calculus that does not universally employ the exchange rule has sequents whose left and right contexts are lists: the order of formulas within a context becomes an expressive element of the logic. In this case, the multiplicative disjunction and conjunction can become non-commutative.
One of the first logics that rejects all three structural rules of the sequent calculus was given in Lambek 1958. While this logic contains two implications, it contains neither a negation nor any exponentials. Various papers have proposed extending linear logic to include non-commutative features and, at present, no proposal seems canonical. For a sampling of non-commutative linear logics, see Yetter 1990, Abrusci 1991, Retoré 1997, Abrusci & Ruet 1999, and Guglielmi & Straßburger 2001.
While the MALL logic is an expressive and novel logic, it is also decidable and, thus, not capable of capturing the unbounded behaviors found in, say, iteration and recursion. As described above, the addition of the exponentials ! and ? enriches MALL to full linear logic and to a setting where unbounded behaviors can be captured. A second approach to extending MALL to capture unbounded behaviors involves the addition of the least and greatest fixed point operators as logical connectives directly into MALL. In order to properly characterize fixed points as being either the least or the greatest, it is necessary for the inference rules for fixed points to be “higher-order” in the sense that they involve invariants. Such an extension to MALL (also with first-order quantification and term equality) has been developed by Baelde (Baelde & Miller 2007; Baelde 2012) and has been used to provide a proof-theoretic foundation for model checking (Heath & Miller 2018).
logic: and games | logic: classical | logic: dialogical | logic: intuitionistic | logic: substructural | proof theory
We thank an anonymous reviewer at the SEP for many comments that have helped us revise this article.