Combinatory logic

From Wikipedia, the free encyclopedia
Logical formalism using combinators instead of variables
Not to be confused with combinational logic, a topic in digital electronics.

Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel[1] and Haskell Curry,[2] and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions, and to remove any mention of variables, particularly in predicate logic. A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments.

In mathematics


Combinatory logic was originally intended as a 'pre-logic' that would clarify the role of quantified variables in logic, essentially by eliminating them. Another way of eliminating quantified variables is Quine's predicate functor logic. While the expressive power of combinatory logic typically exceeds that of first-order logic, the expressive power of predicate functor logic is identical to that of first-order logic (Quine 1960, 1966, 1976).

The original inventor of combinatory logic, Moses Schönfinkel, published nothing on combinatory logic after his original 1924 paper. Haskell Curry rediscovered the combinators while working as an instructor at Princeton University in late 1927.[3] In the late 1930s, Alonzo Church and his students at Princeton invented a rival formalism for functional abstraction, the lambda calculus, which proved more popular than combinatory logic. The upshot of these historical contingencies was that until theoretical computer science began taking an interest in combinatory logic in the 1960s and 1970s, nearly all work on the subject was by Haskell Curry and his students, or by Robert Feys in Belgium. Curry and Feys (1958), and Curry et al. (1972) survey the early history of combinatory logic. For a more modern treatment of combinatory logic and the lambda calculus together, see the book by Barendregt,[4] which reviews the models Dana Scott devised for combinatory logic in the 1960s and 1970s.

In computing


In computer science, combinatory logic is used as a simplified model of computation, used in computability theory and proof theory. Despite its simplicity, combinatory logic captures many essential features of computation.

Combinatory logic can be viewed as a variant of the lambda calculus, in which lambda expressions (representing functional abstraction) are replaced by a limited set of combinators, primitive functions without free variables. It is easy to transform lambda expressions into combinator expressions, and combinator reduction is much simpler than lambda reduction. Hence combinatory logic has been used to model some non-strict functional programming languages and hardware. The purest form of this view is the programming language Unlambda, whose sole primitives are the S and K combinators augmented with character input/output. Although not a practical programming language, Unlambda is of some theoretical interest.

Combinatory logic can be given a variety of interpretations. Many early papers by Curry showed how to translate axiom sets for conventional logic into combinatory logic equations.[5] Dana Scott in the 1960s and 1970s showed how to marry model theory and combinatory logic.

Summary of lambda calculus

Main article: Lambda calculus

Lambda calculus is concerned with objects called lambda-terms, which can be represented by the following three forms of strings:

v
λv.E1
(E1 E2)

where v is a variable name drawn from a predefined infinite set of variable names, and E1 and E2 are lambda-terms.

Terms of the form λv.E1 are called abstractions. The variable v is called the formal parameter of the abstraction, and E1 is the body of the abstraction. The term λv.E1 represents the function which, applied to an argument, binds the formal parameter v to the argument and then computes the resulting value of E1; that is, it returns E1, with every occurrence of v replaced by the argument.

Terms of the form (E1 E2) are called applications. Applications model function invocation or execution: the function represented by E1 is to be invoked, with E2 as its argument, and the result is computed. If E1 (sometimes called the applicand) is an abstraction, the term may be reduced: E2, the argument, may be substituted into the body of E1 in place of the formal parameter of E1, and the result is a new lambda term which is equivalent to the old one. If a lambda term contains no subterms of the form ((λv.E1) E2) then it cannot be reduced, and is said to be in normal form.

The expression E[v := a] represents the result of taking the term E and replacing all free occurrences of v in it with a. Thus we write

((λv.E) a) ⇒ E[v := a]

By convention, we take (a b c) as shorthand for ((a b) c) (i.e., application is left-associative).
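
To make this syntax concrete, the following Haskell sketch models the three term forms and the substitution E[v := a]. The type and names (Lam, subst, betaRoot) are illustrative assumptions, not from the article, and the substitution is deliberately naive: it does not rename bound variables, so it is only safe when the argument contains no free variables that could be captured (the capture problem mentioned later in this article).

  -- Lambda-terms: variables, abstractions, applications.
  data Lam = V String          -- v
           | Abs String Lam    -- λv.E
           | App Lam Lam       -- (E1 E2)
    deriving Show

  -- subst e v a ≈ E[v := a]: replace free occurrences of v in e by a.
  -- Naive: binders are not renamed, so variable capture is possible.
  subst :: Lam -> String -> Lam -> Lam
  subst (V x) v a
    | x == v    = a
    | otherwise = V x
  subst (Abs x body) v a
    | x == v    = Abs x body               -- v is shadowed here; leave the body alone
    | otherwise = Abs x (subst body v a)
  subst (App e1 e2) v a = App (subst e1 v a) (subst e2 v a)

  -- One beta step at the root: ((λv.E) a) ⇒ E[v := a].
  betaRoot :: Lam -> Lam
  betaRoot (App (Abs v e) a) = subst e v a
  betaRoot t                 = t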

The motivation for this definition of reduction is that it captures the essential behavior of all mathematical functions. For example, consider the function that computes the square of a number. We might write

The square of x is x*x

(Using "*" to indicate multiplication.) Here x is the formal parameter of the function. To evaluate the square for a particular argument, say 3, we insert it into the definition in place of the formal parameter:

The square of 3 is 3*3

To evaluate the resulting expression 3*3, we would have to resort to our knowledge of multiplication and the number 3. Since any computation is simply a composition of the evaluation of suitable functions on suitable primitive arguments, this simple substitution principle suffices to capture the essential mechanism of computation. Moreover, in lambda calculus, notions such as '3' and '*' can be represented without any need for externally defined primitive operators or constants. It is possible to identify terms in lambda calculus which, when suitably interpreted, behave like the number 3 and like the multiplication operator (see Church encoding).
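
As a small illustration of that remark, here is a Haskell sketch of Church numerals, written with ordinary functions rather than raw lambda-terms; the names three, churchMult and toInt are assumptions for the example, not part of the article.

  -- A Church numeral applies its first argument n times to its second.
  type Church a = (a -> a) -> a -> a

  three :: Church a
  three f x = f (f (f x))            -- "3" = apply f three times

  churchMult :: Church a -> Church a -> Church a
  churchMult m n f = m (n f)         -- multiplication composes the iterations

  -- Convert back to an ordinary Int to inspect a result.
  toInt :: Church Int -> Int
  toInt n = n (+ 1) 0

  -- toInt (churchMult three three) == 9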

Lambda calculus is known to be computationally equivalent in power to many other plausible models for computation (including Turing machines); that is, any calculation that can be accomplished in any of these other models can be expressed in lambda calculus, and vice versa. According to the Church–Turing thesis, both models can express any possible computation.

It is perhaps surprising that lambda-calculus can represent any conceivable computation using only the simple notions of function abstraction and application based on simple textual substitution of terms for variables. But even more remarkable is that abstraction is not even required. Combinatory logic is a model of computation equivalent to lambda calculus, but without abstraction. The advantage of this is that evaluating expressions in lambda calculus is quite complicated because the semantics of substitution must be specified with great care to avoid variable capture problems. In contrast, evaluating expressions in combinatory logic is much simpler, because there is no notion of substitution.

Combinatory calculi


Since abstraction is the only way to manufacture functions in the lambda calculus, something must replace it in the combinatory calculus. Instead of abstraction, combinatory calculus provides a limited set of primitive functions out of which other functions may be built.

Combinatory terms


A combinatory term has one of the following forms:

Syntax    Name                Description
x         Variable            A character or string representing a combinatory term.
P         Primitive function  One of the combinator symbols I, K, S.
(M N)     Application         Applying a function to an argument; M and N are combinatory terms.

The primitive functions are combinators, or functions that, when seen as lambda terms, contain no free variables.

To shorten the notations, a general convention is that (E1 E2 E3 ... En), or even E1 E2 E3 ... En, denotes the term (...((E1 E2) E3) ... En). This is the same general convention (left-associativity) as for multiple application in lambda calculus.
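
This grammar has a direct Haskell rendering; the following sketch is illustrative, and the constructor names are my own rather than any standard library's.

  -- Combinatory terms: variables, the primitive combinators, and application.
  data CL = Var String     -- x
          | S | K | I      -- primitive combinators
          | App CL CL      -- (M N)
    deriving (Eq, Show)

  -- The left-associativity convention: E1 E2 E3 ... En builds (...((E1 E2) E3) ... En).
  apps :: CL -> [CL] -> CL
  apps = foldl App

  -- Example: apps S [K, K, Var "x"] represents the term (S K K x).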

Reduction in combinatory logic


In combinatory logic, each primitive combinator comes with a reduction rule of the form

(P x1 ... xn) = E

where E is a term mentioning only variables from the set {x1 ... xn}. It is in this way that primitive combinators behave as functions.

Examples of combinators


The simplest example of a combinator is I, the identity combinator, defined by

(I x) = x

for all terms x. Another simple combinator is K, which manufactures constant functions: (K x) is the function which, for any argument, returns x, so we say

((K x) y) = x

for all terms x and y. Or, following the convention for multiple application,

(K x y) = x

A third combinator is S, which is a generalized version of application:

(S x y z) = (x z (y z))

S applies x to y after first substituting z into each of them. Or put another way, x is applied to y inside the environment z.

Given S and K, I itself is unnecessary, since it can be built from the other two:

((S K K) x)
= (S K K x)
= (K x (K x))
= x

for any term x. Note that although ((S K K) x) = (I x) for any x, (S K K) itself is not equal to I. We say the terms are extensionally equal. Extensional equality captures the mathematical notion of the equality of functions: that two functions are equal if they always produce the same results for the same arguments. In contrast, the terms themselves, together with the reduction of primitive combinators, capture the notion of intensional equality of functions: that two functions are equal only if they have identical implementations up to the expansion of primitive combinators. There are many ways to implement an identity function; (S K K) and I are among these ways. (S K S) is yet another. We will use the word equivalent to indicate extensional equality, reserving equal for identical combinatorial terms.
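
The extensional behaviour of S, K and the derived I can be checked with a shallow Haskell embedding in which each combinator is just an ordinary function. This is only a sketch: it demonstrates equality of results, not the intensional identity of terms discussed above.

  -- S and K as ordinary functions, following (S x y z) = (x z (y z)) and (K x y) = x.
  s :: (a -> b -> c) -> (a -> b) -> a -> c
  s x y z = x z (y z)

  k :: a -> b -> a
  k x _ = x

  -- I defined from S and K; extensionally the identity: i v == v for every v.
  i :: a -> a
  i = s k k

  -- ghci> i 42
  -- 42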

A more interesting combinator is the fixed point combinator or Y combinator, which can be used to implement recursion.

Completeness of the S-K basis


S and K can be composed to produce combinators that are extensionally equal to any lambda term, and therefore, by Church's thesis, to any computable function whatsoever. The proof is to present a transformation, T[ ], which converts an arbitrary lambda term into an equivalent combinator.

T[ ] may be defined as follows:

  1. T[x] ⇒ x
  2. T[(E1 E2)] ⇒ (T[E1] T[E2])
  3. T[λx.E] ⇒ (K T[E]) (if x does not occur free in E)
  4. T[λx.x] ⇒ I
  5. T[λx.λy.E] ⇒ T[λx.T[λy.E]] (if x occurs free in E)
  6. T[λx.(E1 E2)] ⇒ (S T[λx.E1] T[λx.E2]) (if x occurs free in E1 or E2)

Note that T[ ] as given is not a well-typed mathematical function, but rather a term rewriter: although it eventually yields a combinator, the transformation may generate intermediary expressions that are neither lambda terms nor combinators, via rule (5).

This process is also known as abstraction elimination. This definition is exhaustive: any lambda expression will be subject to exactly one of these rules (see Summary of lambda calculus above).
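
The rules translate directly into a recursive function. The following Haskell sketch is one possible rendering (the Term type and every name in it are assumptions for the example); because rule 5 re-applies T[ ] to a partially transformed term, the representation allows lambda-abstractions and combinators to be mixed, exactly as the note above says.

  -- Mixed syntax: lambda-terms that may already contain combinators.
  data Term = Var String
            | Lam String Term        -- λx.E
            | App Term Term          -- (E1 E2)
            | S | K | I              -- primitive combinators
    deriving (Eq, Show)

  -- Free-variable test, needed for the side conditions of rules 3, 5 and 6.
  freeIn :: String -> Term -> Bool
  freeIn x (Var y)   = x == y
  freeIn x (Lam y e) = x /= y && freeIn x e
  freeIn x (App a b) = freeIn x a || freeIn x b
  freeIn _ _         = False

  -- The T[ ] transformation, rule by rule.
  t :: Term -> Term
  t (Var x)                    = Var x                     -- rule 1
  t (App e1 e2)                = App (t e1) (t e2)         -- rule 2
  t (Lam x e)
    | not (x `freeIn` e)       = App K (t e)               -- rule 3
  t (Lam x (Var y)) | x == y   = I                         -- rule 4
  t (Lam x (Lam y e))
    | x `freeIn` e             = t (Lam x (t (Lam y e)))   -- rule 5
  t (Lam x (App e1 e2))
    | x `freeIn` e1 || x `freeIn` e2
        = App (App S (t (Lam x e1))) (t (Lam x e2))        -- rule 6
  t e = e                                                  -- combinators pass through

  -- t (Lam "x" (Lam "y" (App (Var "y") (Var "x")))) reproduces the combinator
  -- derived step by step in the next subsection.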

It is related to the process of bracket abstraction, which takes an expression E built from variables and application and produces a combinator expression [x]E in which the variable x is not free, such that [x]E x = E holds. A very simple algorithm for bracket abstraction is defined by induction on the structure of expressions as follows:[6]

  1. [x]y := K y
  2. [x]x := I
  3. [x](E1 E2) := S ([x]E1) ([x]E2)

Bracket abstraction induces a translation from lambda terms to combinator expressions, by interpreting lambda-abstractions using the bracket abstraction algorithm.

Conversion of a lambda term to an equivalent combinatorial term


For example, we will convert the lambda term λx.λy.(y x) to a combinatorial term:

T[λx.λy.(y x)]
= T[λx.T[λy.(y x)]] (by 5)
= T[λx.(S T[λy.y] T[λy.x])] (by 6)
= T[λx.(S I T[λy.x])] (by 4)
= T[λx.(S I (K T[x]))] (by 3)
= T[λx.(S I (K x))] (by 1)
= (S T[λx.(S I)] T[λx.(K x)]) (by 6)
= (S (K (S I)) T[λx.(K x)]) (by 3)
= (S (K (S I)) (S T[λx.K] T[λx.x])) (by 6)
= (S (K (S I)) (S (K K) T[λx.x])) (by 3)
= (S (K (S I)) (S (K K) I)) (by 4)

If we apply this combinatorial term to any two terms x and y (by feeding them in a queue-like fashion into the combinator 'from the right'), it reduces as follows:

(S (K (S I)) (S (K K) I) x y)
= (K (S I) x (S (K K) I x) y)
= (S I (S (K K) I x) y)
= (I y (S (K K) I x y))
= (y (S (K K) I x y))
= (y (K K x (I x) y))
= (y (K (I x) y))
= (y (I x))
= (y x)

The combinatory representation, (S (K (S I)) (S (K K) I)), is much longer than the representation as a lambda term, λx.λy.(y x). This is typical. In general, the T[ ] construction may expand a lambda term of length n to a combinatorial term of length Θ(n³).[7]
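
The reduction trace above can also be replayed mechanically. The following Haskell sketch re-declares the combinatory term type from the earlier sketch, so that the fragment stands alone, and implements leftmost-outermost reduction of the three primitive rules; it is illustrative only.

  -- Combinatory terms (re-declared so this fragment is self-contained).
  data CL = Var String | S | K | I | App CL CL
    deriving (Eq, Show)

  -- One leftmost-outermost reduction step, if any redex exists.
  step :: CL -> Maybe CL
  step (App I x)                 = Just x
  step (App (App K x) _)         = Just x
  step (App (App (App S x) y) z) = Just (App (App x z) (App y z))
  step (App f a) = case step f of
    Just f' -> Just (App f' a)
    Nothing -> App f <$> step a
  step _ = Nothing

  -- Reduce to normal form (may not terminate for arbitrary terms).
  normalize :: CL -> CL
  normalize t = maybe t normalize (step t)

  -- The translated term applied to x and y:
  example :: CL
  example = foldl App (App (App S (App K (App S I))) (App (App S (App K K)) I))
                      [Var "x", Var "y"]

  -- normalize example == App (Var "y") (Var "x"), i.e. (y x), as in the trace above.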

Explanation of the T[ ] transformation


The T[ ] transformation is motivated by a desire to eliminate abstraction. Two special cases, rules 3 and 4, are trivial: λx.x is clearly equivalent to I, and λx.E is clearly equivalent to (K T[E]) if x does not appear free in E.

The first two rules are also simple: Variables convert to themselves, and applications, which are allowed in combinatory terms, are converted to combinators simply by converting the applicand and the argument to combinators.

It is rules 5 and 6 that are of interest. Rule 5 simply says that to convert a complex abstraction to a combinator, we must first convert its body to a combinator, and then eliminate the abstraction. Rule 6 actually eliminates the abstraction.

λx.(E1 E2) is a function which takes an argument, say a, and substitutes it into the lambda term (E1 E2) in place of x, yielding (E1 E2)[x := a]. But substituting a into (E1 E2) in place of x is just the same as substituting it into both E1 and E2, so

(E1 E2)[x := a] = (E1[x := a] E2[x := a])
(λx.(E1 E2) a) = ((λx.E1 a) (λx.E2 a))
= (S λx.E1 λx.E2 a)
= ((S λx.E1 λx.E2) a)

By extensional equality,

λx.(E1 E2) = (S λx.E1 λx.E2)

Therefore, to find a combinator equivalent to λx.(E1 E2), it is sufficient to find a combinator equivalent to (S λx.E1 λx.E2), and

(S T[λx.E1] T[λx.E2])

evidently fits the bill. E1 and E2 each contain strictly fewer applications than (E1 E2), so the recursion must terminate in a lambda term with no applications at all: either a variable, or a term of the form λx.E.

Simplifications of the transformation


η-reduction


The combinators generated by the T[ ] transformation can be made smaller if we take into account the η-reduction rule:

T[λx.(E x)] = T[E] (if x is not free in E)

λx.(E x) is the function which takes an argument, x, and applies the function E to it; this is extensionally equal to the function E itself. It is therefore sufficient to convert E to combinatorial form.
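
One way to exploit this rule in code is to η-contract the lambda term before running the T[ ] rules, since T[λx.(E x)] = T[E] says the two translations agree. The sketch below reuses the hypothetical Term type from the earlier transformation sketch (repeated here so the fragment is self-contained); it is an assumption-laden illustration, not a standard implementation.

  -- Same illustrative Term type as in the T[ ] sketch above.
  data Term = Var String | Lam String Term | App Term Term | S | K | I
    deriving (Eq, Show)

  freeIn :: String -> Term -> Bool
  freeIn x (Var y)   = x == y
  freeIn x (Lam y e) = x /= y && freeIn x e
  freeIn x (App a b) = freeIn x a || freeIn x b
  freeIn _ _         = False

  -- Collapse λx.(E x) to E wherever x is not free in E, then recurse.
  etaContract :: Term -> Term
  etaContract (Lam x (App e (Var y)))
    | x == y && not (x `freeIn` e) = etaContract e
  etaContract (Lam x e) = Lam x (etaContract e)
  etaContract (App a b) = App (etaContract a) (etaContract b)
  etaContract t         = t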

Taking this simplification into account, the example above becomes:

  T[λx.λy.(y x)]
= ...
= (S (K (S I)) T[λx.(K x)])
= (S (K (S I)) K) (by η-reduction)

This combinator is equivalent to the earlier, longer one:

  (S (K (S I)) K x y)
= (K (S I) x (K x) y)
= (S I (K x) y)
= (I y (K x y))
= (y (K x y))
= (y x)

Similarly, the original version of the T[ ] transformation transformed the identity function λf.λx.(f x) into (S (S (K S) (S (K K) I)) (K I)). With the η-reduction rule, λf.λx.(f x) is transformed into I.

One-point basis


There are one-point bases: single combinators from which one can compose a combinator extensionally equal to any lambda term. A simple example of such a basis is {X} where:

X ≡ λx.((x S) K)

It is not difficult to verify that:

X (X (X X)) =β K and
X (X (X (X X))) =β S.

Since {K, S} is a basis, it follows that {X} is a basis too. The Iota programming language uses X as its sole combinator.

Another simple example of a one-point basis is:

X' ≡ λx.(x K S K) with
(X' X') X' =β K and
X' (X' X') =β S

The simplest known one-point basis is a slight modification of S:

S' ≡ λx.λy.λz.((x z) (y (λw.z))) with
S' (S' S') (S' (S' S') S' S' S' S' S') =β K and
S' (S' (S' S' (S' S' (S' S')) (S' (S' (S' S' (S' S')))))) S' S' =β S.

In fact, there exist infinitely many such bases.[8]

Combinators B, C


In addition to S and K, Schönfinkel (1924) included two combinators which are now called B and C, with the following reductions:

(C f g x) = ((f x) g)
(B f g x) = (f (g x))

He also explains how they in turn can be expressed using only S and K:

B = (S (K S) K)
C = (S (S (K (S (K S) K)) S) (K K))

These combinators are extremely useful when translating predicate logic or lambda calculus into combinator expressions. They were also used by Curry, and much later by David Turner, whose name has been associated with their computational use. Using them, we can extend the rules for the transformation as follows (a code sketch of the extended rules follows the list):

  1. T[x] ⇒ x
  2. T[(E1 E2)] ⇒ (T[E1] T[E2])
  3. T[λx.E] ⇒ (K T[E]) (if x is not free in E)
  4. T[λx.x] ⇒ I
  5. T[λx.λy.E] ⇒ T[λx.T[λy.E]] (if x is free in E)
  6. T[λx.(E1 E2)] ⇒ (S T[λx.E1] T[λx.E2]) (if x is free in both E1 and E2)
  7. T[λx.(E1 E2)] ⇒ (C T[λx.E1] T[E2]) (if x is free in E1 but not E2)
  8. T[λx.(E1 E2)] ⇒ (B T[E1] T[λx.E2]) (if x is free in E2 but not E1)
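
A Haskell sketch of the extended transformation, in the same illustrative style as the earlier one (the Term type and names are assumptions; B and C are simply two extra constants):

  -- Mixed syntax with the two extra combinators.
  data Term = Var String | Lam String Term | App Term Term
            | S | K | I | B | C
    deriving (Eq, Show)

  freeIn :: String -> Term -> Bool
  freeIn x (Var y)   = x == y
  freeIn x (Lam y e) = x /= y && freeIn x e
  freeIn x (App a b) = freeIn x a || freeIn x b
  freeIn _ _         = False

  t :: Term -> Term
  t (Var x)                    = Var x                          -- rule 1
  t (App e1 e2)                = App (t e1) (t e2)              -- rule 2
  t (Lam x e)
    | not (x `freeIn` e)       = App K (t e)                    -- rule 3
  t (Lam x (Var y)) | x == y   = I                              -- rule 4
  t (Lam x (Lam y e))
    | x `freeIn` e             = t (Lam x (t (Lam y e)))        -- rule 5
  t (Lam x (App e1 e2))
    | x `freeIn` e1 && x `freeIn` e2
        = App (App S (t (Lam x e1))) (t (Lam x e2))             -- rule 6
    | x `freeIn` e1
        = App (App C (t (Lam x e1))) (t e2)                     -- rule 7
    | otherwise
        = App (App B (t e1)) (t (Lam x e2))                     -- rule 8
  t e = e                                                       -- combinators pass through

  -- Applied to λx.λy.(y x) this version yields (B (C I) I), which is
  -- extensionally equal to the (C I) obtained with the η-rule above.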

Using B and C combinators, the transformation of λx.λy.(y x) looks like this:

  T[λx.λy.(y x)]
= T[λx.T[λy.(y x)]]
= T[λx.(C T[λy.y] x)] (by rule 7)
= T[λx.(C I x)]
= (C I) (by η-reduction)
= C* (traditional canonical notation: X* = X I)
= I' (traditional canonical notation: X' = C X)

And indeed, (C I x y) does reduce to (y x):

  (C I x y)
= (I y x)
= (y x)

The motivation here is that B and C are limited versions of S. Whereas S takes a value and substitutes it into both the applicand and its argument before performing the application, C performs the substitution only in the applicand, and B only in the argument.

The modern names for the combinators come from Haskell Curry's doctoral thesis of 1930 (see B, C, K, W System). In Schönfinkel's original paper, what we now call S, K, I, B and C were called S, C, I, Z, and T respectively.

The reduction in combinator size that results from the new transformation rules can also be achieved without introducing B and C, as demonstrated in Section 3.2 of Tromp (2008).

CLK versus CLI calculus

A distinction must be made between the CLK as described in this article and the CLI calculus. The distinction corresponds to that between the λK and the λI calculus. Unlike the λK calculus, the λI calculus restricts abstractions to:

λx.E where x has at least one free occurrence in E.

As a consequence, combinator K is not present in the λI calculus nor in the CLI calculus. The constants of CLI are: I, B, C and S, which form a basis from which all CLI terms can be composed (modulo equality). Every λI term can be converted into an equal CLI combinator according to rules similar to those presented above for the conversion of λK terms into CLK combinators. See chapter 9 in Barendregt (1984).

Reverse conversion


The conversion L[ ] from combinatorial terms to lambda terms is trivial:

L[I] = λx.x
L[K] = λx.λy.x
L[C] = λx.λy.λz.(x z y)
L[B] = λx.λy.λz.(x (y z))
L[S] = λx.λy.λz.(x z (y z))
L[(E1 E2)] = (L[E1] L[E2])

Note, however, that this transformation is not the inverse transformation of any of the versions of T[ ] that we have seen.
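
These equations are a direct structural recursion. A Haskell sketch, with hypothetical types that keep combinatory and lambda syntax separate, looks like this:

  -- Combinatory terms and lambda-terms as two separate illustrative types.
  data CL  = CVar String | S | K | I | B | C | CApp CL CL
    deriving Show
  data Lam = Var String | Abs String Lam | App Lam Lam
    deriving Show

  -- L[ ]: replace each primitive combinator by its defining lambda-term.
  l :: CL -> Lam
  l (CVar x)   = Var x
  l I          = Abs "x" (Var "x")
  l K          = Abs "x" (Abs "y" (Var "x"))
  l C          = Abs "x" (Abs "y" (Abs "z" (App (App (Var "x") (Var "z")) (Var "y"))))
  l B          = Abs "x" (Abs "y" (Abs "z" (App (Var "x") (App (Var "y") (Var "z")))))
  l S          = Abs "x" (Abs "y" (Abs "z" (App (App (Var "x") (Var "z"))
                                                (App (Var "y") (Var "z")))))
  l (CApp a b) = App (l a) (l b)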

Undecidability of combinatorial calculus


A normal form is any combinatory term in which the primitive combinators that occur, if any, are not applied to enough arguments to be simplified. It is undecidable whether a general combinatory term has a normal form, whether two combinatory terms are equivalent, and so on. This can be shown in a similar way as for the corresponding problems for lambda terms.

Undefinability by predicates


The undecidable problems above (equivalence, existence of a normal form, etc.) take as input syntactic representations of terms under a suitable encoding (e.g., Church encoding). One may also consider a toy computation model where we "compute" properties of terms by means of combinators applied directly to the terms themselves as arguments, rather than to their syntactic representations. More precisely, let a predicate be a combinator that, when applied, returns either T or F (where T and F represent the conventional Church encodings of true and false, λx.λy.x and λx.λy.y, transformed into combinatory logic; the combinatory versions have T = K and F = (K I)). A predicate N is nontrivial if there are two arguments A and B such that N A = T and N B = F. A combinator N is complete if N M has a normal form for every argument M. An analogue of Rice's theorem for this toy model then says that every complete predicate is trivial. The proof of this theorem is rather simple.[9]

Proof

By reductio ad absurdum. Suppose there is a complete nontrivial predicate, say N. Because N is supposed to be nontrivial there are combinators A and B such that

(N A) = T and
(N B) = F.
Define NEGATION ≡ λx.(if (N x) then B else A) ≡ λx.((N x) B A)
Define ABSURDUM ≡ (Y NEGATION)

The fixed point theorem gives: ABSURDUM = (NEGATION ABSURDUM), for

ABSURDUM ≡ (Y NEGATION) = (NEGATION (Y NEGATION)) ≡ (NEGATION ABSURDUM).

Because N is supposed to be complete, either:

  1. (N ABSURDUM) = F or
  2. (N ABSURDUM) = T
  • Case 1: F = (N ABSURDUM) = N (NEGATION ABSURDUM) = (N A) = T, a contradiction.
  • Case 2: T = (N ABSURDUM) = N (NEGATION ABSURDUM) = (N B) = F, again a contradiction.

Hence (N ABSURDUM) is neither T nor F, which contradicts the presupposition that N would be a complete nontrivial predicate. Q.E.D.

From this undefinability theorem it immediately follows that there is no complete predicate that can discriminate between terms that have a normal form and terms that do not have a normal form. It also follows that there is no complete predicate, say EQUAL, such that:

(EQUAL A B) = T if A = B and
(EQUAL A B) = F if A ≠ B.

If EQUAL existed, then for all A, λx.(EQUAL x A) would have to be a complete nontrivial predicate.

However, note that it also immediately follows from this undefinability theorem that many properties of terms that are obviously decidable are not definable by complete predicates either: e.g., there is no predicate that could tell whether the first primitive function letter occurring in a term is a K. This shows that definability by predicates is not a reasonable model of decidability.

Applications


Compilation of functional languages


David Turner used his combinators to implement the SASL programming language.

Kenneth E. Iverson used primitives based on Curry's combinators in his J programming language, a successor to APL. This enabled what Iverson called tacit programming, that is, programming in functional expressions containing no variables, along with powerful tools for working with such programs. It turns out that tacit programming is possible in any APL-like language with user-defined operators.[10]

Logic


The Curry–Howard isomorphism implies a connection between logic and programming: every proof of a theorem of intuitionistic logic corresponds to a reduction of a typed lambda term, and conversely. Moreover, theorems can be identified with function type signatures. Specifically, a typed combinatory logic corresponds to a Hilbert system in proof theory.

The K and S combinators correspond to the axioms

AK: A → (B → A),
AS: (A → (B → C)) → ((A → B) → (A → C)),

and function application corresponds to the detachment (modus ponens) rule

MP: from A and A → B, infer B.
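
Under this correspondence, the Haskell types of the K and S combinators are literally the axioms AK and AS, and function application plays the role of MP. This is a standard illustration of the isomorphism, sketched here for concreteness.

  -- The type of K is the axiom AK: A → (B → A).
  k :: a -> (b -> a)
  k x _ = x

  -- The type of S is the axiom AS: (A → (B → C)) → ((A → B) → (A → C)).
  s :: (a -> (b -> c)) -> ((a -> b) -> (a -> c))
  s f g x = f x (g x)

  -- Function application is modus ponens: from A → B and A, conclude B.
  mp :: (a -> b) -> a -> b
  mp f x = f x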

The calculus consisting of AK, AS, and MP is complete for the implicational fragment of intuitionistic logic, which can be seen as follows. Consider the set W of all deductively closed sets of formulas, ordered by inclusion. Then ⟨W, ⊆⟩ is an intuitionistic Kripke frame, and we define a model ⊩ in this frame by

X ⊩ A ⟺ A ∈ X.

This definition obeys the conditions on satisfaction of →: on one hand, if X ⊩ A → B, and Y ∈ W is such that Y ⊇ X and Y ⊩ A, then Y ⊩ B by modus ponens. On the other hand, if X ⊮ A → B, then X, A ⊬ B by the deduction theorem, thus the deductive closure of X ∪ {A} is an element Y ∈ W such that Y ⊇ X, Y ⊩ A, and Y ⊮ B.

Let A be any formula which is not provable in the calculus. Then A does not belong to the deductive closure X of the empty set, thus X ⊮ A, and A is not intuitionistically valid.

References

  1. ^ Schönfinkel 1924, the article that founded combinatory logic. English translation: Schönfinkel (1967).
  2. ^ Curry 1930.
  3. ^ Seldin 2008.
  4. ^ Barendregt 1984.
  5. ^ Hindley & Meredith 1990.
  6. ^ Turner 1979.
  7. ^ Lachowski 2018.
  8. ^ Goldberg 2004.
  9. ^ Engeler 1995.
  10. ^ Cherlin 1991.
