This is the talk page for discussing improvements to the Lambda calculus article. This is not a forum for general discussion of the subject of the article.
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article is within the scope of WikiProject Philosophy, a collaborative effort to improve the coverage of content related to philosophy on Wikipedia. If you would like to support the project, please visit the project page, where you can get more details on how you can help, and where you can join the general discussion about philosophy content on Wikipedia.
This article is within the scope of WikiProject Linguistics, a collaborative effort to improve the coverage of linguistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This comment: "It is a universal model of computation that can be used to simulate any single-taped Turing machine..."
Seems odd, because any multitape Turing machine can be simulated by a single-taped one. By transitivity, then, shouldn't lambda calculus be able to simulate any multitape Turing machine, given that it can simulate any single-taped one?
Assuming that's true (and I would appreciate an expert confirming it, just to be sure), we can probably change the above to "It is a universal model of computation that can be used to simulate any Turing machine..."
I added what I think is the quote being referred to:
This mechanism works well for Algol 60 but in a language in which functions can be returned as results, a free variable might be held onto after the function call in which it was created has returned, and will no longer be present on the stack. Landin (1964) solved this in his SECD machine.
I think the basic issue is that what the article says,
In a functional programming language where functions are first class citizens, this systematic change in variables to avoid capture of a free variable can introduce an error, when returning functions as results.
does not follow from this. It is not capture-avoiding substitution that causes issues; it is the fact that a function partially applied to an argument can outlive its containing function. It's true that the Algol 60 mechanism used capture-avoiding substitution, but every functional programming language avoids unwanted captures in some way. Mathnerd314159 (talk) 20:30, 11 May 2023 (UTC)
And Turner 2012, immediately after his Landin 1964 ref, says "Landin (1964) solved this in his SECD machine. A function is represented by a closure, consisting of code for the function plus the environment for its free variables. The environment is a linked list of name-value pairs. Closures live in the heap".
So the function definition (i.e. the lambda abstraction on x, where ...) has to be augmented by its closure [x, as well as environment y, z ...], which Turner puts on a heap; i.e. in lambda notation we need ... in order not to overrun a function call, say an application. Otherwise we get an indication that the expression evaluation has not yet terminated, and machine application slows. My mental picture is the sequence of twin primes, the output getting slower and slower with each application to successive twins, as the list of divisors y, z, w, ... (the closures) grows without bound. --Ancheta Wis (talk | contribs) 05:18, 12 May 2023 (UTC)
WillNess is pointing out that there is a problem with the scope of parameters used in intertwining functions (when expressing functions in lambda notation). The problem reminds me of the slogan 'no global variables'.
I believe something's gone awry. Turner 2012 does not say that capture-avoiding substitution introduces errors. The error he is talking about is the Funarg problem, which doesn't require capture-avoiding substitution to occur. He is saying that capture-avoiding substitution is one of the basic mechanisms for correct functional programming (it allows passing functions as arguments), but is not enough (the funarg problem means you can't always return functions). The funarg problem has no chance of occurring in the lambda calculus because there is no concept of a stack or other kind of evaluation context; there is never a need to "look up" a variable while evaluating an expression. I have removed the whole statement as I believe it to be irrelevant to the section (if not incorrect or confusing) and will make similar edits to SECD machine. Howtonotwin (talk) 09:39, 8 January 2024 (UTC)
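To make Turner's "closure = code plus environment" description concrete, here is a rough Haskell sketch of my own (not from Turner 2012 or the article; the names are made up): an environment-based evaluator in which an abstraction evaluates to a closure carrying a linked list of name-value pairs for its free variables. In the example, the returned inner function keeps its free variable y alive only because it lives in the closure's environment, which is exactly the situation the funarg problem is about.

  -- Hypothetical sketch: closures as "code + environment", environment = list of name-value pairs.
  data Term
    = Var String        -- variable
    | Lam String Term   -- abstraction: λx. body
    | App Term Term     -- application

  type Env = [(String, Value)]           -- linked list of name-value pairs

  data Value = Closure String Term Env   -- code for the function plus its environment

  eval :: Env -> Term -> Value
  eval env (Var x)   = maybe (error ("unbound " ++ x)) id (lookup x env)
  eval env (Lam x b) = Closure x b env   -- capture the current environment
  eval env (App f a) =
    let Closure x b env' = eval env f
        v = eval env a
    in eval ((x, v) : env') b            -- extend the closure's own environment

  -- Upward funarg: (λy. λx. y) applied to the identity returns λx. y,
  -- whose free variable y survives only inside the returned closure.
  example :: Value
  example = eval [] (App (Lam "y" (Lam "x" (Var "y"))) (Lam "z" (Var "z")))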
"Functions that operate on functions" feels incomplete
The section "Functions that operate on functions" contains a discussion on the identity and constant functions, but fails to use them to illustrate the point of the section - functions being used as input or output.VeeIsMe (talk)15:55, 4 November 2023 (UTC)[reply]
That sentence is supposed to be describing applicative order. I think it is confusing and unnecessary. For starters, a beta reduction is always applied to a redex. What exactly is an "argument" here? If, in that sentence, argument refers to the right side of a redex, then that sentence is false (that interpretation would fit call by value, not applicative order). And if it refers to bound variables, then the sentence is still false, because bound variables are not redexes. I think the most charitable interpretation of the sentence is that given a redex R, before R is reduced, one must reduce every redex U, if U's right side is a bound variable of R's function. Unfortunately, despite that new sentence's verbosity, it is not a complete definition of applicative order reduction. 81.158.160.194 (talk) 04:14, 29 March 2024 (UTC)
An argument is the right-hand side of an application (this is defined in the lead and mentioned in numerous places). E.g. in M N, M is the function and N is the argument. So what this "intuition" is trying to explain is that, for example, with A = B = C = λx.x, applicative order will reduce A(BC) to A(C) rather than to BC (which is what reducing the outer redex first would give).
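Spelling that out with A = B = C = λx.x:

  applicative order:  A (B C) → A C → C      (reduce the argument B C first)
  normal order:       A (B C) → B C → C      (reduce the outer redex first)

Both reach C, but applicative order never passes the unreduced argument B C to A.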
In the definition, M and N and the notation M[x] suddenly appear for the first time in the entire article, and it is nowhere said what M and N are supposed to be, nor what M[x] is supposed to mean. I mean, there is also 'y', for which we must (but can) guess that it is "the same" as x, which is the only thing mentioned earlier. But we have to make a very bold guess that M is supposed to be a general lambda expression, in contrast to the atomic variables x and y. We also have to guess that M[x] refers to "M and all free occurrences of x", and we have to guess what free means. — MFH: Talk 13:02, 20 October 2025 (UTC)
These are all explained in the definition article. The section here is a summary; to be shorter, it must leave some details out. Would you be happier if the confusing notation was removed? Mathnerd314159 (talk) 15:53, 21 October 2025 (UTC)
At least, the notation could be unified. In the alpha conversion item, rewriting is indicated by M[x] --> M[y], while in the beta reduction item, it is indicated by t --> t[x:=s]. - Jochen Burghardt (talk) 07:09, 22 October 2025 (UTC)
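For comparison, both items written in the t[x:=s] style would read roughly as follows (my formulation, to be checked against the definition article):

  alpha conversion:  λx.t  →  λy.t[x:=y]    (provided y does not occur in t)
  beta reduction:    (λx.t) s  →  t[x:=s]

which at least keeps the substitution notation consistent across the two items.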
Imo, in an introductory article, starting with capture-avoiding substitution is hard to understand. Usually, one starts with alpha conversion (demonstrating that the particular variable names are unimportant), then beta reduction without the need for capture-avoidance (demonstrating how a function is applied to an argument; the formal definition of the notion of substitution may be deferred), then one gives an example where capture-avoidance is needed, then, maybe, eta conversion. In the long run, I'd suggest intermixing the examples from the section "Motivation" into the Definition section.
Some asides: I'd suggest introducing a set V of variable names and a set T of lambda terms at the start of the formal definition. You can then write e.g. "let x ∈ V" and "let M, N ∈ T". — In the BNF definition, parentheses should be added, along with the remark that there are rules to omit some of them. (I learned a variant where only applications are parenthesized, but that one seems to be unpopular nowadays.) Jochen Burghardt (talk) 18:15, 22 October 2025 (UTC)
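Concretely, I read that suggestion as amounting to something like the following fully parenthesized grammar (my sketch, not the article's current wording): given a set V of variable names, the set T of lambda terms is generated by

  t ::= x | (λx. t) | (t t)        where x ranges over V

together with the usual conventions for dropping parentheses (application associates to the left, and an abstraction body extends as far to the right as possible).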
Well, Wikipedia is not a textbook, so it doesn't make sense to follow the sort of guided, exploratory approach you suggest. Even in the definition article, where there are the examples of substitution gone wrong, the order is definition-subtlety-example, not example-subtlety-definition. But my goal for the definition section is essentially "lambda calculus on a postcard". The issue is that capture-avoiding substitution doesn't fit on a postcard, hence another page. See User:Mathnerd314159/Capture-avoiding substitution if you want to check it out; feel free to edit. But it is just AI slop, I still have to go through and fact-check it. Mathnerd314159 (talk) 18:39, 22 October 2025 (UTC)
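For what it's worth, here is roughly what would have to fit on that postcard - a quick, unreviewed Haskell sketch of t[x:=s] of my own (not taken from the draft page), just to gauge the size of the definition:

  import Data.List (delete, union)

  data Term = Var String | Lam String Term | App Term Term

  freeVars :: Term -> [String]
  freeVars (Var x)   = [x]
  freeVars (Lam x b) = delete x (freeVars b)
  freeVars (App f a) = freeVars f `union` freeVars a

  -- a name based on x that does not occur in the given list
  fresh :: [String] -> String -> String
  fresh used x = head [ c | n <- [1 ..], let c = x ++ replicate n '\'', c `notElem` used ]

  -- subst x s t computes t[x := s], renaming bound variables to avoid capture
  subst :: String -> Term -> Term -> Term
  subst x s (Var y)
    | y == x    = s
    | otherwise = Var y
  subst x s (App f a) = App (subst x s f) (subst x s a)
  subst x s (Lam y b)
    | y == x              = Lam y b          -- x is shadowed here, nothing to do
    | y `elem` freeVars s = let y' = fresh (x : freeVars s ++ freeVars b) y
                            in Lam y' (subst x s (subst y (Var y') b))   -- rename the binder to dodge capture
    | otherwise           = Lam y (subst x s b)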