Talk:Lambda calculus

Single-Taped Turing Machine Simulation?

This comment: "It is a universal model of computation that can be used to simulate any single-taped Turing machine..."

Seems odd because any single-taped Turing machine can simulate any multitape Turing machine. By transitivity, then, shouldn't lambda calculus be able to simulate any multitape Turing machine, given that it can simulate any single-taped one?

Assuming that's true (and I would appreciate an expert confirming it is just to be sure) then we can probably change the above to "It is a universal model of computation that can be used to simulate any Turing machine..."

Turner's citation

Let's talk about what is missing in the statement. --Ancheta Wis   (talk | contribs) 16:24, 11 May 2023 (UTC)[reply]

I added what I think is the quote being referred to:

This mechanism works well for Algol 60 but in a language in which functions can be returned as results, a free variable might be held onto after the function call in which it was created has returned, and will no longer be present on the stack. Landin (1964) solved this in his SECD machine.

I think the basic issue is that what the article says,

In a functional programming language where functions are first class citizens, this systematic change in variables to avoid capture of a free variable can introduce an error, when returning functions as results.

does not follow from this. It is not capture-avoiding substitution that causes issues, it is the fact that a function partially applied to an argument can outlive its containing function. It's true that the Algol 60 mechanism used capture-avoiding substitution, but every functional programming language avoids unwanted captures in some way. Mathnerd314159 (talk) 20:30, 11 May 2023 (UTC)[reply]
And Turner 2012, immediately after his Landin 1964 ref says "Landin (1964) solved this in his SECD machine. A function is represented by a closure, consisting of code for the function plus the environment for its free variables. The environment is a linked list of name-value pairs. Closures live in the heap".
So the function definition (i.e. the lambda abstraction on x) has to be augmented by its closure [x, as well as the environment y, z, ...], which Turner puts on a heap; i.e. in lambda notation the abstraction must carry its environment along, so that its free variables survive the return of the function call in which it was created. Otherwise we get an indication that the expression evaluation has not yet terminated, and machine application slows. My mental picture is the sequence of twin primes, the output getting slower and slower with each application to successive twins, as the list of divisors y, z, w, ... (the closures) grows without bound. -- Ancheta Wis   (talk | contribs) 05:18, 12 May 2023 (UTC)[reply]
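Turner's closure representation quoted just above (code for the function plus a linked list of name-value pairs for its free variables) can be sketched concretely; the encoding below is my own illustration, not from Turner 2012:

```python
# A closure, per Turner's description: code for the function plus an
# environment for its free variables; the environment is a linked
# list of (name, value) pairs.

def lookup(env, name):
    # Walk the linked list (name, value, rest) until the name is found.
    while env is not None:
        n, v, rest = env
        if n == name:
            return v
        env = rest
    raise KeyError(name)

# Closure for the body of λx. x + y, created where y = 10:
env = ("y", 10, None)
closure = (lambda env, x: x + lookup(env, "y"), env)

code, captured = closure
print(code(captured, 5))  # 15
```

Because the environment lives on the heap rather than the stack, it remains reachable after the creating call returns.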
WillNess is pointing out that there is a problem with the scope of parameters used in intertwining functions (when expressing functions in lambda notation). The problem reminds me of the slogan 'no global variables'.
There must be a way to resolve this, otherwise why use this notation? --Ancheta Wis   (talk | contribs) 22:17, 12 May 2023 (UTC)[reply]
I believe something's gone awry. Turner 2012 does not say that capture avoiding substitution introduces errors. The error he is talking about is the Funarg problem, which doesn't require capture-avoiding substitution to occur. He is saying that capture avoiding substitution is one of the basic mechanisms for correct functional programming (it allows passing functions as arguments), but is not enough (the funarg problem means you can't always return functions). The funarg problem has no chance of occurring in the lambda calculus because there is no concept of a stack or other kind of evaluation context; there is never a need to "look up" a variable while evaluating an expression. I have removed the whole statement as I believe it to be irrelevant to the section (if not incorrect or confusing) and will make similar edits to SECD machine. Howtonotwin (talk) 09:39, 8 January 2024 (UTC)[reply]
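To make the funarg discussion above concrete: the upwards funarg problem is exactly the case where a returned function holds onto a free variable of a call that has already returned. A minimal sketch in a closure-based language (make_adder is a hypothetical example, not from the sources):

```python
def make_adder(n):
    # The returned lambda has n as a free variable. In a purely
    # stack-based implementation (as in Algol 60), n would be gone
    # once make_adder returns -- the upwards funarg problem.
    # Heap-allocated closures (Landin's SECD solution) keep n alive.
    return lambda x: x + n

add5 = make_adder(5)  # make_adder's activation record is finished,
print(add5(3))        # yet n = 5 survives in the closure: prints 8
```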

"Functions that operate on functions" feels incomplete

The section "Functions that operate on functions" contains a discussion on the identity and constant functions, but fails to use them to illustrate the point of the section - functions being used as input or output. VeeIsMe (talk) 15:55, 4 November 2023 (UTC)[reply]

 Done I added function composition. The whole section is a mess; I tried to clean up a little bit. - Jochen Burghardt (talk) 09:44, 5 November 2023 (UTC)[reply]
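Function composition is indeed a natural way to illustrate functions as both input and output. A Python-flavoured sketch of the idea (the names compose, identity, and const are illustrative, not necessarily the article's):

```python
# Curried composition: compose = λf. λg. λx. f (g x)
compose = lambda f: lambda g: lambda x: f(g(x))

identity = lambda x: x            # I = λx. x
const = lambda k: lambda x: k     # K = λk. λx. k

# compose takes functions as input and returns a function:
inc = lambda x: x + 1
double = lambda x: 2 * x
inc_then_double = compose(double)(inc)
print(inc_then_double(3))         # double(inc(3)) = 8

# Composing with the identity changes nothing:
print(compose(identity)(inc)(3))  # inc(3) = 4
```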

"Intuitively this means a function's arguments are always reduced before the function itself."

That sentence is supposed to be describing applicative order. I think it is confusing and unnecessary. For starters, a beta reduction is always applied to a redex. What exactly is an "argument" here? If, in that sentence, argument refers to the right side of a redex, then that sentence is false (that interpretation would fit call by value, not applicative order). And if it refers to bound variables, then the sentence is still false, because bound variables are not redexes. I think the most charitable interpretation of the sentence is that given a redex R, before R is reduced, one must reduce every redex U, if U's right side is a bound variable of R's function. Unfortunately, despite that new sentence's verbosity, it is not a complete definition of applicative order reduction. 81.158.160.194 (talk) 04:14, 29 March 2024 (UTC)[reply]

An argument is the right hand side of an application (this is defined in the lead and mentioned in numerous places). E.g. in M N, M is the function and N is the argument. So what this "intuition" is trying to explain is that for example with A = B = C = λx.x, applicative order will reduce A(BC) to A(C) rather than BC.
The complete definition of applicative order is "leftmost-innermost reduction", it is already pretty concise. Mathnerd314159 (talk) 05:16, 29 March 2024 (UTC)[reply]
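As an aside, Python's call-by-value evaluation also reduces arguments before the outer application, so the A(BC) example can be traced concretely. (The tag helper is hypothetical instrumentation; this mirrors the intuition only, since full applicative order also reduces under lambdas, which call-by-value does not.)

```python
calls = []

def tag(name, f):
    # Wrap f so we can observe the order in which applications fire.
    def wrapped(x):
        calls.append(name)
        return f(x)
    return wrapped

identity = lambda x: x
A, B, C = tag("A", identity), tag("B", identity), tag("C", identity)

A(B(C))       # the argument B(C) is reduced to C before A is applied
print(calls)  # ['B', 'A']
```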

Definition ill-defined

In the definition, M and N, and the notation M[x], suddenly appear for the first time in the entire article, and it is nowhere said what M and N are supposed to be, nor what M[x] is supposed to mean. I mean, there is also 'y', for which we must (but can) guess that it is "the same" as x, which is the only thing mentioned earlier. But we have to make a very bold guess that M is supposed to be a general lambda expression, in contrast to the atomic variables x and y. We also have to guess that M[x] refers to "M and all free occurrences of x", and we have to guess what free means. — MFH:Talk 13:02, 20 October 2025 (UTC)[reply]

These are all explained in the definition article. The section here is a summary; to be shorter, it must leave some details out. Would you be happier if the confusing notation was removed? Mathnerd314159 (talk) 15:53, 21 October 2025 (UTC)[reply]
At least, the notation could be unified. In the alpha conversion item, rewriting is indicated by M[x] --> M[y], while in the beta reduction item, it is indicated by t --> t[x:=s]. - Jochen Burghardt (talk) 07:09, 22 October 2025 (UTC)[reply]
alright, I just went and defined capture-avoiding substitution, and used that for the alpha and beta rules. Should be better now. Mathnerd314159 (talk) 16:12, 22 October 2025 (UTC)[reply]
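For readers following the thread, capture-avoiding substitution t[x := s] can be sketched over a tiny term encoding (the tuple representation and helper names here are my own illustration, not the article's definition):

```python
import itertools

def fresh(avoid):
    # Generate a variable name not in the set `avoid`.
    for i in itertools.count():
        v = f"v{i}"
        if v not in avoid:
            return v

def free_vars(t):
    kind = t[0]
    if kind == "var":
        return {t[1]}
    if kind == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])  # application

def subst(t, x, s):
    # Compute t[x := s], renaming bound variables to avoid capture.
    kind = t[0]
    if kind == "var":
        return s if t[1] == x else t
    if kind == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:
        return t  # x is shadowed under this lambda; nothing to do
    if y in free_vars(s):
        # Naive substitution would capture y; alpha-rename it first.
        z = fresh(free_vars(s) | free_vars(body) | {x})
        body = subst(body, y, ("var", z))
        y = z
    return ("lam", y, subst(body, x, s))

# (λy. x)[x := y] must become λz. y, NOT the identity λy. y:
print(subst(("lam", "y", ("var", "x")), "x", ("var", "y")))
```

The alpha-rename branch is the whole point: without it the free y being substituted in would be captured by the binder y.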
TBH though, capture avoiding substitution should probably be its own page, I will see if I can draft an article Mathnerd314159 (talk) 16:29, 22 October 2025 (UTC)[reply]
Imo, in an introductory article, starting with capture-avoiding substitution is hard to understand. I think one usually starts with alpha conversion (demonstrating that the particular variable names are unimportant), then beta reduction without need of capture-avoidance (demonstrating how a function is applied to an argument; the formal definition of the notion of substitution may be deferred), then gives an example where capture-avoidance is needed, then, maybe, eta conversion. In the long run, I'd suggest intermixing the examples from section "Motivation" into the Definition section.
Some asides: I'd suggest introducing a set V of variable names, and a set T of lambda terms, at the start of the formal definition. You can then write e.g. "let and let ". — In the BNF definition, parentheses should be added, along with the remark that there are rules to omit some of them. (I learned a variant where only applications are parenthesized, but that one seems to be unpopular nowadays.) Jochen Burghardt (talk) 18:15, 22 October 2025 (UTC)[reply]
well, Wikipedia is not a textbook. So it doesn't make sense to follow the sort of guided, exploratory approach you suggest. Even in the definition article, where there are the examples of substitution gone wrong, the order is definition-subtlety-example, not example-subtlety-definition. But my goal for the definition section is essentially "lambda calculus on a postcard". The issue is that capture-avoiding substitution doesn't fit on a postcard, hence another page. User:Mathnerd314159/Capture-avoiding substitution if you want to check it out, feel free to edit. But it is just AI slop, I still have to go through and fact-check it. Mathnerd314159 (talk) 18:39, 22 October 2025 (UTC)[reply]