In computer science, a tail call is a subroutine call performed as the final action of a procedure.[1] If the target of a tail call is the same subroutine, the subroutine is said to be tail recursive, which is a special case of direct recursion. Tail recursion (or tail-end recursion) is particularly useful, and is often easy to optimize in implementations.
Tail calls can be implemented without adding a new stack frame to the call stack. Most of the frame of the current procedure is no longer needed, and can be replaced by the frame of the tail call, modified as appropriate (similar to overlay for processes, but for function calls). The program can then jump to the called subroutine. Producing such code instead of a standard call sequence is called tail-call elimination or tail-call optimization. Tail-call elimination allows procedure calls in tail position to be implemented as efficiently as goto statements, thus allowing efficient structured programming. In the words of Guy L. Steele, "in general, procedure calls may be usefully thought of as GOTO statements which also pass parameters, and can be uniformly coded as [machine code] JUMP instructions."[2]
Not all programming languages require tail-call elimination. However, in functional programming languages, tail-call elimination is often guaranteed by the language standard, allowing tail recursion to use a similar amount of memory as an equivalent loop. The special case of tail-recursive calls, when a function calls itself, may be more amenable to call elimination than general tail calls. When the language semantics do not explicitly support general tail calls, a compiler can often still optimize sibling calls, or tail calls to functions which take and return the same types as the caller.[3]
When a function is called, the computer must "remember" the place it was called from, the return address, so that it can return to that location with the result once the call is complete. Typically, this information is saved on the call stack, a list of return locations in the order that the call locations were reached. In addition, compilers allocate memory for local variables of the called function and push register contents (if any are relevant) onto the stack. Typically, this is done by allocating a stack frame including saved registers, space allocated for non-register local variables, the return address, and the call parameters (unless they are passed in registers). For tail calls, there is no need to remember the caller or preserve the contents of registers; instead, tail-call elimination avoids allocating a new stack frame and makes only the minimum necessary changes to the existing stack frame before passing it on, and the tail-called function will return directly to the original caller.[4] This, however, leads to complete loss of the caller's stack frame, which is sometimes a hindrance in debugging. The tail call does not have to appear lexically after all other statements in the source code; it is only important that the calling function return immediately after the tail call, returning the tail call's result if any, since the calling function is bypassed when the optimization is performed.
For non-recursive function calls, this is usually an optimization that saves only a little time and space, since there are not that many different functions available to call. When dealing with recursive or mutually recursive functions where recursion happens through tail calls, however, the stack space and the number of returns saved can grow to be very significant, since a function can call itself, directly or indirectly, creating a new call stack frame each time. Tail-call elimination often reduces asymptotic stack space requirements from linear, or O(n), to constant, or O(1). Tail-call elimination is thus required by the standard definitions of some programming languages, such as Scheme, and languages in the ML family among others.[5][6] The Scheme language definition formalizes the intuitive notion of tail position exactly, by specifying which syntactic forms allow having results in tail context.[7] Implementations allowing an unlimited number of tail calls to be active at the same moment, thanks to tail-call elimination, can also be called 'properly tail recursive'.[5]
Besides space and execution efficiency, tail-call elimination is important in the functional programming idiom known as continuation-passing style (CPS), which would otherwise quickly run out of stack space.
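As a concrete illustration, here is a minimal C sketch of CPS (the names fact_cps, print_result, and the cont typedef are hypothetical, not from the cited sources). Every call is in tail position, so with tail-call elimination the stack stays constant; without it, each step would consume a new frame:

#include <stdio.h>

/* A continuation consumes the eventual result. */
typedef void (*cont)(int);

/* Hypothetical final continuation: just print the result. */
static void print_result(int r) { printf("%d\n", r); }

/* CPS-style factorial: every call below is in tail position, so
 * with tail-call elimination the stack does not grow. */
static void fact_cps(int n, int acc, cont k)
{
    if (n == 0)
        k(acc);                       /* tail call to the continuation */
    else
        fact_cps(n - 1, n * acc, k);  /* self tail call */
}

int main(void)
{
    fact_cps(5, 1, print_result);     /* prints 120 */
    return 0;
}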
A tail call can be located just before the syntactical end of a function:
int a(int n);
int b(int n);
int c(int n);

int foo(int data)
{
    a(data);
    return b(data);
}
Here, both a(data) and b(data) are calls, but b is the last thing the procedure executes before returning and is thus in tail position. However, not all tail calls are necessarily located at the syntactical end of a subroutine:
int bar(int data)
{
    if (a(data) > 0) {
        return b(data);
    }
    return c(data);
}
Here, both calls to b and c are in tail position, because each of them comes last in its respective branch of the if statement, even though the first one is not syntactically at the end of bar's body.
Consider this example:
int foo1(int data)
{
    return a(data) + 1;
}

int foo2(int data)
{
    int ret = a(data);
    return ret;
}

int foo3(int data)
{
    int ret = a(data);
    return (ret == 0) ? 1 : ret;
}
The call to a(data) is in tail position in foo2, but it is not in tail position in either foo1 or foo3, because control must return to the caller to allow it to inspect or modify the return value before returning it.
The following program is an example in Scheme:[8]
;; factorial : number -> number
;; to calculate the product of all positive
;; integers less than or equal to n.
(define (factorial n)
  (if (= n 0)
      1
      (* n (factorial (- n 1)))))
This is not written in a tail-recursive style, because the multiplication function ("*") is in the tail position. This can be compared to:
;; factorial : number -> number
;; to calculate the product of all positive
;; integers less than or equal to n.
(define (factorial n)
  (fact-iter 1 n))

(define (fact-iter product n)
  (if (= n 0)
      product
      (fact-iter (* product n) (- n 1))))
This program assumes applicative-order evaluation. The inner procedure fact-iter calls itself last in the control flow. This allows an interpreter or compiler to reorganize the execution which would ordinarily look like this:[8]
call factorial (4)
 call fact-iter (1 4)
  call fact-iter (4 3)
   call fact-iter (12 2)
    call fact-iter (24 1)
    return 24
   return 24
  return 24
 return 24
return 24
into the more efficient variant, in terms of both space and time:
call factorial (4)
 call fact-iter (1 4)
 replace arguments with (4 3)
 replace arguments with (12 2)
 replace arguments with (24 1)
 return 24
return 24
This reorganization saves space because no state except for the calling function's address needs to be saved, either on the stack or on the heap, and the call stack frame for fact-iter is reused for storing the intermediate results. This also means that the programmer need not worry about running out of stack or heap space for extremely deep recursions. In typical implementations, the tail-recursive variant will be substantially faster than the other variant, but only by a constant factor.
Some programmers working in functional languages will rewrite recursive code to be tail recursive so they can take advantage of this feature. This often requires addition of an "accumulator" argument (product in the above example) to the function.
Tail recursion modulo cons is a generalization of tail-recursion optimization introduced by David H. D. Warren[9] in the context of compilation of Prolog, seen as an explicitly set-once language. It was described (though not named) by Daniel P. Friedman and David S. Wise in 1974[10] as a LISP compilation technique. As the name suggests, it applies when the only operation left to perform after a recursive call is to prepend a known value in front of the list returned from it (or to perform a constant number of simple data-constructing operations, in general). This call would thus be a tail call save for ("modulo") the said cons operation. But prefixing a value at the start of a list on exit from a recursive call is the same as appending this value at the end of the growing list on entry into the recursive call, thus building the list as a side effect, as if in an implicit accumulator parameter. The following Prolog fragment illustrates the concept:
% Prolog, tail recursive modulo cons:
partition([], _, [], []).
partition([X|Xs], Pivot, [X|Rest], Bigs) :-
    X @< Pivot, !,
    partition(Xs, Pivot, Rest, Bigs).
partition([X|Xs], Pivot, Smalls, [X|Rest]) :-
    partition(Xs, Pivot, Smalls, Rest).

-- In Haskell, guarded recursion:
partition [] _ = ([], [])
partition (x:xs) p
    | x < p     = (x:a, b)
    | otherwise = (a, x:b)
    where (a, b) = partition xs p
% Prolog, with explicit unifications:
% non-tail recursive translation:
partition([], _, [], []).
partition(L, Pivot, Smalls, Bigs) :-
    L = [X|Xs],
    (  X @< Pivot
    -> partition(Xs, Pivot, Rest, Bigs),
       Smalls = [X|Rest]
    ;  partition(Xs, Pivot, Smalls, Rest),
       Bigs = [X|Rest]
    ).

% Prolog, with explicit unifications:
% tail-recursive translation:
partition([], _, [], []).
partition(L, Pivot, Smalls, Bigs) :-
    L = [X|Xs],
    (  X @< Pivot
    -> Smalls = [X|Rest],
       partition(Xs, Pivot, Rest, Bigs)
    ;  Bigs = [X|Rest],
       partition(Xs, Pivot, Smalls, Rest)
    ).
Thus in the tail-recursive translation such a call is transformed into first creating a new list node and setting its first field, and then making the tail call with the pointer to the node's rest field as argument, to be filled recursively. The same effect is achieved when the recursion is guarded under a lazily evaluated data constructor, which is automatically achieved in lazy programming languages like Haskell.
The following fragment defines a recursive function in C that duplicates a linked list (with some equivalent Scheme and Prolog code as comments, for comparison):
#include <stdlib.h>

typedef struct LinkedList {
    void *value;
    struct LinkedList *next;
} LinkedList;

LinkedList *duplicate(const LinkedList *ls)
{
    LinkedList *head = NULL;

    if (ls) {
        LinkedList *p = duplicate(ls->next);
        head = (LinkedList *)malloc(sizeof(*head));
        head->value = ls->value;
        head->next = p;
    }
    return head;
}

;; in Scheme,
(define (duplicate ls)
  (if (not (null? ls))
      (cons (car ls) (duplicate (cdr ls)))
      '()))
%% in Prolog,
duplicate([X|Xs], R) :-
    duplicate(Xs, Ys),
    R = [X|Ys].
duplicate([], []).
In this form the function is not tail recursive, because control returns to the caller after the recursive call duplicates the rest of the input list. Even if it were to allocate the head node before duplicating the rest, it would still need to plug the result of the recursive call into the next field after the call.[a] So the function is almost tail recursive. Warren's method pushes the responsibility of filling the next field into the recursive call itself, which thus becomes a tail call.[b] Using a sentinel head node to simplify the code,
void duplicate_aux(const LinkedList *ls, LinkedList *end)
{
    if (ls) {
        end->next = (LinkedList *)malloc(sizeof(*end));
        end->next->value = ls->value;
        duplicate_aux(ls->next, end->next);
    } else {
        end->next = NULL;
    }
}

LinkedList *duplicate(const LinkedList *ls)
{
    LinkedList head;

    duplicate_aux(ls, &head);
    return head.next;
}

;; in Scheme,
(define (duplicate ls)
  (let ((head (list 1)))
    (let dup ((ls ls) (end head))
      (cond ((not (null? ls))
             (set-cdr! end (list (car ls)))
             (dup (cdr ls) (cdr end)))))
    (cdr head)))
%% in Prolog,
duplicate([X|Xs], R) :-
    R = [X|Ys],
    duplicate(Xs, Ys).
duplicate([], []).
The callee now appends to the end of the growing list, rather than have the caller prepend to the beginning of the returned list. The work is now done on the way forward from the list's start, before the recursive call which then proceeds further, instead of backward from the list's end, after the recursive call has returned its result. It is thus similar to the accumulating parameter technique, turning a recursive computation into an iterative one.
Characteristically for this technique, a parent frame is created on the execution call stack, which the tail-recursive callee can reuse as its own call frame if the tail-call optimization is present.
The tail-recursive implementation can now be converted into an explicitly iterative implementation, as an accumulating loop:
LinkedList *duplicate(const LinkedList *ls)
{
    LinkedList head;
    LinkedList *end;

    end = &head;
    while (ls) {
        end->next = (LinkedList *)malloc(sizeof(*end));
        end->next->value = ls->value;
        ls = ls->next;
        end = end->next;
    }
    end->next = NULL;
    return head.next;
}

;; in Scheme,
(define (duplicate ls)
  (let ((head (list 1)))
    (do ((end head (cdr end))
         (ls ls (cdr ls)))
        ((null? ls) (cdr head))
      (set-cdr! end (list (car ls))))))
%% in Prolog,
%% N/A
In a paper delivered to the ACM conference in Seattle in 1977, Guy L. Steele summarized the debate over the GOTO and structured programming, and observed that procedure calls in the tail position of a procedure can be best treated as a direct transfer of control to the called procedure, typically eliminating unnecessary stack manipulation operations.[2] Since such "tail calls" are very common in Lisp, a language where procedure calls are ubiquitous, this form of optimization considerably reduces the cost of a procedure call compared to other implementations. Steele argued that poorly implemented procedure calls had led to an artificial perception that the GOTO was cheap compared to the procedure call. Steele further argued that "in general procedure calls may be usefully thought of as GOTO statements which also pass parameters, and can be uniformly coded as [machine code] JUMP instructions", with the machine code stack manipulation instructions "considered an optimization (rather than vice versa!)".[2] Steele cited evidence that well-optimized numerical algorithms in Lisp could execute faster than code produced by then-available commercial Fortran compilers because the cost of a procedure call in Lisp was much lower. In Scheme, a Lisp dialect developed by Steele with Gerald Jay Sussman, tail-call elimination is guaranteed to be implemented in any interpreter.[11]
Tail recursion is important to some high-level languages, especially functional and logic languages and members of the Lisp family. In these languages, tail recursion is the most commonly used way (and sometimes the only way available) of implementing iteration. The language specification of Scheme requires that tail calls are to be optimized so as not to grow the stack. Tail calls can be made explicitly in Perl, with a variant of the "goto" statement that takes a function name: goto &NAME;[12]
However, for language implementations which store function arguments and local variables on a call stack (which is the default implementation for many languages, at least on systems with a hardware stack, such as the x86), implementing generalized tail-call optimization (including mutual tail recursion) presents an issue: if the size of the callee's activation record is different from that of the caller, then additional cleanup or resizing of the stack frame may be required. For these cases, optimizing tail recursion remains trivial, but general tail-call optimization may be harder to implement efficiently.
For example, in the Java virtual machine (JVM), tail-recursive calls can be eliminated (as this reuses the existing call stack), but general tail calls cannot be (as this changes the call stack).[13][14] As a result, functional languages such as Scala that target the JVM can efficiently implement direct tail recursion, but not mutual tail recursion.
The GCC, LLVM/Clang, and Intel compiler suites perform tail-call optimization for C and other languages at higher optimization levels or when the -foptimize-sibling-calls option is passed.[15][16][17] Though the given language syntax may not explicitly support it, the compiler can make this optimization whenever it can determine that the return types for the caller and callee are equivalent, and that the argument types passed to both functions are either the same, or require the same amount of total storage space on the call stack.[18]
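As an illustrative sketch (not taken from the cited sources), the two functions below take and return the same types, so their mutual tail calls are sibling calls that such compilers can typically compile as jumps, e.g. with gcc -O2:

#include <stdio.h>

int is_odd(unsigned n);

/* Caller and callee take and return the same types, so each tail
 * call is a sibling call that the compiler may turn into a jump. */
int is_even(unsigned n)
{
    return n == 0 ? 1 : is_odd(n - 1);   /* sibling tail call */
}

int is_odd(unsigned n)
{
    return n == 0 ? 0 : is_even(n - 1);  /* sibling tail call */
}

int main(void)
{
    /* With sibling-call optimization this runs in constant stack
     * space; without it, such deep recursion may overflow the stack. */
    printf("%d\n", is_even(100000000));
    return 0;
}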
Various implementation methods are available.
Tail calls are often optimized by interpreters and compilers of functional programming and logic programming languages to more efficient forms of iteration. For example, Scheme programmers commonly express while loops as calls to procedures in tail position and rely on the Scheme compiler or interpreter to substitute the tail calls with more efficient jump instructions.[19]
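The same idea can be sketched in C (sum_to is a hypothetical example function): a loop written as a self tail call compiles, under tail-call elimination, to the same backward jump a while loop would use:

#include <stdio.h>

/* The loop "add n, n-1, ..., 1 to acc" written as a self tail call.
 * Under tail-call elimination the recursive call becomes a jump. */
static long sum_to(long n, long acc)
{
    if (n == 0)
        return acc;                  /* loop exit */
    return sum_to(n - 1, acc + n);   /* tail call = next iteration */
}

int main(void)
{
    printf("%ld\n", sum_to(10, 0));  /* prints 55 */
    return 0;
}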
For compilers generating assembly directly, tail-call elimination is easy: it suffices to replace a call opcode with a jump one, after fixing parameters on the stack. From a compiler's perspective, the first example above is initially translated into pseudo-assembly language (in fact, this is valid x86 assembly):
foo:
    call A
    call B
    ret
Tail-call elimination replaces the last two lines with a single jump instruction:
foo:
    call A
    jmp  B
After subroutine B completes, it will then return directly to the return address of foo, omitting the unnecessary ret statement.
Typically, the subroutines being called need to be supplied with parameters. The generated code thus needs to make sure that the call frame for A is properly set up before jumping to the tail-called subroutine. For instance, on platforms where the call stack does not just contain the return address, but also the parameters for the subroutine, the compiler may need to emit instructions to adjust the call stack. On such a platform, for the code:
function foo(data1, data2)
    B(data1)
    return A(data2)
(where data1 and data2 are parameters) a compiler might translate that as:[c]
foo:
    mov  reg, [sp + data1]  ; fetch data1 from the stack (sp) parameter into a scratch register
    push reg                ; put data1 on stack where B expects it
    call B                  ; B uses data1
    pop                     ; remove data1 from stack
    mov  reg, [sp + data2]  ; fetch data2 from the stack (sp) parameter into a scratch register
    push reg                ; put data2 on stack where A expects it
    call A                  ; A uses data2
    pop                     ; remove data2 from stack
    ret
A tail-call optimizer could then change the code to:
foo:
    mov  reg, [sp + data1]  ; fetch data1 from the stack (sp) parameter into a scratch register
    push reg                ; put data1 on stack where B expects it
    call B                  ; B uses data1
    pop                     ; remove data1 from stack
    mov  reg, [sp + data2]  ; fetch data2 from the stack (sp) parameter into a scratch register
    mov  [sp + data1], reg  ; put data2 where A expects it
    jmp  A                  ; A uses data2 and returns immediately to caller
This code is more efficient both in terms of execution speed and use of stack space.
Since many Scheme compilers use C as an intermediate target code, the tail recursion must be encoded in C without growing the stack, even if the C compiler does not optimize tail calls. Many implementations achieve this by using a device known as a trampoline, a piece of code that repeatedly calls functions. All functions are entered via the trampoline. When a function has to tail-call another, instead of calling it directly and then returning the result, it returns the address of the function to be called and the call parameters back to the trampoline (from which it was called itself), and the trampoline takes care of calling this function next with the specified parameters. This ensures that the C stack does not grow and iteration can continue indefinitely.
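The following is a minimal C sketch of a trampoline, assuming illustrative names (bounce, fact_step, trampoline). Instead of making a tail call, each step returns a description of the call to make next, and the driver loop performs it, so the C stack never grows:

#include <stdio.h>

/* Each step returns a description of the next call instead of
 * making it; next == NULL signals completion, result in acc. */
struct bounce {
    struct bounce (*next)(long n, long acc);
    long n, acc;
};

/* One "tail call" of a factorial computation, reified as data. */
static struct bounce fact_step(long n, long acc)
{
    struct bounce b;
    if (n == 0) {            /* finished: deliver the result */
        b.next = NULL;
        b.n    = 0;
        b.acc  = acc;
    } else {                 /* would tail-call fact_step(n - 1, n * acc) */
        b.next = fact_step;
        b.n    = n - 1;
        b.acc  = n * acc;
    }
    return b;
}

/* The trampoline performs the returned calls in a loop, so no
 * chain of nested C stack frames ever builds up. */
static long trampoline(struct bounce b)
{
    while (b.next)
        b = b.next(b.n, b.acc);
    return b.acc;
}

int main(void)
{
    printf("%ld\n", trampoline(fact_step(5, 1)));  /* prints 120 */
    return 0;
}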
It is possible to implement trampolines using higher-order functions in languages that support them, such as Groovy, Visual Basic .NET and C#.[20]
Using a trampoline for all function calls is rather more expensive than the normal C function call, so at least one Scheme compiler, Chicken, uses a technique first described by Henry Baker from an unpublished suggestion by Andrew Appel,[21] in which normal C calls are used but the stack size is checked before every call. When the stack reaches its maximum permitted size, objects on the stack are garbage-collected using the Cheney algorithm by moving all live data into a separate heap. Following this, the stack is unwound ("popped") and the program resumes from the state saved just before the garbage collection. Baker says "Appel's method avoids making a large number of small trampoline bounces by occasionally jumping off the Empire State Building."[21] The garbage collection ensures that mutual tail recursion can continue indefinitely. However, this approach requires that no C function call ever returns, since there is no guarantee that its caller's stack frame still exists; therefore, it involves a much more dramatic internal rewriting of the program code: continuation-passing style.
Tail recursion can be related to the while statement, an explicit iteration, for instance by transforming
procedure foo(x)
    if p(x)
        return bar(x)
    else
        return foo(baz(x))
into
procedure foo(x)
    while true
        if p(x)
            return bar(x)
        else
            x ← baz(x)
where x may be a tuple involving more than one variable: if so, care must be taken in implementing the assignment statement x ← baz(x) so that dependencies are respected. One may need to introduce auxiliary variables or use a swap construct.
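As a concrete C sketch of this transformation (gcd_rec and gcd_loop are illustrative names), the Euclidean algorithm below shows the tail-recursive form and its loop form, where the temporary t implements the careful tuple assignment:

#include <stdio.h>

/* Tail-recursive Euclidean gcd: the recursive call is in tail position. */
static unsigned gcd_rec(unsigned a, unsigned b)
{
    return b == 0 ? a : gcd_rec(b, a % b);  /* tail call */
}

/* The same computation as a while loop. The pair (a, b) plays the
 * role of the tuple x; a temporary is needed because b's new value
 * depends on the old a. */
static unsigned gcd_loop(unsigned a, unsigned b)
{
    while (b != 0) {
        unsigned t = a % b;  /* compute before overwriting a */
        a = b;
        b = t;
    }
    return a;
}

int main(void)
{
    printf("%u %u\n", gcd_rec(252, 105), gcd_loop(252, 105));  /* 21 21 */
    return 0;
}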
More generally,
procedure foo(x)
    if p(x)
        return bar(x)
    else if q(x)
        return baz(x)
    ...
    else if r(x)
        return foo(qux(x))
    ...
    else
        return foo(quux(x))
can be transformed into
procedure foo(x)
    while true
        if p(x)
            return bar(x)
        else if q(x)
            return baz(x)
        ...
        else if r(x)
            x ← qux(x)
        ...
        else
            x ← quux(x)
For instance, this Julia program gives a non-tail-recursive definition of the factorial:
function factorial(n)
    if n == 0
        return 1
    else
        return n * factorial(n - 1)
    end
end
Indeed, n * factorial(n - 1) wraps the call to factorial. But it can be transformed into a tail-recursive definition by adding an argument a called an accumulator.[8]
This Julia program gives a tail-recursive definition of the factorial:
function factorial(n::Integer, a::Integer)
    if n == 0
        return a
    else
        return factorial(n - 1, n * a)
    end
end

function factorial(n::Integer)
    return factorial(n, 1)
end
This Julia program gives an iterative definition fact_iter of the factorial:
function fact_iter(n::Integer, a::Integer)
    while n > 0
        a = n * a
        n = n - 1
    end
    return a
end

function factorial(n::Integer)
    return fact_iter(n, one(n))
end
Some languages support tail calls explicitly:

Clojure: recur special form.[22]
Kotlin: tailrec modifier for functions[30]
Perl: goto &NAME;[32]
R: tailcall() function introduced in R 4.4.0[39]
Scala: @tailrec annotation, which makes it a compilation error if the function is not tail recursive[43]
Tcl: tailcall command[47]

Notes:

[a] Like this:

if (ls) {
    head = (LinkedList *)malloc(sizeof(*head));
    head->value = ls->value;
    head->next = duplicate(ls->next);
}

[b] Like this:

if (ls) {
    head = (LinkedList *)malloc(sizeof(*head));
    head->value = ls->value;
    duplicate(ls->next, &(head->next));
}

[c] The call instruction first pushes the current code location onto the stack and then performs an unconditional jump to the code location indicated by the label. The ret instruction first pops a code location off the stack, then performs an unconditional jump to the retrieved code location.