Talk:Tail call

WikiProject Computer science (Rated C-class, Mid-importance)

See the discussion at Talk:Tail recursion--Blaisorblade (talk) 19:21, 9 October 2010 (UTC)

Farking verbosity[edit]

JJ Behrens explains it much better:

Tail call optimization is an interesting technique that reduces the use of the stack. If function A calls function B, and then B's last step is to call function C, rather than having a return-to address for A and a return-to address for B, you can just have C return directly to A since there's nothing else in function B that needs to be done. (talk) 08:10, 20 April 2011 (UTC)
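A minimal JavaScript sketch of the situation described (function names A, B, C are just illustrative):

```javascript
// C's result becomes B's result with no further work left in B,
// so an optimizing implementation may let C return directly to A.
function C(n) { return n + 1; }
function B(n) { return C(n * 2); } // tail call: B's last step is calling C
function A(n) { return B(n); }     // A receives C's value directly
```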

Tail recursion modulo cons vs usage of accumulators in general[edit]

Using accumulators is quite a general technique, not limited to handling cons after the call. Is there any reason nowadays to talk about the special case? I moved the text here during the merge, IIRC, but I'm not yet fully convinced. Does anybody know of any study of this folklore technique? We also have a reference to the usage of accumulators in the Example programs section, and it seems the two should be merged. Disclaimer: I've not read the original reference, but I guess the technique was later generalized. --Blaisorblade (talk) 18:38, 15 October 2010 (UTC)
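As a sketch of the general technique (my own illustrative example): threading a partial result through the recursion turns the recursive call into a tail call.

```javascript
// naive version: the addition happens after the recursive call returns,
// so each call needs a stack frame
function lengthNaive(node) {
  return node === null ? 0 : 1 + lengthNaive(node.next);
}

// accumulator version: the partial count travels with the call,
// so the recursive call is in tail position
function lengthAcc(node, acc = 0) {
  return node === null ? acc : lengthAcc(node.next, acc + 1);
}
```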

Tail recursion modulo cons inconsistency[edit]

The text says that cons adds elements to the head of a list, but the C code adds elements to the tail of the linked list. --Medinoc (talk) 15:56, 5 July 2011 (UTC)

  • I have noticed, however, that for duplicating a list there is little other choice. If we change this, we'd need a function with a completely different purpose. --Medinoc (talk) 19:35, 5 July 2011 (UTC)
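For what it's worth, the loop that a tail-recursion-modulo-cons transformation produces does append at the tail of the copy, which may explain the mismatch with the prose. A sketch in JavaScript (names are my own) of duplicating a list that way:

```javascript
// duplicate a singly linked list by keeping a pointer to the last
// copied node and appending each new cell at the tail -- the loop a
// modulo-cons transformation would produce from the recursive copy
function copyList(head) {
  if (head === null) return null;
  const first = { value: head.value, next: null };
  let last = first;
  for (let node = head.next; node !== null; node = node.next) {
    last.next = { value: node.value, next: null };
    last = last.next;
  }
  return first;
}
```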

Factorial example[edit]

I find using the factorial example to be very misleading. The difference between the tco factorial and the non-tco factorial functions is more of an algorithm change than a tco transformation. The tco version greatly simplifies the accumulator by multiplying the numbers in a different order! For a demonstration, consider creating a different function by replacing zero? with null?, * with cons, and -1 with cdr. This gives you a copylist function:

(define (copylist l)
  (if (null? l)
      '()
      (cons (car l)
            (copylist (cdr l)))))

If you try the same transformation that the factorial example uses, you might incorrectly try to implement an accumulator version of copy-list like so:

(define (copylist l) ;; Wrong! reverses the list
  (let copy ([rest l] [acc '()])
    (if (null? rest)
        acc
        (copy (cdr rest) (cons (car rest) acc)))))

We can, however, implement a version of copy-list with the copy call in tail position by borrowing ideas from CPS (continuation-passing style). We can make acc a continuation which we build up and finally execute when we've fully traversed the list.

(define (copy-list l)
  (let copy ([rest l] [acc (lambda (x) x)])
    (if (null? rest)
        (acc '())
      (copy (cdr rest) (lambda (x) (acc (cons (car rest) x)))))))

Which brings me to my suggestion. I believe a much fairer refactoring of the factorial function would be:

(define (factorial n)
  (let fact ([i n] [acc (lambda (x) x)])
    (if (zero? i)
        (acc 1)
      (fact (- i 1) (lambda (x) (acc (* i x)))))))

Here, the order in which the factors are multiplied is the same as in the non-tco version. The call to fact is in tail position, meaning the stack doesn't grow. The acc does grow, not by integer products (i.e. 1, 2, 6, 24) but by continuations wrapping other continuations; e.g. for (factorial 3), by the time (zero? i) returns true, acc will be

(lambda (w)
  ((lambda (x)
     ((lambda (y)
        ((lambda (z) z)
         (* 3 y)))
      (* 2 x)))
   (* 1 w)))
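The same continuation-accumulator idea can be sketched in JavaScript (an illustrative translation of the Scheme above, with a loop standing in for the tail call):

```javascript
// builds up a chain of closures exactly like the Scheme acc,
// then runs the chain once the countdown reaches zero
function factorial(n) {
  let i = n;
  let acc = x => x;       // identity continuation
  while (i !== 0) {
    const j = i, k = acc; // freeze current values for the closure
    acc = x => k(j * x);  // wrap: multiply in the original order
    i -= 1;
  }
  return acc(1);
}
```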

I believe the current version of factorial overstates what tail-call elimination gets you, and this less efficient yet more didactic version of factorial defined above would be better suited for this article. — Preceding unsigned comment added by Marty.neal (talkcontribs) 04:50, 19 January 2011 (UTC)

You are correct that the rewriting of the call reverses the order of evaluation (in the case of factorial, multiplying in reversed order). However, your example given above essentially just creates a stack out of lambda expressions. The point of tail recursion is to avoid a stack altogether. With lists there's not much difference, but with the factorial example (or even more so if we had used a simple summation example, like adding from 1 to n) the difference between an accumulator that emulates a stack and one that just keeps track of a temporary return value is noticeable. Antares5245 (talk) 10:43, 17 May 2011 (UTC)

As usual in IT-related articles, the definitions of terms need improvement here.
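To make that comparison concrete, the summation case can be sketched like this (illustrative JavaScript, names are my own):

```javascript
// scalar accumulator: constant space under tail-call elimination
function sumTo(n, acc = 0) {
  return n === 0 ? acc : sumTo(n - 1, acc + n);
}

// closure accumulator: each step allocates another wrapper closure,
// so the "stack" has merely moved into the heap
function sumToK(n, acc = x => x) {
  return n === 0 ? acc(0) : sumToK(n - 1, x => acc(n + x));
}
```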

What is tail-recursiveness?[edit]

The text currently says: If a tail call might lead to the same subroutine being called again later in the call chain, the subroutine is said to be tail-recursive.

Is that correct?

Example 1:

function a(i) {
    if (i == 1) {
        return i;
    } else if (i % 2) {
        return a(i/2);
    } else {
        return 3*a(i+1)+1;
    }
}
The first call to a is a tail call; the second one is not. Is a tail-recursive?

Example 2:

function a(i) {
    if (i == 0) {
        return 0;
    } else {
        return b(i);
    }
}

function b(i) {
    return a(i-1)+1;
}

The call to b is a tail call; the call to a is not. Is a tail-recursive?

What it says now: a function is tail-recursive iff at least one of its tail calls may start some call chain that calls the function itself.

What I'd expect to see: a function is tail-recursive iff it is recursive and all of the call chains by which it may call itself contain only tail calls.

Am I mistaken?

Rp (talk) 16:50, 9 January 2015 (UTC)

please check this code[edit]

I am not sure if the Prolog code:

% tail recursive modulo cons:
partition([], _, [], []).
partition([X|Xs], Pivot, [X|Rest], Bigs) :-
  X @< Pivot, !,
  partition(Xs, Pivot, Rest, Bigs).
partition([X|Xs], Pivot, Smalls, [X|Rest]) :-
  partition(Xs, Pivot, Smalls, Rest).

should be changed to

% tail recursive modulo cons:
partition([], _, [], []).
partition([X|Xs], Pivot, [X|Rest], Bigs) :-
  X @< Pivot, !,
  partition(Xs, Pivot, Rest, Bigs).
partition([X|Xs], Pivot, Smalls, [X|Rest]) :-
  Pivot @< X, !, 
  partition(Xs, Pivot, Smalls, Rest).

Please review whether this is the right code!

Tail recursion (or tail-end recursion) is particularly useful, and often easy to handle in implementations.[edit]

I was trying to figure out the difference between tail call and tail recursion and I was surprised to find out what a ringing endorsement Wikipedia had for tail recursion.

Given such subjective adjectives as "particularly useful", and "easy to handle" I am given to wonder if it is also new and improved, smells and tastes like happiness, or can help my puppy learn new tricks faster.  ;)

On a slightly serious note, this could use a clearer definition at the start (assume you are addressing a reasonably bright biology major who strayed into the Introduction to Computer Science course), and should sound less like something uttered by the US president-elect... — Preceding unsigned comment added by (talk) 08:40, 13 January 2017 (UTC)