In programming, the term recursion has a special meaning and is often set off against iteration. Recursion and iteration are two basic ways of defining cyclic (repetitive) computational processes. Let us compare them, taking the factorial function as an example and using Pascal as the programming language. The iterative definition is:
function fact(n:integer):integer;
   var f:integer;
begin
   f := 1;
   while n > 0 do
      begin
         f := n * f;
         n := n - 1
      end;
   fact := f
end

Here the heading tells us that fact is a function of one integer variable; n is the formal parameter. One more integer variable, f, is used in the computation. Initially it is set to 1. Then a check occurs whether n exceeds 0. As long as it does, new values of f and n are computed, and control goes back to checking the condition. This is a simple loop, with two variables changing their values in the course of the computation.
Compare it with the recursive definition of the same function:
function fact(n:integer):integer;
begin
   if n = 0 then fact := 1
   else fact := n * fact(n-1)
end

When this program is executed, the argument n is first compared with 0. If it is not 0 (and it is assumed to be non-negative), then n must be multiplied by the value of the same function fact for the smaller argument n-1. This means that the computation of the original function call must be suspended, and fact(n-1) computed by the same program. The current values of n and fact (although the latter will not actually be used) must be kept in store so that the computation of fact(n) can continue after fact(n-1) is computed. This suspension is repeated n times, until the current value of n becomes 0. Then the suspended calls are revived and completed in the reverse order, so that we finally come up with the answer. The suspended function calls in a recursive computation are said to form a stack (we put them one on top of the other, and use them in the reverse order). Thus before we actually start to multiply numbers we have to remember 2n numbers. In addition, we must organize the correct revival of the suspended function calls. It is clear that the recursive computation is more complicated, and will take more time and space in the computer, than the iterative one.
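For instance, here is a sketch of how the deferred multiplications pile up in the computation of fact(3); none of them can be performed until fact(0) is reached:

   fact(3) = 3 * fact(2)
           = 3 * (2 * fact(1))
           = 3 * (2 * (1 * fact(0)))
           = 3 * (2 * (1 * 1))
           = 6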
A Refal function is recursive if it calls itself directly or through a sequence of calls of other functions. But recursion in this sense does not necessarily mean recursion in the sense of Pascal. The difference stems from the difference in the semantics of a function call. In Pascal, when a function calls itself it suspends the current program, to return to it later. In Refal, the right side of the sentence replaces the call of the function. If a function calls itself with a different argument:
F { ... = <F ...> }

this is, in terms of Pascal, iteration, not recursion: a function call reproduces itself, and only the arguments change. When the control structure is of the kind:
F1 { ... = <F1 ...<F2 ...>...> }

or
F1 { ... = <F2 ...> <F1 ...> }

it is, again, a `Pascal iteration', not a `Pascal recursion' proper. The call of F2 is computed first, and we again have a single function call of F1. No accumulation of calls in the view field takes place.
The true `Pascal recursion' is caused by the structures of the kind:
F1 { ... = <F2 ...<F1 ...>...> }

or
F1 {... = <F1 ...> <F2 ...> }
In Refal there are no interrupted, or suspended, function calls because the operation of the Refal machine is a simple sequence of replacements. But a function call, as a whole, may wait for its turn in the view field. This is what happens with the calls of F2 : they accumulate in the view field, like the suspended calls of the Pascal function fact .
We have already had a chance to present the recursive definition of the factorial in Refal:
Fact { 0 = 1;
       s.N = <* s.N <Fact <- s.N 1>>>; }

When we compute <Fact 10>, the first 20 steps create the structure:
<* 10 <* 9 ... <* 1 <Fact 0>> ... >>

then the first sentence of Fact works, and ten more steps of multiplication produce the result. This is very much like the case of the Pascal function, but the deferred calls are those of * , not Fact .
We can redefine Fact iteratively, in complete analogy with the Pascal definition. The function that will do the job will have to contain all the variables used in the loop, namely the loop variable s.n , which is initially the same as the argument, and the accumulated product s.f , which is initially 1. The first step, therefore, will be a format transformation:
Fact { s.n = <Loop s.n 1>; }

This is nothing but the initial assignments of values to the loop variables. The loop itself is defined as follows:
Loop {
*1.Termination condition satisfied
      0 s.f = s.f;
*2.Otherwise
      s.n s.f = <Loop <- s.n 1> <* s.n s.f>>; }
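To see that no calls accumulate, here is a sketch of the successive contents of the view field during the computation of <Fact 3> with these definitions (each line shows the view field after one more step, assuming the Refal machine evaluates the leftmost innermost call first):

   <Fact 3>
   <Loop 3 1>
   <Loop <- 3 1> <* 3 1>>
   <Loop 2 <* 3 1>>
   <Loop 2 3>
   <Loop <- 2 1> <* 2 3>>
   <Loop 1 <* 2 3>>
   <Loop 1 6>
   <Loop <- 1 1> <* 1 6>>
   <Loop 0 <* 1 6>>
   <Loop 0 6>
   6

At every moment there is exactly one call of Loop and at most two pending arithmetic calls; the size of the view field does not grow with the argument.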
Function definitions like that of Loop are known as tail-recursive. With such functions, the expansion of a function call includes no more than one recursive call of the function, and if it is there, it must be the last operation in the expansion. Because of that, tail-recursive functions allow an implementation which is iterative, i.e., avoids the accumulation of deferred function calls. Translators for Pascal and other common languages do not use this possibility and translate tail-recursive function definitions in the same manner as fully recursive definitions. This is also true for the older implementations of Lisp. The latest implementations of Lisp and its variants (e.g., Scheme) take care to implement tail recursion iteratively. In Refal we have no need to take special care of tail recursion: such definitions are executed iteratively by the semantics of the language.
Iterative (tail-recursive) definitions use space sparingly, while `truly recursive' definitions may create huge intermediate structures in the view field. But recursive definitions are, as a rule, shorter and more lucid.
Let us examine the character of recursion in the very simple string-processing function which transforms every A into B :
Fab { A e.1 = B <Fab e.1>;
      s.2 e.1 = s.2 <Fab e.1>;
      = ; }
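For example, the computation of <Fab A C A> would go through the following view fields (a sketch; each line is the result of one more step):

   <Fab A C A>
   B <Fab C A>
   B C <Fab A>
   B C B <Fab >
   B C B

The symbols already produced simply stay in the view field as part of the final answer, while the single call of Fab moves to the right.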
When written and executed in Refal, this definition is iterative. Indeed, the expansion of a function call includes no more than one function call; characters accumulate in the view field as parts of the final answer, and there is no accumulation of function calls. Yet the definition itself is recursive, not iterative. If we write the exactly analogous definition in Lisp:
(define (fab x)
   (cond ((null x) nil)
         ((equal (car x) (quote a)) (cons (quote b) (fab (cdr x))))
         (T (cons (car x) (fab (cdr x)))) ))

it is clearly recursive, and not tail-recursive: the deferred calls of the constructor function cons accumulate in memory until the argument of fab becomes nil (empty). The difference stems from the way we treat the basic operations on strings and expressions, i.e., pattern-matching and substitution. In Refal these operations are part of the background cybernetic machinery, which handles expressions as physical objects. The processed characters (or huge expressions in other cases) accumulate in the right places in the view field as they come, and there is no need to defer any function calls. In this way we combine the clarity and conciseness of a recursive definition with the efficiency of an iterative definition. If we had to use a special constructor Cons to concatenate a term on the left side of an expression, as is the case in Lisp, our definition would have been `Lisp recursive':
Fab { A e.1 = <Cons B <Fab e.1>>;
      s.2 e.1 = <Cons s.2 <Fab e.1>>;
      = ; }

which causes accumulation of deferred function calls.
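With this definition the view field for <Fab A C A> would develop roughly as follows (a sketch, assuming Cons simply attaches its first argument to the front of its second):

   <Fab A C A>
   <Cons B <Fab C A>>
   <Cons B <Cons C <Fab A>>>
   <Cons B <Cons C <Cons B <Fab >>>>
   <Cons B <Cons C <Cons B >>>

and only then are the deferred calls of Cons completed, from the inside out, to produce B C B, just as the suspended calls of the Pascal function fact are completed in the reverse order.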
It is easy to rewrite our first definition of Fab in the classical tail-recursive (iterative) form:
Fab { e.1 = <Fab1 () e.1>; }

Fab1 { (e.1) A e.2 = <Fab1 (e.1 B) e.2>;
       (e.1) s.3 e.2 = <Fab1 (e.1 s.3) e.2>;
       (e.1) = e.1; }
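In this version the output is accumulated inside the parentheses rather than to the left of the call. A sketch of the view field for <Fab A C A>:

   <Fab A C A>
   <Fab1 () A C A>
   <Fab1 (B) C A>
   <Fab1 (B C) A>
   <Fab1 (B C B)>
   B C B

As with Loop, the view field contains a single function call at every step.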