The way that procedure call and return is implemented on traditional Von Neumann architecture devices is by way of “stack frames.” In this model, a region of memory is reserved to be a given thread’s “call stack.” When a procedure is about to be called, the currently executing procedure pushes any parameters to pass to the callee onto the call stack along with room for the callee’s return value and the “return address,” i.e. the location in memory where execution should resume after the callee completes its own execution. The caller then jumps to the starting address of the callee.
When the callee first begins to execute, it looks at the top-most stack frame for its parameters. After it has completed its calculation, it writes its return value (if any) to the location reserved for that purpose in its stack frame and jumps to the “return address” that was pushed by the caller. That return address is simply the address of the next instruction in memory after the jump to the callee’s starting address. Depending on the details of how a particular program is implemented, either the caller or the callee will “clean up” the stack by popping the callee’s frame before the caller’s execution resumes. If the callee needs to call any procedures of its own before it returns to the caller, it uses the same sequence of operations, creating and destroying additional stack frames in a Last-In / First-Out (LIFO) manner. Each time one procedure stores some “calling context” in this way for use by another procedure, that is an instance of recursion.
Important: Many programmers think that the term recursion applies only to the special case of “self-recursion,” where a procedure invokes itself. To understand the terminology and underlying concepts in this chapter, it is important to understand that recursion actually just means “procedure call” whether or not the caller and callee are the same procedure.
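For instance, in the following short sketch (the procedures double and quadruple are our own illustration, not taken from the text), quadruple never invokes itself, yet each call from quadruple to double is an instance of recursion in the sense used in this chapter:

```scheme
;; quadruple calls double twice; neither procedure calls itself,
;; yet every call/return pair here is "recursion" in this chapter's
;; sense: calling context is saved for use by another procedure.
(define (double n) (* 2 n))
(define (quadruple n) (double (double n)))

(quadruple 5) ; => 20
```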
So long as there is enough memory allocated to a given thread’s stack to accommodate the maximum recursion depth during that thread’s execution, everything proceeds as expected. However, since memory is a finite resource, there is always the possibility of a given thread running out of stack space when running any given algorithm, even without regard to “infinite recursion” bugs and the like.
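To make that danger concrete, here is a sketch (our own example, using the hypothetical name factorial-growing) of a procedure whose recursive call is not the last thing it does: the pending multiplication forces each activation to keep its stack frame alive until the callee returns, so a sufficiently large n can exhaust the stack.

```scheme
;; The recursive call is wrapped in (* n ...), so each activation
;; must keep its frame alive until the callee returns.  For deep
;; enough input (implementation-dependent), this overflows the stack.
(define (factorial-growing n)
  (if (< n 1)
      1
      (* n (factorial-growing (- n 1)))))

(factorial-growing 5) ; => 120
```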
Note, however, that there is an important special case where recursion can proceed without the need to “grow the stack.” That is where the last thing that one procedure does is call another, such that the value that will be returned by the caller is simply whatever is returned by the callee. The term for that particular scenario is “tail recursion.” Consider the following two mutually-recursive functions:
(define (fn1 n)
  (when (> n 0)
    (fn2 n)))

(define (fn2 n)
  (fn1 (- n 1)))
It does not matter what non-negative value for n you pass to either procedure, since the call to fn2 within fn1 and the call to fn1 within fn2 both appear in “tail position.” In Scheme, neither call will grow the stack. Instead, the Scheme run time system will simply overwrite the caller’s stack frame with the callee’s, so the program will complete without danger of stack overflow.

Exercise: But does this mean that fn1 and fn2 are immune from other defects related to recursion depth?
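As a quick check of the claim about tail position (our own test, repeating the definitions above for completeness and assuming a Scheme that implements proper tail calls), a mutual call depth in the millions completes without overflowing the stack:

```scheme
(define (fn1 n)
  (when (> n 0)
    (fn2 n)))

(define (fn2 n)
  (fn1 (- n 1)))

;; One million mutual tail calls; in a conforming Scheme this runs
;; in constant stack space and simply returns an unspecified value.
(fn1 1000000)
```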
This feature can be exploited to use recursion in many cases where otherwise you would be forced to use some special form to support iteration. For example, here is the “traditional” implementation of the factorial function in Scheme:
(define (factorial n)
  (letrec ((loop (lambda (i f)
                   (if (< i 1)
                       f
                       (loop (- i 1) (* f i))))))
    (loop n 1)))
The preceding will work for any value of n where there is sufficient memory to represent the result as a bignum. There is no danger of stack overflow because the stack never grows.
(factorial 100)
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
This is such an important idiom in Scheme that there is a special “named let” syntax to support it. The preceding could have been written as:
(define (factorial n)
  (let loop ((i n) (f 1))
    (if (< i 1)
        f
        (loop (- i 1) (* f i)))))
Most Scheme programmers would consider either of the preceding two implementations of factorial to be better style than the equivalent logic implemented using an explicit iteration special form.
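For comparison, here is a sketch of the same logic written with Scheme’s standard do special form (the name factorial-do is our own); many Scheme programmers find the named-let version above easier to read:

```scheme
;; factorial via the standard `do` special form: i counts down from n
;; while f accumulates the product; when (< i 1), the result is f.
(define (factorial-do n)
  (do ((i n (- i 1))
       (f 1 (* f i)))
      ((< i 1) f)))

(factorial-do 5) ; => 120
```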
Note that using function application in place of explicit looping constructs is just one aspect of the “functional programming” style enabled by Scheme’s requirement to implement tail recursion without growing the stack. Modern compilers for many programming languages employ an important technique known as Continuation Passing Style (CPS) in the flow of control for the code they emit. With Scheme’s proper support of tail recursion, CPS can be used explicitly by the programmer, with or without the use of Scheme’s “first class” continuations. In fact, every use of tail recursion in the style demonstrated by the preceding implementation of factorial is actually a special, self-recursive form of CPS. In particular, the entry point named loop is passed as its own continuation each time it invokes itself, i.e. so long as i is not less than 1. The two “mutually recursive functions” shown near the beginning of this section give a hint of the much more sophisticated kinds of flow-of-control that can be achieved using CPS. This is such an important idiom that Scheme provides the ability to make use of the continuation at any point in any calculation, not just continuations that were passed in explicitly as in the examples shown in this section. See call-with-current-continuation for more information.
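To make the CPS idea concrete, here is a sketch (our own illustration, using the hypothetical name factorial-cps) of factorial written in explicit continuation-passing style. Every call is a tail call, and the “rest of the computation” is passed along as the procedure k:

```scheme
;; factorial in explicit CPS: k is the continuation, i.e. the rest of
;; the computation.  Every call is a tail call, so the stack never
;; grows; the pending multiplications live in the chain of closures
;; built up in the heap rather than in stack frames.
(define (factorial-cps n k)
  (if (< n 1)
      (k 1)
      (factorial-cps (- n 1)
                     (lambda (r) (k (* n r))))))

(factorial-cps 5 (lambda (x) x)) ; => 120
```

Passing the identity procedure as the initial continuation recovers the ordinary result; passing some other procedure instead determines what happens to the result next, which is exactly the flexibility that full first-class continuations generalize.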