Bottom Up Merge Sort in Scheme

Recently I’ve begun a project to implement a number of basic algorithms in Scheme, which I’d like to eventually grow into a free (as in freedom) ebook. Having just done a Binary Search in Scheme, I thought it would be fun to give merge sort a try.

According to the mighty interwebs, merge sort is a good choice for sorting linked lists (a.k.a., Lisp lists). Unfortunately the only Lisp merge sort implementation examples I’ve been able to find on the web have been recursive, not iterative.

The implementation described here is an iterative, bottom-up merge sort, written in a functional style. (I daren’t say the functional style, lest any real Scheme wizards show up and burn me to a crisp.)

First, generate a list of random numbers

In order to have something to sort, we need a procedure that generates a list of random numbers – note that the docstring is allowed by MIT/GNU Scheme; YMMV with other Schemes.

(define (make-list-of-random-numbers list-length max)
  ;; Int Int -> List
  "Make a list of random integers less than MAX that's LIST-LENGTH long."
  (let loop ((n list-length) (result '()))
    (if (= n 0)
        result
        (loop (- n 1) (cons (random max) result)))))
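A quick call at the REPL shows the shape of the result (your output will differ, since the contents are random):

```scheme
(make-list-of-random-numbers 5 100)
;; => a five-element list of integers in the range [0, 100)
```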

Then, write a merge procedure

This implementation of the merge procedure is a straight port of the one described on the Wikipedia Merge Sort page, with one minor difference to make the sort faster 1.

An English description of the merge operation is as follows:

• If an item passed in is a bare number (or string) rather than a list, wrap it in a list and recur. (In this example we only care about sorting numbers.)
• If both lists are empty, return the result.
• If neither list is empty:
• If the first item in the first list is “less than” the first item in the second list, cons it onto the result and recur.
• Otherwise, cons the first item in the second list on the result and recur.
• If the first list still has items in it, cons the first item onto the result and recur.
• If the second list still has items in it, cons the first item onto the result and recur.
• If none of the above conditions are true, return #f. I put this here for debugging purposes while writing this code; now that the procedure is debugged, it is never reached. (Note: “debugged” just means “I haven’t found another bug yet”.)
(define (rml/merge pred l r)
  (letrec ((merge-aux
            (lambda (pred left right result)
              (cond
               ;; Bare numbers and strings get wrapped in lists.
               ((or (number? left) (string? left))
                (merge-aux pred (list left) right result))
               ((or (number? right) (string? right))
                (merge-aux pred left (list right) result))
               ((and (null? left)
                     (null? right))
                (reverse result))
               ((and (not (null? left))
                     (not (null? right)))
                (if (pred (car left)
                          (car right))
                    (merge-aux pred
                               (cdr left)
                               right
                               (cons (car left) result))
                    (merge-aux pred
                               left
                               (cdr right)
                               (cons (car right) result))))
               ((not (null? left))
                (merge-aux pred (cdr left) right (cons (car left) result)))
               ((not (null? right))
                (merge-aux pred left (cdr right) (cons (car right) result)))
               (else #f)))))
    (merge-aux pred l r '())))

We can run a few merges to get a feel for how it works. The comparison predicate we pass as the first argument will let us sort all kinds of things, but for the purposes of this example we’ll stick to numbers:

(rml/merge < '(360 388 577) '(10 811 875 995))
;Value 11: (10 360 388 577 811 875 995)

(rml/merge < '(8 173 227 463 528 817) '(10 360 388 577 811 875 995))
;Value 12: (8 10 173 227 360 388 463 528 577 811 817 875 995)

(rml/merge <
'(218 348 486 520 639 662 764 766 886 957 961 964)
'(8 10 173 227 360 388 463 528 577 811 817 875 995))
;Value 14: (8 10 173 218 227 348 360 388 463 486 520 528 577 639 662 764 766 811 817 875 886 957 961 964 995)

Finally, do a bottom up iterative merge sort

It took me a while to figure out how to do the iterative merge sort in a Schemely fashion. As usual, it wasn’t until I took the time to model the procedure on paper that I got somewhere. Here’s what I wrote in my notebook:

;;  XS                   |      RESULT
;;---------------------------------------------

'(5 1 2 9 7 8 4 3 6)            '()
'(2 9 7 8 4 3 6)            '((1 5))
'(7 8 4 3 6)            '((2 9) (1 5))
'(4 3 6)            '((7 8) (2 9) (1 5))
'(6)            '((3 4) (7 8) (2 9) (1 5))
'()            '((6) (3 4) (7 8) (2 9) (1 5))

;; XS is null, and RESULT is not of length 1 (meaning it isn't sorted
;; yet), so we recur, swapping the two:

'((6) (3 4) (7 8) (2 9) (1 5))  '()
'((7 8) (2 9) (1 5))  '((3 4 6))
'((1 5))  '((2 7 8 9) (3 4 6))
'()  '((1 5) (2 7 8 9) (3 4 6))

;; Once more XS is null, but RESULT is still not sorted, so we swap
;; and recur again

'((1 5) (2 7 8 9) (3 4 6))      '()
'(3 4 6)      '((1 2 5 7 8 9))
'()      '((3 4 6) (1 2 5 7 8 9))

;; Same story: swap and recur!

'((3 4 6) (1 2 5 7 8 9))        '()
'()        '((1 2 3 4 5 6 7 8 9))

;; Finally, we reach our base case: XS is null, and RESULT is of
;; length 1, meaning that it contains a sorted list

'(1 2 3 4 5 6 7 8 9)

This was a really fun little problem to think about and visualize. It just so happens that it fell out in a functional style; usually I don’t mind doing a bit of state-bashing, especially if it’s procedure-local. Here’s the code that does the sort shown above:

(define (rml/merge-sort xs pred)
  (let loop ((xs xs)
             (result '()))
    (cond ((and (null? xs)
                (null? result))
           '())
          ((and (null? xs)
                (null? (cdr result)))
           (car result))
          ((null? xs)
           (loop result
                 xs))
          ((null? (cdr xs))
           (loop (cdr xs)
                 (cons (car xs) result)))
          (else
           (loop (cddr xs)
                 (cons (rml/merge pred
                                  (first xs)
                                  (second xs))
                       result))))))
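As a quick sanity check, here’s the sort run against a small list at the REPL:

```scheme
(rml/merge-sort '(9 7 5 3 8 6 4 2) <)
;; => (2 3 4 5 6 7 8 9)
```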

That’s nice, but how does it perform?

A good test of our merge sort is to compare it to the system’s sort procedure. In the case of MIT/GNU Scheme, we’ll need to compile our code if we hope to get anywhere close to the system’s speed. If your Scheme is interpreted, you don’t have to bother of course.
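For reference, compiling at the MIT/GNU Scheme REPL looks roughly like the following. This is my sketch, and it assumes the code lives in a file named merge-sort.scm (the filename is my own choice; adjust to taste):

```scheme
(cf "merge-sort")   ; compile merge-sort.scm with the native-code compiler
(load "merge-sort") ; prefers the compiled output over the source file
```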

To make the test realistic, we’ll create three lists of random numbers: one with 20,000 items, another with 200,000, and finally a giant list of 2,000,000 random numbers. This should give us a good idea of our sort’s performance. Here’s the output of timing the first two sorts, with 20,000 and 200,000 items 2:

;;; Load compiled code

;Value: rml/insertion-sort2

;;; Define our lists

(define unsorted-20000 (make-list-of-random-numbers 20000 200000))
;Value: unsorted-20000

(define unsorted-200000 (make-list-of-random-numbers 200000 2000000))
;Value: unsorted-200000

;;; Sort the list with 20,000 items

(with-timing-output (rml/merge-sort unsorted-20000 <))
;Run time:      .03
;GC time:       0.
;Actual time:   .03

(with-timing-output (sort unsorted-20000 <))
;Run time:      .02
;GC time:       0.
;Actual time:   .021

;;; Sort the list with 200,000 items

(with-timing-output (rml/merge-sort unsorted-200000 <))
;Run time:      .23
;GC time:       0.
;Actual time:   .252

(with-timing-output (sort unsorted-200000 <))
;Run time:      .3
;GC time:       0.
;Actual time:   .3

As you can see, our sort procedure is on par with the system’s for these inputs. Now let’s turn up the heat. How about a list with 2,000,000 random numbers?

;;; Sort the list with 2,000,000 items

(define unsorted-2000000 (make-list-of-random-numbers 2000000 20000000))
;Value: unsorted-2000000

(with-timing-output (rml/merge-sort unsorted-2000000 <))
;Aborting!: out of memory
;GC #34: took:   0.80 (100%) CPU time,   0.10 (100%) real time; free: 11271137
;GC #35: took:   0.70 (100%) CPU time,   0.90  (81%) real time; free: 11271917
;GC #36: took:   0.60 (100%) CPU time,   0.90  (99%) real time; free: 11271917

(with-timing-output (sort unsorted-2000000 <))
;Run time:      2.48
;GC time:       0.
;Actual time:   2.474

No go. On a MacBook with 4GB of RAM, our merge sort runs out of memory, while the system sort procedure works just fine. It seems the wizards who implemented this Scheme system knew what they were doing after all! :-}

It should be pretty clear at this point why we’re running out of memory. In MIT/GNU Scheme, the system sort procedure uses vectors and mutation (and is no doubt highly tuned for the compiler), whereas we take a relatively brain-dead approach that uses lists and lots of cons-ing. I leave it as an exercise for the reader (or perhaps my future self) to rewrite this code so that it doesn’t run out of memory.

(Image courtesy mag3737 under Creative Commons license.)

Footnotes:

1 An earlier implementation started off the sort by “exploding” the list to be sorted so that '(1 2 3) became '((1) (2) (3)). This is convenient for testing purposes, but very expensive. It’s also unnecessary after the first round of merging. We avoid the need to explode the list altogether by teaching merge to accept numbers and listify them when they appear. We could also do the same for strings and other types as necessary.

2 For the definition of the with-timing-output macro, see here.
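In case that link isn’t handy, here’s a rough sketch of how such a macro might look. This is my guess rather than the original definition, and it leans on MIT/GNU Scheme’s with-timings procedure:

```scheme
(define-syntax with-timing-output
  (syntax-rules ()
    ((_ expr)
     (with-timings
      (lambda () expr)
      (lambda (run-time gc-time real-time)
        ;; WITH-TIMINGS hands the receiver three timings in ticks.
        (format #t "Run time:      ~A~%" (internal-time/ticks->seconds run-time))
        (format #t "GC time:       ~A~%" (internal-time/ticks->seconds gc-time))
        (format #t "Actual time:   ~A~%" (internal-time/ticks->seconds real-time)))))))
```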

Turing’s Machine: A Simple Example

I

The Turing Machine is a theoretical device first described by the British mathematician Alan Turing 1. It is a computing machine that manipulates symbols on a strip of tape. The tape is divided into cells which contain a predefined set of symbols, and is assumed to be infinite in length.

The machine can only perform a finite number of operations; in the example we’ll discuss here, it can read, write, erase, and move one cell to the left or right.

Strangely enough, this simple machine is powerful enough to compute any function that is capable of being computed (Ibid.).

Whatever computations the machine performs must be defined in terms of those operations. We’ll examine two basic operations, addition and subtraction – how they’re defined symbolically, as well as how they’re computed by the machine itself.

We begin with a strip of tape. On it is printed a representation of the two numbers we want to add. “Filled” cells are represented by the number 1. “Blank” cells are represented by 0. Here’s how we would write 4 followed by 3:

| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

The easiest way to visualize adding the two numbers is to shift the number on the right one space to the left. However, even that simple concept is too complicated for the machine to understand. Therefore we must translate our wishes into a series of finite commands which will be executed in sequence. One way of defining an addition function is to use an instruction table like the following:

STATE \ SYMBOL   1                 0
0                RIGHT/STATE 0     MARK/RIGHT/STATE 1
1                RIGHT/STATE 1     LEFT/ERASE/STOP

A good way to understand these instructions is by seeing the machine operate. Let’s step the machine through the behavior defined by the table and see what happens. (The machine is represented by the <0> hovering above the tape, where 0 is the state the machine is in according to our table):

<0>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<0>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

In the above image, our machine is in STATE 0 and therefore keeps moving to the right whenever it encounters a marked cell (as defined in the table of behavior).

<0>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

When the machine (still in STATE 0) encounters a blank cell, it writes to the cell, switches to STATE 1, and moves right.

<1>
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

In STATE 1, the machine keeps scanning right, maintaining its current state. It continues right for as long as it reads marked cells.

<1>
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

Upon encountering an unmarked cell (while still in STATE 1), its final action is to shift one cell to the left, erase, and stop.

<1>
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

We are now left with 7, the answer. Note that at no time did the machine understand the nature of what we call “addition”. It simply followed the instructions formalized in the table of behavior. It has no notion of its environment beyond the cell in front of it (It has no “memory”). It can only perform actions: read, write, erase, and move.
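The table of behavior above is small enough to simulate directly in Scheme. The following simulator is my own sketch, not part of the original discussion: the tape is a vector of 0s and 1s, and the head starts at cell 0 in STATE 0:

```scheme
(define (tm-add tape)
  (let loop ((pos 0) (state 0))
    (let ((sym (vector-ref tape pos)))
      (cond ((and (= state 0) (= sym 1)) (loop (+ pos 1) 0))  ; scan right
            ((and (= state 0) (= sym 0))                      ; mark the gap,
             (vector-set! tape pos 1)                         ; enter STATE 1
             (loop (+ pos 1) 1))
            ((and (= state 1) (= sym 1)) (loop (+ pos 1) 1))  ; scan right
            (else                                             ; blank in STATE 1:
             (vector-set! tape (- pos 1) 0)                   ; left, erase, stop
             tape)))))

(tm-add (vector 1 1 1 1 0 1 1 1 0 0 0 0 0 0 0))
;; => #(1 1 1 1 1 1 1 0 0 0 0 0 0 0 0), i.e. seven 1s: 4 + 3 = 7
```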

II

We’ve seen how to add two numbers using a simple Turing machine. Now let’s write a table of behavior for subtraction.

At this point, I challenge you to step away from the computer and do this yourself. You will learn more by attempting this than you will by reading about it here. Please do not come back until you have given it a try.

Welcome back. I assume that you’ve written a table of behavior for the subtraction operation. There are a number of tables that will work; here is one:

STATE \ SYMBOL   1                     0
0                RIGHT/STATE 0         RIGHT/STATE 1
1                RIGHT/STATE 2         RIGHT/STATE 1
2                LEFT/STATE 3          LEFT/STATE 5
3                ERASE/LEFT/STATE 4
4                ERASE/RIGHT/STATE 1   LEFT/STATE 4
5                ERASE/LEFT/STATE 6
6                ERASE/STOP            LEFT/STATE 6

Let’s trace the execution of the machine one step at a time as it subtracts 3 from 4.

In STATE 0, the machine moves to the right as it encounters filled cells. When it encounters a blank cell in STATE 0, the machine moves right and changes to STATE 1.

<0>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<0>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

In STATE 1, the machine responds to a filled cell by moving right, and changing to STATE 2. In STATE 2, it shifts left and changes to STATE 3.

<2>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<3>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

In STATE 3, if it encounters a filled cell, the machine will erase, move left, and change to STATE 4. In STATE 4, if a blank cell is read, the machine moves left and remains in STATE 4.

<3>
| 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

In STATE 4, when it encounters a filled cell, the machine erases, moves to the right, and changes to STATE 1. It then repeats (or recurs, if you will) through the steps shown above. We can see this in the next several illustrations.

<4>
| 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

All is as it was before. The machine is once again in STATE 1, poised above a filled cell. As before, it will shift right and enter STATE 2.

<1>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<2>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<3>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<3>
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

This time, however, the machine encounters a blank cell while in STATE 2. It shifts left and enters STATE 5.

<2>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<5>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

In STATE 5, the machine will erase the filled cell, shift left, and enter STATE 6. STATE 6 specifies that the machine will continue to shift left and remain in STATE 6 as long as it encounters blank cells.

<5>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<6>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

When the machine in STATE 6 encounters a filled cell, it will erase that cell and then stop. The subtraction is complete and we are left with the answer: 1.

<6>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<6>
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

III

Here is the process shown in full without any verbal explication. The movements of the machine follow a predictable pattern. Note how the behavior of the machine during the transition between STATE 2 and STATE 5 as it nears the end of the computation resembles the base case of a recursive function 2. It’s quite beautiful.

<0>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<0>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<2>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<3>
| 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<3>
| 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<2>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<3>
| 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<3>
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<4>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<1>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<2>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<5>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<5>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<6>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<6>
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

<6>
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

For a nice introduction to Turing Machines (among other things), I recommend Paul Hoffman’s book Archimedes’ Revenge.

(Image courtesy Melisande under Creative Commons license.)

Binary Search in Scheme

Intro

Just for fun, I’ve begun translating some of the algorithms from Mastering Algorithms with Perl into Scheme. My hope is that I’ll get two things out of this: a better knowledge of algorithms, and of Scheme hacking.

Binary search is one of the first algorithms listed in the book; it’s tricky to write a correct binary search, but I had the Perl code to work from. Let’s see how I did.

What’s binary search?

Binary search is a method for finding a specific item in a sorted list. Here’s how it works:

1. Take a guess that the item you want is in the middle of the current search “window” (when you start, the search window is the entire list).
2. If the item is where you guessed it would be, return the index (the location of your guess).
3. If your guess is “less than” the item you want (based on a comparison function you choose), recur, this time raising the “bottom” of the search window to the midway point.
4. If your guess is “greater than” the item you want (based on your comparison function), recur, this time lowering the “top” of the search window to the midway point.

In other words, you cut the size of the search window in half every time through the loop. This gives you a worst-case running time of about (/ (log n) (log 2)) steps. This means you can find an item in a sorted list of 20,000,000,000 (twenty billion) items in about 34 steps.
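We can check that arithmetic at the REPL (Scheme’s log is the natural logarithm, hence the division):

```scheme
(/ (log 20000000000) (log 2))
;; => roughly 34.22, so about 34 halvings of the search window
```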

Reading lines from a file

Before I could start writing a binary search, I needed a sorted list of items. I decided to work with a sorted list of words from /usr/share/dict/words, so I wrote a couple of little procedures to make a list of words from a subset of that file. (I didn’t want to read the entire large file into a list in memory.)

Note: Both format and the Lisp-inspired #!optional keyword are available in MIT Scheme; they made writing the re-matches? procedure more convenient.

• re-matches? checks if a regular expression matches a string (in this case, a line from a file).
• make-list-of-words-matching is used to loop over the lines of the words file and return a list of lines matching the provided regular expression.

Now I have the tools I need to make my word list.

(define (re-matches? re line #!optional display-matches)
  ;; Regex String . Boolean -> Boolean
  "Attempt to match RE against LINE. Print the match if DISPLAY-MATCHES is set."
  (let ((match (re-string-match re line)))
    (if match
        (if (not (default-object? display-matches))
            (begin (format #t "|~A|~%" (re-match-extract line match 0))
                   #t)
            #t)
        #f)))

(define (make-list-of-words-matching re file)
  ;; Regex String -> List
  "Given a regular expression RE, loop over FILE, gathering matches."
  (call-with-input-file file
    (lambda (port)
      (let loop ((source (read-line port)) (sink '()))
        (if (eof-object? source)
            sink
            (loop (read-line port) (if (re-matches? re source)
                                       (cons source sink)
                                       sink)))))))

Writing tests

Since I am not one of the 10% of programmers who can implement a correct binary search on paper, I started out by writing a test procedure. The test procedure grew over time as I found bugs and read an interesting discussion about the various edge cases a binary search procedure should handle. These include:

• Empty list
• List has one word
• List has two words
• Word is not there and “less than” anything in the list
• Word is not there and “greater than” anything in the list
• Word is first item
• Word is last item
• List is all one word
• If multiple copies of word are in list, return the first word found (this could be implemented to return the first or last duplicated word)

Furthermore, I added a few “sanity checks” that check the return values against known outputs. Here are the relevant procedures:

• assert= checks two numbers for equality and prints a result
• assert-equal? checks two Scheme objects against each other with equal? and prints a result
• run-binary-search-tests reads in words from a file and runs all of our tests
(define (assert= expected got #!optional noise)
  ;; Int Int -> IO
  (if (= expected got)
      (format #t "~A is ~A\t...ok~%" expected got)
      (format #t "~A is not ~A\t...FAIL~%" expected got)))

(define (assert-equal? expected got #!optional noise)
  ;; Thing Thing -> IO
  (if (equal? expected got)
      (format #t "~A is ~A\t...ok~%" expected got)
      (format #t "~A is not ~A\t...FAIL~%" expected got)))

(define (run-binary-search-tests)
  ;; -> IO
  "Run our binary search tests using known words from the 'words' file.
This file should be in the current working directory."
  (with-working-directory-pathname (pwd)
    (lambda ()
      (if (file-exists? "words")
          (begin
            (format #t "file 'words' exists, making a list...~%")
            (let* ((unsorted (make-list-of-words-matching "acc" "words"))
                   (sorted (sort unsorted string<?)))
              (format #t "doing binary searches...~%")
              (assert-equal? #f (binary-search "test" '()))         ; empty list
              (assert-equal? #f (binary-search "aardvark" sorted))  ; element absent and too small
              (assert-equal? #f (binary-search "zebra" sorted))     ; element absent and too large
              (assert= 0 (binary-search "accusive" '("accusive")))  ; list of length one
              (assert= 0 (binary-search "acca" sorted))             ; first element of list
              (assert= 1 (binary-search "aardvark" '("aardvark" "aardvark" "babylon"))) ; multiple copies of word in list
              (assert= 1 (binary-search "barbaric" '("accusive" "barbaric"))) ; list of length two
              (assert= 98 (binary-search "acclamator" sorted))
              (assert= 127 (binary-search "aardvark" (map (lambda (x) "aardvark") sorted))) ; list is all one value
              (assert= 143 (binary-search "accomplice" sorted))
              (assert= 254 (binary-search "accustomedly" sorted))
              (assert= 255 (binary-search "accustomedness" sorted)))))))) ; last element of list

The binary search procedure

Finally, here’s the binary search procedure; it uses a couple of helper procedures for clarity.

• ->int is a helper procedure that does a quick and dirty integer conversion on its argument
• split-difference takes a low and high number and returns the floor of the halfway point between the two
• binary-search takes an optional debug-print argument that I used a lot while debugging. The format statements and the optional argument tests add a lot of bulk – now that the procedure is debugged, they can probably be removed. (Aside: I wonder how much “elegant” code started out like this and was revised after sufficient initial testing and debugging?)
(define (->int n)
  ;; Number -> Int
  "Given a number N, return its integer representation.
N can be an integer or flonum (yes, it's quick and dirty)."
  (flo:floor->exact (exact->inexact n)))

(define (split-difference low high)
  ;; Int Int -> Int
  "Given two numbers, return their rough average."
  (if (= (- high low) 1)
      1
      (->int (/ (- high low) 2))))

(define (binary-search word xs #!optional debug-print)
  ;; String List -> Int
  "Do binary search of list XS for WORD. Return the index found, or #f."
  (if (null? xs)
      #f
      (let loop ((low 0) (high (- (length xs) 1)))
        (let* ((try (+ low (split-difference low high)))
               (word-at-try (list-ref xs try)))
          (cond
           ((string=? word-at-try word) try)
           ((< (- high low) 1) #f)
           ((= (- high try) 1)
            (if (string=? (list-ref xs low) word)
                low
                #f))
           ((string<? word-at-try word)
            (if (not (default-object? debug-print))
                (begin (format #t "(string<? ~A ~A) -> #t~%try: ~A high: ~A low: ~A ~2%"
                               word-at-try word try high low)
                       (loop (+ 1 try) high)) ; raise the bottom of the window
                (loop (+ 1 try) high)))
           ((string>? word-at-try word)
            (if (not (default-object? debug-print))
                (begin (format #t "(string>? ~A ~A) -> #t~%try: ~A high: ~A low: ~A ~2%"
                               word-at-try word try high low)
                       (loop low (+ 1 try))) ; lower the top of the window
                (loop low (+ 1 try))))
           (else #f))))))
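Here are the helpers and the search in action against a tiny list (my own quick checks, separate from the test suite above):

```scheme
(split-difference 0 9)  ; => 4, the floor of (9 - 0) / 2
(split-difference 3 4)  ; => 1, the special case for a window of width one

(binary-search "banana" '("apple" "banana" "cherry"))
;; => 1
(binary-search "durian" '("apple" "banana" "cherry"))
;; => #f
```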

Takeaways

This exercise has taught me a lot.

1. Writing correct code is hard. (I’m confident that this code is not correct.) You need to figure out your invariants and edge cases first. I didn’t, and it made things a lot harder.
2. It’s been said a million times, but tests are code. The tests required some debugging of their own.
3. Once they worked, the tests were extremely helpful. Especially now that I’m at the point where (if this were “for real”) additional features would need to be added, the format calls removed, the procedure speeded up, and so on.

I hope this has been useful to some other aspiring Scheme wizards out there. Happy Hacking!

(Image courtesy Melisande under Creative Commons license.)

SICP Exercises 1.10 through 1.13

Intro

In this post I’ll share some of the fun I had doing exercises 1.10 through 1.13 from Structure and Interpretation of Computer Programs. Remember that I’m working from the first edition, since it’s what I have. Fortunately it’s not too different from the second edition so far; I’ve written the exercises out so you don’t need the book to follow along.

Exercise 1.10

Draw the tree illustrating the process generated by the count-change procedure of section 1.2.2 in making change for 11 cents. What are the orders of growth of the space and time used by this process as the amount to be changed increases?

Having already drawn the tree out by hand for an earlier post, I noticed a lot of inefficient procedure calls, and sidetracked a little to implement a version called cc-faster which massively reduced their number. For example, when counting change for 600 cents (using all 5 types of coins), the number of procedure calls was reduced from 29,806,269 to 233,609. In other words, it made approximately 128 times fewer recursive calls for the same input! Even so, cc-faster still generates a tree-recursive process and is destined to blow up. Depending on your application it might be good enough though.

Space

But I digress. On p. 11 of the text, the authors state that

In general, the time required by a tree-recursive process will be proportional to the number of nodes in the tree, while the space required will be proportional to the maximum depth of the tree.

Therefore I’m happy to guesstimate the order of growth of the space used by (cc 11 5) to be somewhere in the neighborhood of O(n), where n is the amount to be changed. (Looking at the earlier post mentioned above, it looks like cc-faster is also O(n), but in the number of different kinds of coins.)

Time

Speaking of guesstimation, I decided to instrument the cc procedure in the following hackish manner:

;;; `C-CC': A VERSION OF `CC' THAT COUNTS HOW OFTEN IT CALLS ITSELF
;; Yes, a gross hack. But written quickly! :-)

(define *call-count* 0)

(define-syntax incf!
  (syntax-rules ()
    ((incf! var)
     (begin (set! var (+ 1 var))
            var))))

(define (c-cc amount kinds-of-coins)
  (begin (incf! *call-count*)
         (cond
          ;; ((= kinds-of-coins 1) 1) ; Uncomment this to greatly
          ;; lessen the # of procedure calls by using the `cc-faster'
          ;; algorithm.
          ((= amount 0) 1)
          ((or (< amount 0) (= kinds-of-coins 0)) 0)
          (else (+
                 (c-cc amount (- kinds-of-coins 1))
                 (c-cc (- amount (first-denomination kinds-of-coins))
                       kinds-of-coins))))))

(define (print-call-count-statistics n #!optional display-proc)
  (set! *call-count* 0)
  ;; Use the special version of `cc' above
  (c-cc n 5)
  (display *call-count*)
  (newline)
  (if (not (default-object? display-proc))
      (begin (display (apply display-proc (list *call-count*)))
             (newline)))
  (set! *call-count* 0))

This allowed me to do the following measurements:

;; CALL COUNT STATISTICS FOR THE ORIGINAL `CC' ALGORITHM
; (for-each (lambda (n) (print-call-count-statistics n)) '(50 100 150
; 200 250 300 350 400 450 500 550 600))
; Note: the number in the right column comes from dividing the previous
; value by the current.
;    1571
;   15499 0.101361378153
;   71775 0.215938697318
;  229589 0.312623862642
;  587331 0.390902234004
; 1292591 0.454382708838
; 2552159 ...
; 4642025
; 7917379
;12822611
;19901311
;29806269 0.667688767085
;Unspecified return value

I plotted the results of my measurements and tried to do a little curve-fitting by hand. I initially settled on x^e * ϕ, but it didn’t grow steeply enough to match cc‘s growth as you can see in the plot below. (This guess seemed reasonable at first because I only went up to 450 with cc.)

Then I started dividing the number of nodes in the call tree for input n by the number of nodes generated by the next input (see the right column in the code block above). That ratio appears to approach 1 asymptotically, which is what you’d expect from polynomial rather than strictly exponential growth (with all five kinds of coins, the usual answer is roughly O(n^5)). So what’s my final answer? Whatever you call it, cc blows up quickly. Don’t use it! :-)

Exercise 1.11

Design a procedure that evolves an iterative exponentiation process that uses successive squaring and works in logarithmic time, as does fast-exp.

Unlike Exercise 1.9, the authors walk you through how to do this exercise. Even with their patient description, I managed to implement a completely wrong solution. I finally figured out what I was doing wrong by going back and reading the text, and then modeling the procedure by hand on paper. It turns out I hadn’t understood their explanation after all! So I guess the late Prof. Dijkstra really was onto something.

While trying to debug it, I wrote fast-expt-tester, which showed me how hopelessly off the rails I was, but not necessarily how to fix it. Thank goodness for pen and paper, the ultimate debugging tools (at least for a student-level problem like this).

; Call it like so: (fast-expt-iter 7 7 1)
(define (fast-expt-iter b n a)
  (cond
   ((= n 0) a)
   ((even? n)
    (fast-expt-iter (square b) (/ n 2) a))
   (else (fast-expt-iter b (- n 1) (* b a)))))
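One way to convince yourself the procedure is right (my gloss, not the book’s): the quantity a * b^n is the same at every call, so when n reaches 0, a holds the answer. Tracing (fast-expt-iter 2 10 1):

```scheme
;; b     n    a       a * b^n
;; 2     10   1       1024
;; 4     5    1       1024
;; 4     4    4       1024
;; 16    2    4       1024
;; 256   1    4       1024
;; 256   0    1024    1024   <- n = 0, so return a

(fast-expt-iter 2 10 1)
;; => 1024
```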

(define (fast-expt-tester rounds)
  (let loop ((n rounds))
    (if (= n 0)
        (display "all done")
        (let* ((z1 (random 20))
               (z2 (random 20))
               (correct (expt z1 z2))
               (maybe-correct (fast-expt-iter z1 z2 1))
               (message (string-append
                         (number->string correct)
                         "\nis not equal to fast-expt-iter's answer\n"
                         (number->string maybe-correct)
                         "\nfor inputs: "
                         (number->string z1)
                         " "
                         (number->string z2)
                         "\n")))
          (if (not (= correct maybe-correct))
              (begin (display message)
                     (newline)))
          (loop (- n 1))))))

Exercise 1.12

… One can perform integer multiplication by means of repeated addition. … Now suppose we include, together with addition, the operations double, which doubles an integer, and halve, which divides an (even) integer by 2. Using these, design a multiplication procedure analogous to fast-exp that works in logarithmic time.

This was slightly simpler to understand than 1.11 (at least for me).

(define (halve n)
  (/ n 2))

(define (double n)
  (+ n n))

(define (rml/fast-* a b)
  (cond ((= b 0) 0)
        ((even? b)
         (rml/fast-* (double a) (halve b)))
        (else
         (+ a (rml/fast-* a (- b 1))))))

(* 123123123123 123123123123)
;Value: 15159303447561417273129

(rml/fast-* 123123123123 123123123123)
;Value: 15159303447561417273129

Exercise 1.13

Using the results of exercises 1.11 and 1.12, devise a procedure that generates an iterative process for multiplying two integers in terms of adding, doubling, and halving and works in logarithmic time.

I have to admit it, expressing iteration using recursive procedure definitions is great fun!

(define (rml/iter-* a b c)
  (cond ((= b 0) c)
        ((even? b)
         (rml/iter-* (double a) (halve b) c))
        (else
         (rml/iter-* a (- b 1) (+ a c)))))

(* 123123123123 123123123123)
;Value: 15159303447561417273129

(rml/fast-* 123123123123 123123123123)
;Value: 15159303447561417273129

(rml/iter-* 123123123123 123123123123 0)
;Value: 15159303447561417273129
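As with fast-expt-iter, there's an invariant that makes the iteration tick: (+ (* a b) c) never changes from one call to the next, so when b reaches 0, c holds the original product. Here's a self-contained hand trace of (rml/iter-* 6 7 0), with double and halve reproduced from above:

```scheme
(define (halve n) (/ n 2))
(define (double n) (+ n n))

(define (rml/iter-* a b c)
  (cond ((= b 0) c)
        ((even? b)
         (rml/iter-* (double a) (halve b) c))
        (else
         (rml/iter-* a (- b 1) (+ a c)))))

;; At every step, a*b + c = 6*7 = 42:
;;
;;   a    b   c     a*b + c
;;   6    7   0     42
;;   6    6   6     42
;;   12   3   6     42
;;   12   2   18    42
;;   24   1   18    42
;;   24   0   42    (return c)

(rml/iter-* 6 7 0)
;Value: 42
```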


An Iterative Change-Counting Procedure

… the process is really iterative: Its state is captured completely
by its three state variables, and an interpreter need keep track of
only three variables in order to execute the program.

– SICP (1st ed.), p.33

After much head-scratching and experimentation, I have implemented the following iterative change-counting procedure, which I call cc-linear (see also my previous posts on this problem):

(define (cc-linear amt kinds)
  (define (cc-linear-aux amount kinds-of-coins count stack)
    (cond ((or (= amount 0)
               (= kinds-of-coins 1))
           (if (null? stack)
               (+ count 1)
               (let* ((new-amt-pair (car stack))
                      (new-stack (cdr stack)))
                 (cc-linear-aux (car new-amt-pair)
                                (cdr new-amt-pair)
                                (+ 1 count)
                                new-stack))))
          ((or (< amount 0)
               (= kinds-of-coins 0))
           (if (null? stack)
               count
               (let ((new-amt-pair (car stack))
                     (new-stack (cdr stack)))
                 (cc-linear-aux (car new-amt-pair)
                                (cdr new-amt-pair)
                                count
                                new-stack))))
          (else (cc-linear-aux amount
                               (- kinds-of-coins 1)
                               count
                               (cons (cons (- amount (first-denomination kinds-of-coins))
                                           kinds-of-coins)
                                     stack)))))
  (cc-linear-aux amt kinds 0 '()))

Note that the definition of the first-denomination procedure is given in the text, as well as being downloadable from the SICP website.
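For readers who don't have the book at hand, the two procedures from the text look like this (reproduced from SICP; the denominations are 50, 25, 10, 5, and 1 cents):

```scheme
(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))

;; The tree-recursive change-counter from the text.
(define (cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc amount
                     (- kinds-of-coins 1))
                 (cc (- amount (first-denomination kinds-of-coins))
                     kinds-of-coins)))))

(cc 100 5)
;Value: 292
```

The book's answer of 292 ways to change a dollar makes a handy sanity check for cc-linear, too.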

This procedure returns the correct answer. There are a few issues with it, however:

• It uses cons, let*, car, cdr, and null?, none of which are available to the student at this point in the text.
• It simulates a stack by passing the list of delayed computations as an argument to subsequent invocations of itself. This means that it doesn’t generate a “linear iterative” process, since the amount of space used is not constant.

As for the use of car, cdr, and friends, I’m willing to let it pass since I still learned a lot by forcing myself to think in terms of iterating via recursive procedure calls.

As far as the shape of the process being generated is concerned, it can be described as “resembling linear recursion”. In the course of implementing this solution I wrote out the following simulation by hand of changing $0.12 using only two kinds of coins, a nickel and a penny.

Pre-visualizing the process to be generated by `cc-linear’

(12 2) 0 ()
( 2 2) 0 ((12 1))
(-3 2) 0 ((2 1) (12 1))
( 2 1) 0 ((12 1))
( 1 1) 0 ((2 0) (12 1))
( 0 1) 0 ((2 0) (12 1))
( 2 0) 1 ((12 1))
(12 1) 1 ()
(11 1) 1 ((12 0))
(10 1) 1 ((11 0) (12 0))
( 9 1) 1 ((10 0) (11 0) (12 0))
( 8 1) 1 (( 9 0) (10 0) (11 0) (12 0))
( 7 1) 1 (( 8 0) ( 9 0) (10 0) (11 0) (12 0))
( 6 1) 1 (( 7 0) ( 8 0) ( 9 0) (10 0) (11 0) (12 0))
( 5 1) 1 (( 6 0) ( 7 0) ( 8 0) ( 9 0) (10 0) (11 0) (12 0))
( 4 1) 1 (( 5 0) ( 6 0) ( 7 0) ( 8 0) ( 9 0) (10 0) (11 0) (12 0))
( 3 1) 1 (( 4 0) ( 5 0) ( 6 0) ( 7 0) ( 8 0) ( 9 0) (10 0) (11 0) (12 0))
( 2 1) 1 (( 3 0) ( 4 0) ( 5 0) ( 6 0) ( 7 0) ( 8 0) ( 9 0) (10 0) (11 0) (12 0))
( 1 1) 1 (( 2 0) ( 3 0) ( 4 0) ( 5 0) ( 6 0) ( 7 0) ( 8 0) ( 9 0) (10 0) (11 0) (12 0))
( 0 1) 1 (( 1 0) ( 2 0) ( 3 0) ( 4 0) ( 5 0) ( 6 0) ( 7 0) ( 8 0) ( 9 0) (10 0) (11 0) (12 0))
( 1 0) 2 (( 2 0) ( 3 0) ( 4 0) ( 5 0) ( 6 0) ( 7 0) ( 8 0) ( 9 0) (10 0) (11 0) (12 0))

Et cetera…

As you can see, the stack continues to grow, and passing that huge association list in as a procedure argument is not without cost, as the following performance analysis shows (the definitions of with-timing-output and cc-faster are given below):

Performance analysis of `cc’ vs. `cc-faster’ vs. `cc-linear’ – `cc-linear’ spends a fair amount of time in garbage collection, presumably due to all of the list manipulation.

(with-timing-output (cc 292 5))
Run time: 3.42
GC time: .03
Actual time: 3.478
;Value: 8526

(with-timing-output (cc-faster 292 5))
Run time: .05
GC time: 0.
Actual time: .047
;Value: 8526

(with-timing-output (cc-linear 292 5))
Run time: .11
GC time: .04
Actual time: .15
;Value: 8526

Attempting to Count Change, Iteratively

I have discovered what I believe is a nice optimization for the tree-recursive change-counting procedure, cc, mentioned in a previous post. (In the first edition, it’s Exercise 1.9.)

Unfortunately I’m still struggling with the real problem: how to implement a procedure that evolves an iterative process, as requested by the book. My attempts to recast the computation using only variables passed as procedure arguments are failing. These attempts always need more “registers” than I have available, and I start thinking in terms of some kind of “storage” where I can stash the intermediate problems that I can’t work on yet.

Of course, these “registers” are arguments to the function, and this “storage” is also known as the stack. And growing the stack is what we’re trying to avoid.

Scanning ahead in the text, Exercise 1.10 asks the reader to

Draw the tree illustrating the process generated by the (cc) procedure … making change for 11 cents.

Here’s what appeared in my notebook. Note that the procedure calls surrounded by boxes are the ones that return a value of 1. In this case, the return value of (cc 11 3) is 4, shown by the 4 boxed procedure calls.

While drawing out this tree, I noticed that once the procedure tries to make change for an amount using only pennies, it makes (* 2 amount) unnecessary procedure calls. It’s easy to see that they just bang down the list from amount to 0 and finally return 1. My idea was to reformulate the procedure so that we perform this check first, like so:

(define (cc-faster amount kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1) ; Are we making change with pennies? If so, there's only one way to do it!
        ((= amount 0) 1)
        ((or (< amount 0)
             (= kinds-of-coins 0))
         0)
        (else (+ (cc-faster amount (- kinds-of-coins 1))
                 (cc-faster (- amount (first-denomination kinds-of-coins))
                            kinds-of-coins)))))

Calling this procedure results in the following tree of procedure calls. Note that instead of 55 procedure calls, we now make only 15! And the discrepancy only grows wider as we count larger amounts of change, prompting me to make the following measurements using MIT Scheme – the with-timing-output macro is described at the bottom:

(with-timing-output (cc 700 5))
; Run time:     75.81
; GC time:      .34
; Actual time:  76.641
;Value: 207530

(with-timing-output (cc-faster 700 5))
; Run time:     .56
; GC time:      .02
; Actual time:  .608
;Value: 207530

This reduces running time from over a minute to less than a second, which is kind of exciting! It’s still not an iterative process, though.
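As a quick sanity check that the penny shortcut doesn't change any answers, here's a self-contained block (with first-denomination reproduced from the text) checking cc-faster against two known values: (cc 11 3) is 4, as shown in the tree above, and the book reports 292 ways to change a dollar:

```scheme
(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))

(define (cc-faster amount kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1) ; pennies only: exactly one way
        ((= amount 0) 1)
        ((or (< amount 0)
             (= kinds-of-coins 0))
         0)
        (else (+ (cc-faster amount (- kinds-of-coins 1))
                 (cc-faster (- amount (first-denomination kinds-of-coins))
                            kinds-of-coins)))))

(cc-faster 11 3)
;Value: 4

(cc-faster 100 5)
;Value: 292
```

The shortcut is safe because the kinds-of-coins = 1 clause can only be reached with a positive amount, where "all pennies" really is the single way to make change.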

Just for completeness, here’s how long the same computation takes using m-cc, the memoizing version of cc described previously. Obviously looking stuff up in memory is faster than computing it over and over again:

(with-timing-output (m-cc 700 5))
; Run time:     0.
; GC time:      0.
; Actual time:  0.
;Value: 207530

And here’s the definition of the with-timing-output macro. It’s just sugar on top of the with-timings measurement procedure built into MIT Scheme:

(define-syntax with-timing-output
  (syntax-rules ()
    ((with-timing-output body)
     (with-timings
      (lambda () body)
      (lambda (run-time gc-time real-time)
        (display "Run time:\t")
        (write (internal-time/ticks->seconds run-time))
        (newline)
        (display "GC time:\t")
        (write (internal-time/ticks->seconds gc-time))
        (newline)
        (display "Actual time:\t")
        (write (internal-time/ticks->seconds real-time))
        (newline))))))

Next step: ITERATION!

Edwin `dired-do-shell-command’ on files

Evaluate this code in your Edwin REPL and you’re one step closer to being able to use Edwin as your primary file manager. I’ve reimplemented the Emacs `dired-do-shell-command’ function as an Edwin command (note that the first line below puts these definitions in the `(edwin dired)’ environment):

(ge '(edwin dired))

(define (string-join xs)
  (list->string (concatenate
                 (map (lambda (s) (string->list s)) xs))))

(define (shell-command-prompt prompt)
  (prompt-for-string prompt #f
                     'DEFAULT-TYPE 'INSERTED-DEFAULT
                     'HISTORY 'SHELL-COMMAND))

(define (pathname->string pathname)
  (uri->string (pathname->uri pathname)))

(define-command dired-do-shell-command
  "Run a shell command on the file or directory at the current point."
  (lambda ()
    (list (shell-command-prompt "Shell command on file: ")
          (command-argument)))
  (lambda (command pathname)
    ((ref-command shell-command)
     (string-join
      (list command " "
            (pathname->string (dired-current-pathname)) " &"))
     #f)))

(define-key 'dired #\! 'dired-do-shell-command)

Edwin Support for External Schemes

I’m pleased to announce an alpha release of support for using Edwin to interact with external Schemes. This is similar to the way Emacs can interact with REPLs from languages other than Emacs Lisp.

The code is available at http://github.com/rmloveland/edwin-external-scheme. The repository contains two Edwin modes, “External Scheme mode” and “External Scheme REPL mode”.

Usage

“External Scheme REPL mode” allows you to interact with an external Scheme REPL’s process from Edwin in the same way you would interact with a shell using `#\M-x shell’. Load the file `external-scheme-repl.scm’ and enter the command `#\M-x external-scheme-repl’. You’ll be asked to enter the name of the external Scheme you’d like to run, and away you go.

External Scheme mode inherits from Edwin’s Scheme mode for all of its formatting and editing commands, but provides its own commands for sending expressions to a running external Scheme REPL, if one exists.

Load the file `external-scheme-mode.scm’ and enter the command `#\M-x external-scheme-mode’ in a buffer containing Scheme code that you want to send to the external Scheme. Right now you can run only one external Scheme REPL, so be sure that the code you’re sending is going to be understood by that Scheme. It’s a simple matter of programming to extend this to multiple external Scheme buffers if you care to.

Caveats

Right now you may run only one external Scheme REPL at a time. Any Scheme buffers in External Scheme mode will send their eval’d code to that REPL.

Note also that files containing Scheme code are automatically opened by Edwin in its own Scheme mode, no matter what Scheme they’re written in, so you’ll need to do `#\M-x external-scheme-mode’.

Motivation

Finally, why bother? Isn’t MIT Scheme good enough? The answer is yes: it’s great. However, I often write scripts using Scheme Shell due to its tight integration with UNIX and excellent process notation. I could already write these programs in Emacs, but Edwin is my preferred editor.

Trading Space for Time

Working through SICP for the first time. Found a used first edition (hardcover, no less) on eBay. Some misguided person had marked each of the covers up with a giant X in black marker.

The first edition differs from the second in a few ways. For example, at the end of the tree-recursive change-counting problem on pages 39-40 of the second edition, the authors say “it is not obvious how to design a better algorithm for computing the result, and we leave this problem as a challenge”. They also note that memoization can be used to reduce the number of redundant computations that occur.

In other words, we can trade space for time.

In the first edition, Exercise 1.9 asks you to design a procedure that evolves an iterative process for solving the change-counting problem. I’m working on a solution to that problem; in the meantime, I implemented a procedure that I call memoizing-apply which does the following:

1. See if an answer has already been computed and stored in a table *mr* (short for `memoized-results’).
2. If the answer hasn’t already been computed, compute it. Then stick the answer in the table *mr*.

This process is known as memoization or tabulation, and *mr* serves as a lookup table.

We define a table *mr*, and a procedure memoized? that returns the previously computed answer if the procedure has already been called with the given arguments, and #f otherwise. This assumes that the procedure always returns the same answer when you give it the same arguments.

(define *mr* '())

(define (memoized? args)
  (let ((result (assoc args *mr*)))
    (if result
        (cdr result)
        #f)))

Memoizing-apply does what we described above: it checks if the answer has already been computed (is already in the lookup table), and if so it just returns the answer. If not, it computes the answer and sticks it in the table.

(define (memoizing-apply f xs)
  (or (memoized? xs)
      (let ((result (apply f xs)))
        (push! (cons xs result) *mr*)
        result)))

Note that push! is not a standard Scheme procedure; here’s the definition:

(define-syntax push!
  (syntax-rules ()
    ((push! item seq)
     (set! seq (cons item seq)))))
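Because push! is a macro rather than a procedure, it can mutate the variable you hand it. For example:

```scheme
(define-syntax push!
  (syntax-rules ()
    ((push! item seq)
     (set! seq (cons item seq)))))

(define stack '())
(push! 'a stack)
(push! 'b stack)

stack
;Value: (b a)
```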

Let’s try it out. (Note that you can get the definitions for the procedures cc and first-denomination from the freely available copy of the book at the SICP website.)

Here’s my modified version of cc, the change-counting procedure. I call it m-cc, and it uses the memoizing-apply procedure above to avoid doing redundant work:

(define (m-cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (memoizing-apply m-cc (list amount (- kinds-of-coins 1)))
                 (memoizing-apply m-cc (list (- amount (first-denomination kinds-of-coins))
                                             kinds-of-coins))))))

The above code is different from the original in that it uses memoizing-apply to make the recursive call to itself. In my testing this is much faster than the equivalent cc version.

The memoization technique works very nicely, but it’s only appropriate in situations where memory is freely available. On most computers, most of the time, it’s probably the best way to go, since the tree-recursive implementation is readable and easy to understand. The solution to the exercise that asks for an iterative process will likely be more complicated, but I’m not sure yet.
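For convenience, here are all of the pieces above gathered into one self-contained block (with first-denomination reproduced from the text), checked against the book's answer of 292 ways to change a dollar:

```scheme
(define-syntax push!
  (syntax-rules ()
    ((push! item seq)
     (set! seq (cons item seq)))))

(define *mr* '())  ; the lookup table of memoized results

(define (memoized? args)
  (let ((result (assoc args *mr*)))
    (if result
        (cdr result)
        #f)))

(define (memoizing-apply f xs)
  (or (memoized? xs)
      (let ((result (apply f xs)))
        (push! (cons xs result) *mr*)
        result)))

(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))

(define (m-cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (memoizing-apply m-cc (list amount (- kinds-of-coins 1)))
                 (memoizing-apply m-cc (list (- amount (first-denomination kinds-of-coins))
                                             kinds-of-coins))))))

(m-cc 100 5)
;Value: 292
```

One pleasant detail: in Scheme only #f is false, so the (or ...) in memoizing-apply works even when a cached result is 0.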