
SICP Exercises 1.10 through 1.13



In this post I’ll share some of the fun I had doing exercises 1.10 through 1.13 from Structure and Interpretation of Computer Programs. Remember that I’m working from the first edition, since it’s what I have. Fortunately it’s not too different from the second edition so far; I’ve written the exercises out so you don’t need the book to follow along.

Exercise 1.10

Draw the tree illustrating the process generated by the count-change procedure of section 1.2.2 in making change for 11 cents. What are the orders of growth of the space and time used by this process as the amount to be changed increases?

Having already drawn the tree out by hand for an earlier post, I noticed a lot of inefficient procedure calls, and sidetracked a little to implement a version called cc-faster which massively reduced their number. For example, when counting change for 600 cents (using all 5 types of coins), the number of procedure calls was reduced from 29,806,269 to 233,609. In other words, it made approximately 128 times fewer recursive calls for the same input! Even so, cc-faster still generates a tree-recursive process and is destined to blow up. Depending on your application it might be good enough though.
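cc-faster itself didn’t make it into this post, but the whole trick fits in one line, so here’s a sketch of the idea (my reconstruction for this post; first-denomination is the book’s helper, reproduced for self-containment): when only one kind of coin remains there is exactly one way to make change — all pennies — so we can answer immediately instead of recursing all the way down to amount 0.

```scheme
;; The book's `first-denomination' helper, included so this runs standalone.
(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))

;; Sketch of `cc-faster': the book's `cc' plus one extra base case.
(define (cc-faster amount kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)  ; the shortcut: pennies only => 1 way
        ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc-faster amount (- kinds-of-coins 1))
                 (cc-faster (- amount (first-denomination kinds-of-coins))
                            kinds-of-coins)))))

(cc-faster 100 5)
;Value: 292
```

The shortcut is safe because the only way to reach kinds-of-coins = 1 is through the else branch, where amount is already known to be positive.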


But I digress. On p. 11 of the text, the authors state that

In general, the time required by a tree-recursive process will be proportional to the number of nodes in the tree, while the space required will be proportional to the maximum depth of the tree.

Therefore I’m happy to guesstimate the order of growth of the space used by (cc n 5) to be somewhere in the neighborhood of O(n), where n is the amount to be changed. (Looking at the earlier post mentioned above, it looks like cc-faster is also O(n), but in the number of different kinds of coins.)


Speaking of guesstimation, I decided to instrument the cc procedure in the following hackish manner:

;; Yes, a gross hack. But written quickly! :-)

(define *call-count* 0)

(define-syntax incf!
  (syntax-rules ()
    ((incf! var)
     (begin (set! var (+ 1 var))
            var))))

;; A copy of `cc' instrumented to count procedure calls.
(define (c-cc amount kinds-of-coins)
  (incf! *call-count*)
  (cond
   ;; ((= kinds-of-coins 1) 1) ; Uncomment this to greatly
   ;; lessen the # of procedure calls by using the `cc-faster'
   ;; algorithm.
   ((= amount 0) 1)
   ((or (< amount 0) (= kinds-of-coins 0)) 0)
   (else (+
          (c-cc amount (- kinds-of-coins 1))
          (c-cc (- amount (first-denomination kinds-of-coins))
                kinds-of-coins)))))

(define (print-call-count-statistics n #!optional display-proc)
  (set! *call-count* 0)
  ;; Use the instrumented version of `cc' above
  (c-cc n 5)
  (display *call-count*)
  (if (not (default-object? display-proc))
      (display (display-proc *call-count*)))
  (set! *call-count* 0))

This allowed me to do the following measurements:

; (for-each (lambda (n) (print-call-count-statistics n)) '(50 100 150
; 200 250 300 350 400 450 500 550 600))
; Note: each number in the right column is the previous call count
; divided by the current one.
;    1571
;   15499 0.101361378153
;   71775 0.215938697318
;  229589 0.312623862642
;  587331 0.390902234004
; 1292591 0.454382708838
; 2552159 ...
; 4642025
; 7917379
;29806269 0.667688767085
;Unspecified return value

I plotted the results of my measurements and tried to do a little curve-fitting by hand. I initially settled on x^e * ϕ, but it didn’t grow steeply enough to match cc’s growth, as you can see in the plot below. (This guess seemed reasonable at first because I only went up to 450 with cc.)

Then I started dividing the number of nodes in the call tree for input n by the number of nodes generated for the next input (see the right column in the code block above). At first glance it looks like this ratio will asymptotically approach 1.
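If you want to reproduce that column without a spreadsheet, a few lines of Scheme will do it (the counts are the measured values above; successive-ratios is just a throwaway name):

```scheme
;; Each ratio is the previous call count divided by the current one.
(define counts
  '(1571 15499 71775 229589 587331 1292591 2552159 4642025 7917379))

(define (successive-ratios xs)
  (if (null? (cdr xs))
      '()
      (cons (exact->inexact (/ (car xs) (cadr xs)))
            (successive-ratios (cdr xs)))))

(successive-ratios counts)
;; The first two ratios match the table above: ~.10136 and ~.21594.
```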


So what’s my final answer? A ratio that creeps toward 1 like this is what a polynomial produces, not an exponential — and indeed cc’s running time is Θ(n^5) when all five kinds of coins are in play. Polynomial, but painfully slow. Don’t use it! :-)

Exercise 1.11

Design a procedure that evolves an iterative exponentiation process that uses successive squaring and works in logarithmic time, as does fast-exp.

Unlike Exercise 1.9, the authors walk you through how to do this exercise. Even with their patient description, I managed to implement a completely wrong solution. I finally figured out what I was doing wrong by going back and reading the text, and then modeling the procedure by hand on paper. It turns out I hadn’t understood their explanation after all! So I guess the late Prof. Dijkstra really was onto something.

While trying to debug it, I wrote fast-expt-tester, which showed me how hopelessly off the rails I was, but not necessarily how to fix it. Thank goodness for pen and paper, the ultimate debugging tools (at least for a student-level problem like this).

; Call it like so: (fast-expt-iter 7 7 1)
(define (fast-expt-iter b n a)
  (cond ((= n 0) a)
        ((even? n)
         (fast-expt-iter (square b) (/ n 2) a))
        (else (fast-expt-iter b (- n 1) (* b a)))))
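What finally made the procedure click for me on paper is its invariant: a * b^n is the same at every call, and equals the answer once n hits 0. Here’s the hand trace for (fast-expt-iter 2 10 1):

```scheme
;; Invariant: a * b^n is constant across calls.
;; (fast-expt-iter 2   10 1)    ; 1 * 2^10   = 1024
;; (fast-expt-iter 4    5 1)    ; 1 * 4^5    = 1024  (n even: square b, halve n)
;; (fast-expt-iter 4    4 4)    ; 4 * 4^4    = 1024  (n odd: fold one b into a)
;; (fast-expt-iter 16   2 4)    ; 4 * 16^2   = 1024
;; (fast-expt-iter 256  1 4)    ; 4 * 256^1  = 1024
;; (fast-expt-iter 256  0 1024) ; n = 0, return a = 1024
```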

(define (fast-expt-tester rounds)
  (let loop ((n rounds))
    (if (= n 0)
        (display "all done")
        (let* ((z1 (random 20))
               (z2 (random 20))
               (correct (expt z1 z2))
               (maybe-correct (fast-expt-iter z1 z2 1))
               (message (string-append
                         "expt's answer\n"
                         (number->string correct)
                         "\nis not equal to fast-expt-iter's answer\n"
                         (number->string maybe-correct)
                         "\nfor inputs: "
                         (number->string z1)
                         " "
                         (number->string z2)
                         "\n")))
          (if (not (= correct maybe-correct))
              (display message))
          (loop (- n 1))))))

Exercise 1.12

… One can perform integer multiplication by means of repeated addition. … Now suppose we include, together with addition, the operations double, which doubles an integer, and halve, which divides an (even) integer by 2. Using these, design a multiplication procedure analogous to fast-exp that works in logarithmic time.

This was slightly simpler to understand than 1.11 (at least for me).

(define (halve n)
  (/ n 2))

(define (double n)
  (+ n n))

(define (rml/fast-* a b)
  (cond ((= b 0) 0)
        ((even? b)
         (rml/fast-* (double a) (halve b)))
        (else
         (+ a (rml/fast-* a (- b 1))))))

(* 123123123123 123123123123)
;Value: 15159303447561417273129

(rml/fast-* 123123123123 123123123123)
;Value: 15159303447561417273129

Exercise 1.13

Using the results of exercises 1.11 and 1.12, devise a procedure that generates an iterative process for multiplying two integers in terms of adding, doubling, and halving and works in logarithmic time.

I have to admit it, expressing iteration using recursive procedure definitions is great fun!

(define (rml/iter-* a b c)
  (cond ((= b 0) c)
        ((even? b)
         (rml/iter-* (double a) (halve b) c))
        (else
         (rml/iter-* a (- b 1) (+ a c)))))

(* 123123123123 123123123123)
;Value: 15159303447561417273129

(rml/fast-* 123123123123 123123123123)
;Value: 15159303447561417273129

(rml/iter-* 123123123123 123123123123 0)
;Value: 15159303447561417273129
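The invariant here is the same trick from 1.11 in additive clothing: a*b + c never changes, and it equals the product once b reaches 0. For example, tracing (rml/iter-* 3 5 0) by hand:

```scheme
;; Invariant: a*b + c is constant across calls.
;; (rml/iter-* 3  5 0)  ; 3*5  + 0  = 15
;; (rml/iter-* 3  4 3)  ; 3*4  + 3  = 15  (b odd: move one a into c)
;; (rml/iter-* 6  2 3)  ; 6*2  + 3  = 15  (b even: double a, halve b)
;; (rml/iter-* 12 1 3)  ; 12*1 + 3  = 15
;; (rml/iter-* 12 0 15) ; b = 0, return c = 15
```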

(Origami image courtesy Melisande under Creative Commons license.)