In my most recent dream, I was arguing in a classroom about the "right" way to understand repeated squaring. This was the argument that I wanted to make, but failed to express clearly in my dream.

I wrote most of the algorithmist article on repeated squaring a while ago. The way I used to understand repeated squaring is explained by example in that write-up. It basically comes down to writing the exponent in binary, computing the appropriate term for each position in the binary expansion of the exponent by repeatedly squaring the previous one, and then finally multiplying together all the terms where a bit is set. This works and provides some insight into how the algorithm works.
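That iterative, binary-expansion view can be sketched in a few lines of Python (the function name is my own choice):

```python
def power_iterative(x, y):
    """Compute x**y by scanning the bits of y, accumulating x**(2**i)
    for each set bit via repeated squaring."""
    result = 1
    square = x          # invariant: square == x**(2**i) at bit position i
    while y > 0:
        if y & 1:       # bit i of the exponent is set: fold in this term
            result *= square
        square *= square  # advance to x**(2**(i+1))
        y >>= 1
    return result
```

Note how much bookkeeping the loop carries: the invariant on `square`, the bit shifting, and the accumulation all have to be tracked at once.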

However, this is the wrong way to understand the algorithm.

The right way is expressed beautifully by this equation:

x^y = (x^(y/2))^2 * x^(y mod 2)

What does the first part of the equation say?

Well, it says to divide the problem into a subproblem of approximately half the size, which can be solved recursively. The division here is C-style truncating integer division. Squaring then uses the answer to the subproblem to solve the larger problem. The last part of the equation simply handles the case where the division truncated the exponent and we would otherwise have lost a factor of the base.
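The recursive view translates almost word for word into code; here is a minimal Python sketch (again, the name is my own):

```python
def power(x, y):
    """Recursive repeated squaring: x**y = (x**(y//2))**2 * x**(y % 2)."""
    if y == 0:
        return 1
    half = power(x, y // 2)  # solve the half-size subproblem
    result = half * half     # squaring doubles the exponent
    if y % 2:                # division truncated: restore the lost factor of x
        result *= x
    return result
```

Each level of recursion is just the equation itself: one square, plus at most one correction multiply.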

On a technical note, from either view of the algorithm, it's pretty easy to see that the total number of multiplications required is no worse than 2 lg(y).

What is the key difference between the two views of the algorithm? The first is iterative: understanding the computation involves understanding every minute detail of the accumulation loop and its relation to the binary expansion of the exponent. The second view uses recursion to leverage abstraction: instead of understanding the whole process, you only need to understand how to make the problem smaller and how to put the pieces together at a high level. There is a real parallel here between iterative and functional programming. Ironically, I am coming out on the functional programming side.

So why is the second view better than the first? The higher-level, more abstract view is much easier to generalize. Instead of computing x^y efficiently, let's consider the problem of computing a^b * x^y. Let n = max(b, y). This can be done straightforwardly with two applications of the repeated squaring algorithm in 4 lg(n) + k multiplications, for some constant k. However, a modified algorithm can solve the problem in 2 lg(n) + k multiplications. Credit for this problem goes to my sister's "Reasoning about Computation" class. It's the best class I never took. Sigh, sometimes I wish I did CS at Princeton.

How do you do it?

Here is some empty space in case you want to try to figure it out yourself. Hint: Try to use divide and conquer.

Well, to compute a^b * x^y, first compute a^(b/2) * x^(y/2) recursively. Now square it. There are just four cases, depending on the truncation of b and y. If b and y were both truncated, we need an additional factor of a * x, which can be precomputed ahead of time so that it costs only a single multiplication at this recursive step. Otherwise, if only b was truncated, we just need an additional factor of a. Likewise, if only y was truncated, we need an additional factor of x. Otherwise, no additional multiplications are needed. Simple, elegant, clever. Gotta love it.
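The four cases above fit naturally into the same recursive skeleton; here is a Python sketch (names are my own, and `a * x` is precomputed once up front):

```python
def dual_power(a, b, x, y):
    """Compute a**b * x**y with a single divide-and-conquer recursion.

    The product a*x is precomputed once, so every recursive level costs
    at most two multiplications: one square plus one correction.
    """
    ax = a * x  # precomputed: covers the case where both exponents truncate

    def rec(b, y):
        if b == 0 and y == 0:
            return 1
        half = rec(b // 2, y // 2)  # solve the half-size subproblem
        result = half * half        # squaring doubles both exponents
        if b % 2 and y % 2:
            result *= ax            # both divisions truncated
        elif b % 2:
            result *= a             # only b was truncated
        elif y % 2:
            result *= x             # only y was truncated
        return result

    return rec(b, y)
```

The recursion bottoms out after lg(n) levels, giving the promised 2 lg(n) + k multiplication bound.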

Exercise for the readers: Further generalize the algorithm. Show how to compute a_1^b_1 * a_2^b_2 * ... * a_j^b_j in 2 lg(n) + 2^j + k multiplications, where n = max(b_1, b_2, ... b_j) and k is a constant.

I spent quite a bit of time trying to solve this problem, mostly stuck because I was using the first understanding of the algorithm and couldn't see how to leverage the binary expansions of both exponents simultaneously.